I'm looking for some projects using Protobuf to use as a reference for implementing complex protocols. Netty examples are preferred but not required.
ActiveMQ uses Protobuf for its underlying KahaDB message store.
The LocalTime example that ships with Netty helped me a lot in understanding how to use Protobuf; try learning from it.
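In case it helps, the core of that example is the channel pipeline setup. A minimal sketch of the client-side pipeline (the LocalTimeProtocol message class and the handler are placeholders generated/written in the example, not something this sketch provides):

import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.protobuf.ProtobufDecoder;
import io.netty.handler.codec.protobuf.ProtobufEncoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32FrameDecoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32LengthFieldPrepender;

// Frames the byte stream with varint length prefixes, then lets the Protobuf
// codecs handle (de)serialization before the message reaches your handler.
public class LocalTimeClientInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ChannelPipeline p = ch.pipeline();
        p.addLast(new ProtobufVarint32FrameDecoder());
        p.addLast(new ProtobufDecoder(LocalTimeProtocol.LocalTimes.getDefaultInstance()));
        p.addLast(new ProtobufVarint32LengthFieldPrepender());
        p.addLast(new ProtobufEncoder());
        p.addLast(new LocalTimeClientHandler()); // application-level handler
    }
}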
I am trying to find out how to use SpEL (or any other expression language) in my Quarkus applications to handle basic object transformation via injected configuration (and run on Graal).
I am not sure what, if anything, is possible here, and I can't find much in the way of docs or how-tos.
I see that this could possibly be done via the Apache Camel extension, but right now that would be overkill for this requirement.
Any pointers or guidance would be appreciated.
Thanks.
Not much - I can't find docs, examples, or how-tos for this anywhere.
This is not supported in core Quarkus.
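That said, SpEL itself is just a library, so one option is to pull in the standalone spring-expression artifact and call it directly from your own bean. A rough sketch under that assumption (the class and the injected expression are made up for illustration, and running it under GraalVM native image would likely need reflection configuration for the types the expressions touch):

import org.springframework.expression.Expression;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.expression.spel.support.StandardEvaluationContext;

// Evaluates an expression taken from configuration against an input object,
// e.g. expression = "firstName + ' ' + lastName.toUpperCase()".
public class ConfiguredTransformer {

    private final ExpressionParser parser = new SpelExpressionParser();

    public Object transform(String expression, Object input) {
        Expression exp = parser.parseExpression(expression);
        return exp.getValue(new StandardEvaluationContext(input));
    }
}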
Does anyone know how to use Apache Avro RPC with Spring Boot? Every single Avro RPC implementation I have seen is hosted on a Netty server.
You might be trying to achieve the same thing I am -- speeding up JSON serialization with Spring Boot and Spring Web. Or maybe you just want to use Avro? My comment is a little late, since it's months after you posted, but I have run across this information about using an Avro message converter, so I thought I would share it in case it helps:
https://docs.spring.io/spring-cloud-stream/docs/Brooklyn.RELEASE/reference/html/contenttypemanagement.html#_schema_based_message_converters
Or did you already find it? If so, can you share whatever solution you came up with? Our REST JSON serialization takes much longer than the rest of the operation, and I would like to speed it up as much as possible.
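For reference, the approach described in those docs boils down to registering an Avro message converter bean. A rough sketch, assuming the spring-cloud-stream-schema dependency is on the classpath (the MIME type here is the one the docs use; adjust as needed):

import org.springframework.cloud.stream.schema.avro.AvroSchemaMessageConverter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.util.MimeType;

// Registers a converter so payloads with the given content type are
// (de)serialized with Avro instead of the default JSON converter.
@Configuration
public class AvroConverterConfig {

    @Bean
    public MessageConverter avroMessageConverter() {
        return new AvroSchemaMessageConverter(MimeType.valueOf("avro/bytes"));
    }
}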
I'm writing an application that uses Apache Spark. For communicating with a client, I would like to use gRPC.
In my Gradle build file, I use
dependencies {
    compile 'org.apache.spark:spark-core_2.11:1.5.2'
    compile 'org.apache.spark:spark-sql_2.11:1.5.2'
    compile 'io.grpc:grpc-all:0.13.1'
    ...
}
When I leave out gRPC, everything works fine. However, when gRPC is included, I can create the build but not execute it, because the packages pull in different versions of Netty. Spark seems to use netty-all, which contains the same classes and methods (but with potentially different signatures) as the Netty artifacts gRPC uses.
I tried shading (using com.github.johnrengelman.shadow), but somehow it still does not work. How can I approach this problem?
The general solution to this sort of thing is shading with relocation. See the answer to a similar problem with protobuf dependencies: https://groups.google.com/forum/#!topic/grpc-io/ABwMhW9bU34
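A rough sketch of what that relocation could look like in the Gradle build with the Shadow plugin (the relocated package prefix and plugin version are placeholders, not a verified build):

// build.gradle (sketch)
plugins {
    id 'com.github.johnrengelman.shadow' version '1.2.3'
}

shadowJar {
    // Move gRPC's Netty and protobuf classes into a private package so they
    // cannot clash with the netty-all that Spark already puts on the classpath.
    relocate 'io.netty', 'myapp.shaded.io.netty'
    relocate 'com.google.protobuf', 'myapp.shaded.com.google.protobuf'
}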
I think the problem is that Spark uses Netty 4.0.x while gRPC uses Netty 4.1.0.
I want my Swagger output to have support for Protobuf and WebSockets. (Protobuf is the most important.)
Based on this issue, I don't think that Swagger can support that. My goal is to let end users see the value of the Protobuf format, since they all keep asking me to use JSON instead.
I would contribute myself, but I'm unfamiliar with the Swagger project at that level.
Question
Is there any way to get Swagger to support Protobuf or WebSockets?
We have recently been thinking about integrating our J2EE system with other applications written in Python/Perl. Our application integrates very nicely with other Java systems over JMS. Is it possible for non-Java systems to receive Serializable messages and do some modification on them (at some level, every class property is a Java primitive type)? We would also like to do it in the other direction, e.g. a Python application constructs an object which is then sent over JMS and modified (or at least understood) by our Java app. Do you have any experience in this topic / hints for us?
Thanks in advance,
Piotr
You don't want to use Serializable objects for this. You'll need a more portable format, such as a text-based format like XML, JSON, or CSV. It's simply not worth the effort to try to read serialized Java objects on other platforms.
Now, you could use another binary format, such as Google's format (Protocol Buffers, I think it's called). You could also change your Java classes, specifically the ones that you plan to exchange, to implement the Externalizable interface. This lets you have full control over the reading and writing of your Java classes. That way you can still use the Java serialization protocol and workflow, but write and read a more portable format.
This lets you incrementally add support for the Python system without really disturbing the rest of your system, especially for messaging, as long as there are no legacy messages left in your queues when you switch over.
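To make the Externalizable suggestion concrete, here is a minimal sketch; the class and fields are invented for illustration, and note that the stream still starts with the standard Java serialization header, so the non-Java side has to skip or tolerate that framing:

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

// writeExternal/readExternal give full control, so the fields can be emitted
// in a fixed, documented order that a Python or Perl consumer could parse,
// instead of the default Java serialization object graph.
public class OrderMessage implements Externalizable {

    private long id;
    private String customer;
    private double amount;

    public OrderMessage() { }          // required public no-arg constructor

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeLong(id);             // 8 bytes, big-endian
        out.writeUTF(customer);        // length-prefixed modified UTF-8
        out.writeDouble(amount);       // 8 bytes, IEEE 754
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        id = in.readLong();
        customer = in.readUTF();
        amount = in.readDouble();
    }
}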