Does anyone have a good comparison between the two technologies, Thrift vs. JMS, for messaging services in Java? Thanks!
Thrift is a serialization/RPC framework, whereas JMS is a full-featured messaging API, so they don't compare directly.
JMS uses standard Java object serialization for ObjectMessages. It's much slower than Thrift and can't be deserialized outside Java. However, it's possible to serialize objects with any other framework and send them as a BytesMessage. For example, ActiveMQ encourages the use of Protocol Buffers for this task, but it's possible to use any other framework, such as Thrift, Avro, Kryo or Smile.
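For illustration, here is a minimal sketch of that approach, assuming an ActiveMQ broker at tcp://localhost:61616, a queue named "events", and a hypothetical Thrift-generated class Event; any other serialization framework would slot in the same way.

```java
import javax.jms.BytesMessage;
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.thrift.TSerializer;
import org.apache.thrift.protocol.TBinaryProtocol;

public class ThriftOverJms {

    // Event is a hypothetical Thrift-generated class (from your .thrift IDL)
    public static void sendEvent(Event event) throws Exception {
        // Serialize with Thrift instead of Java object serialization
        TSerializer serializer = new TSerializer(new TBinaryProtocol.Factory());
        byte[] payload = serializer.serialize(event);

        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("events");
            MessageProducer producer = session.createProducer(queue);

            // Ship the Thrift bytes as an opaque BytesMessage payload
            BytesMessage message = session.createBytesMessage();
            message.writeBytes(payload);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```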
This page might give an insight on speed/serialization size of different technologies:
http://code.google.com/p/thrift-protobuf-compare/wiki/Benchmarking
In general, each modern serialization framework has its pros and cons, but they provide roughly comparable performance.
I'm trying to create a Kafka Streams client in Go. From what I have seen, this is only possible with the Java client. I did a bit of searching and found a few third-party libraries, but nothing official. Also, from my limited understanding, I think Streams is syntactic sugar over the standard consumers. Is this correct?
To answer this particular question:
Also, from my limited understanding, I think Streams is syntactic sugar over the standard consumers. Is this correct?
When implementing asynchronous microservices, we could use the producer and consumer APIs directly, but these APIs are quite low level: they are good for understanding how Kafka works, but for more complex applications they quickly become tedious. When we develop event-driven applications, we often need multistage processing, where each stage reads events from Kafka, processes them, and writes to output topics; connecting those stages with the raw producer and consumer APIs is a lot of work. One of the more high-level solutions we can use is Kafka Streams, a very versatile library that supports both stateless and stateful stream processing.
Note: If you have the option of working with a language other than Go, I would highly recommend Java for Kafka Streams. For what it's worth, we have been working with Kafka Streams in Java for the last two years, and our impression is that Kafka Streams is more of a Java library than a distributed system.
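As a rough illustration of what the library gives you on top of plain producers and consumers, here is a minimal Kafka Streams topology in Java; the topic names, application id, and bootstrap server are assumptions for the example.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseStreamApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app");     // assumed app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // One "stage": read from an input topic, transform, write to an output topic.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(value -> value.toUpperCase())
             .to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Close the topology cleanly on shutdown
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```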
I have just read about CORBA and JMS; they both seem to be used to implement the Broker architecture/pattern.
I have a few questions regarding them:
1. The differences between them are still not clear to me; can anybody please explain?
2. Is CORBA used in today's IT solutions, or is it losing its charm?
3. Can JMS replace every aspect of CORBA?
Ramon Gil Moreno is right in stating that
JMS is the Java API that allows building applications to send and receive messages. IBM MQ or ActiveMQ are examples of JMS vendors that implement this API.
CORBA, on the other hand, is a specification that defines how objects can interact with each other over a network, across programming languages and run-time platforms.
The standard includes many APIs and infrastructure definitions (language bindings, marshalling, naming, etc.) that are needed to support this. CORBA is still being used; open-source as well as commercial (hard to find!) implementations exist, but I doubt any of them covers 10% of the standard. Ramon's statement that CORBA is closer to RMI is a bit too simple: the CORBA 2.4+ definitions include a CORBA Messaging specification that allows asynchronous and (reliable) queued communication.
CORBA, which is not hot nowadays, allows objects to be used remotely by different systems. It is more similar to RMI.
JMS is the Java API that allows building applications that send and receive messages. IBM MQ and ActiveMQ are examples of products that implement this API.
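To make the distinction concrete, here is a minimal JMS sketch; only the ConnectionFactory line is vendor-specific (ActiveMQ here, assumed to be running locally on the default port), while the rest is the standard javax.jms API that IBM MQ or any other provider also implements.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsHelloWorld {
    public static void main(String[] args) throws Exception {
        // The only vendor-specific line: swap in another provider's factory to change brokers
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("demo.queue");

        // Send a message
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("hello from JMS"));

        // Receive it back
        MessageConsumer consumer = session.createConsumer(queue);
        TextMessage received = (TextMessage) consumer.receive(1000);
        System.out.println("Received: " + (received == null ? "<none>" : received.getText()));

        connection.close();
    }
}
```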
We currently use the JMS API with the ActiveMQ broker and are looking to move to RabbitMQ.
Comparing OpenWire vs. AMQP, which one would give the best performance with a Java client/producer? I couldn't find a comparison study of AMQP (RabbitMQ) vs. OpenWire (ActiveMQ) as native protocols. I'm looking at raw performance and ease of scalability.
We currently use Spring Integration with ActiveMQ; I would like to know if it is a drastic change to move to RabbitMQ (AMQP), even with Spring Integration. Is there any bridge similar to what ActiveMQ uses to do JMS<->AMQP forwarding?
ActiveMQ also supports AMQP: http://activemq.apache.org/amqp.html
Both are binary protocols. OpenWire is going to be more full-featured when using ActiveMQ.
For "raw performance" you'll have to nail down your use cases first. Chances are the protocol you choose (ampq vs openwire) is not going to make any difference from a "perfomance" standpoint.
ActiveMQ now implements AMQP.
You can accept multiple wire protocols on a single connector and let ActiveMQ's auto-detection determine which one to use:
http://activemq.apache.org/auto.html
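For example, something along these lines should work with an embedded broker, assuming ActiveMQ 5.13+ and the relevant protocol modules (e.g. activemq-amqp) on the classpath; the port number is just an example.

```java
import org.apache.activemq.broker.BrokerService;

public class AutoDetectBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setPersistent(false); // in-memory broker for the example

        // "auto" lets the broker sniff the first bytes of each connection and pick
        // OpenWire, AMQP, STOMP or MQTT on the same port (5.13+, protocol jars required)
        broker.addConnector("auto://0.0.0.0:61617");

        broker.start();
        broker.waitUntilStopped();
    }
}
```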
Our current implementation has an abstraction layer separating out (quite) a few interface APIs like start, close, etc., essentially following the Template pattern.
Is there a better way to do so?
I'm not an expert on Spring, but could Spring be our answer?
Short answer: No.
Longer answer: The APIs and protocols are different. Spring or similar frameworks won't help you. A common abstraction layer would be a subset, feature-wise, of both AMQ (JMS) and RMQ (AMQP).
In theory, you could try to connect to RMQ with JMS (e.g. via the JMS client library of Apache Qpid). It won't support all features of AMQP, and the last time I tried, I merely got a connection running, so don't go there. Alternatively, go with some commonly supported wire protocol, such as MQTT (very limited).
I think you are on the right track: write your own abstraction that supports the subset of features you need.
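As a rough sketch of that kind of abstraction (all names here are hypothetical), the contract stays deliberately small and each broker gets its own adapter:

```java
// Hypothetical minimal messaging abstraction: only the features we actually need
public interface MessageChannel extends AutoCloseable {
    void start() throws Exception;
    void send(String destination, byte[] payload) throws Exception;
    @Override
    void close() throws Exception;
}

// One adapter per broker, e.g. a JMS-backed implementation for ActiveMQ
class JmsMessageChannel implements MessageChannel {
    private final javax.jms.ConnectionFactory factory;
    private javax.jms.Connection connection;
    private javax.jms.Session session;

    JmsMessageChannel(javax.jms.ConnectionFactory factory) {
        this.factory = factory;
    }

    @Override
    public void start() throws Exception {
        connection = factory.createConnection();
        connection.start();
        session = connection.createSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);
    }

    @Override
    public void send(String destination, byte[] payload) throws Exception {
        javax.jms.BytesMessage message = session.createBytesMessage();
        message.writeBytes(payload);
        session.createProducer(session.createQueue(destination)).send(message);
    }

    @Override
    public void close() throws Exception {
        if (connection != null) {
            connection.close();
        }
    }
}

// An AmqpMessageChannel built on the RabbitMQ Java client would implement the same interface.
```

Application code only sees MessageChannel, so swapping ActiveMQ for RabbitMQ means writing one new adapter rather than touching every caller.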
I'm developing a messaging system, and I used JBoss Netty + Google Protocol Buffers for the POC. Protobuf was chosen for its fast serialization/deserialization, relatively low traffic cost, and availability in several languages.
Still, when it comes to production under heavy load, a self-coded server application can never be as good as well-established and tested frameworks.
The problem is, I can't find a framework that would allow me to use protobuf as a transport protocol. Apache ActiveMQ and ActiveBlaze are the closest things I could find, but the documentation is nearly absent.
I stumbled upon what appears to be an ActiveMQ protobuf implementation, but there is no mention of it in the official ActiveMQ documentation (it's not among the supported protocols).
So my question is whether ActiveMQ supports protobuf, and if it does, how can it be integrated?
No, ActiveMQ uses its own OpenWire protocol or the STOMP protocol. The protobuf bits are used for the underlying KahaDB message store, not the wire-level portion. You can store your protobuf data in a BytesMessage and transmit it that way, marshalling and unmarshalling the data yourself on either end.
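A minimal sketch of that approach, assuming a hypothetical protobuf-generated class OrderProtos.Order and an already-created JMS Session, MessageProducer, and MessageConsumer:

```java
import javax.jms.BytesMessage;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class ProtobufOverActiveMq {

    // OrderProtos.Order is a hypothetical class generated by protoc from your .proto file
    public static void send(Session session, MessageProducer producer,
                            OrderProtos.Order order) throws Exception {
        BytesMessage message = session.createBytesMessage();
        message.writeBytes(order.toByteArray()); // marshal with protobuf, not the wire protocol
        producer.send(message);
    }

    public static OrderProtos.Order receive(MessageConsumer consumer) throws Exception {
        BytesMessage message = (BytesMessage) consumer.receive();
        byte[] body = new byte[(int) message.getBodyLength()];
        message.readBytes(body);
        return OrderProtos.Order.parseFrom(body); // unmarshal on the receiving side
    }
}
```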