JMS / ActiveMQ: Sending an object with objects as class members - jms

I'm using ActiveMQ (with Spring) to send messages to a remote OSGi container.
This works fine, but there is one issue.
I have two classes implementing Serializable. One class is a class member of the other, like this:
public class Member implements Serializable {
    private int someValue;
    private static final long serialVersionUID = -4329617004242031635L;
    // ...
}

public class Parent implements Serializable {
    private static final long serialVersionUID = -667242031635L;
    private double otherValue;
    private Member member;
}
So, when I send a Parent instance, the Member of the Parent is null.
Hope you understand what my problem is :)
Edit: funny issue: I have a java.util.Date in my class which is serialized correctly, but that is the only thing; all Doubles etc. are null

If Objects are an option, you might go for something like this
Producer side:
SomeObject someObject = new SomeObject();
ObjectMessage objectMessage = session.createObjectMessage();
objectMessage.setObject(someObject);
producer.send(objectMessage);
Consumer side:
private class MessageConsumer implements MessageListener {
    @Override
    public void onMessage(Message message) {
        logger.debug("onMessage() " + message);
        if (message instanceof ObjectMessage) {
            ObjectMessage objectMessage = (ObjectMessage) message;
            SomeObject someObject = (SomeObject) objectMessage.getObject();
        }
    }
}
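For completeness, a rough sketch of how the listener above might be registered on a plain JMS session (the broker URL and queue name are placeholder assumptions, exception handling is omitted, and with Spring you would typically use a DefaultMessageListenerContainer instead):
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");
Connection connection = connectionFactory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
// register the MessageListener defined above on the destination
session.createConsumer(session.createQueue("someQueue")).setMessageListener(new MessageConsumer());
connection.start();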

Serialized objects in byte messages are a bit hard to deal with.
I would go with object messages, as Aksel Willgert suggested, or simply move to some more loosely coupled format, such as serialized XML. A quick solution would be to use XStream to go to/from XML in a more loosely coupled fashion; a quick guide here: XStream
Update, and some code here (you need to add the xstream-.jar to your project)
// for all of the following, instantiate XStream
XStream xstream = new XStream(new StaxDriver());
// Producer side (mp is the Parent instance to send):
TextMessage message = session.createTextMessage(xstream.toXML(mp));
producer.send(message);
// Consumer side:
TextMessage tmsg = (TextMessage) msg;
Parent par = (Parent) xstream.fromXML(tmsg.getText());
par.getMember(); // etc. should work just fine
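One caveat worth noting: recent XStream versions ship with a restrictive security framework enabled by default, so you may have to allow your own types before fromXML() will deserialize them; roughly along these lines (the package wildcard is just an example):
xstream.allowTypesByWildcard(new String[] { "com.example.model.**" });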

Related

how to consume events from kafka by a Spring Rest endpoint

I'm new to Kafka. I've seen that the consumer is "always running" and retrieves messages from a topic as soon as they are published.
In a typical database web application you have a REST API that connects to the DB and returns some response.
From what I see, the consumer stays active and never closes.
So I can't figure out how to return a subset of messages from a topic based on a client request.
I thought the service would create a consumer to get what I need, but since the consumer never closes, I guess my assumption is not correct.
What should I do?
Then it's a simple matter of persisting the messages received through the KafkaListener, say by adding each of them to a simple collection (along with its timestamp), and implementing an endpoint that filters the messages accordingly and returns some of them.
@Controller
public class KafkaController {

    @Autowired
    private KafkaProducerConfig kafkaProducerConfig;

    // ConcurrentHashMap, since the listener thread and web request threads access it concurrently
    private final Map<Date, String> msgMap = new ConcurrentHashMap<>();

    @KafkaListener(topics = "myTopic", groupId = "myGroup")
    public void listenAndAddMsg(String message) {
        msgMap.put(new Date(), message);
    }

    @PostMapping("messages")
    @ResponseBody
    public Map<Date, String> filterMessages(@RequestBody Interval interval) {
        return msgMap.entrySet()
                .stream()
                .filter(entry -> entry.getKey().after(interval.getStartDate())
                        && entry.getKey().before(interval.getEndDate()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}
public class Interval {
    private Date startDate;
    private Date endDate;
    // setters and getters
}

Spring-Kafka: How to insert an application.yml topic in Producer Kafka

I have a spring-kafka microservice to which I recently added a dead letter topic to be able to send the various error messages
//some code..
@Component
public class KafkaProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendDeadLetter(String message) {
        kafkaTemplate.send("myDeadLetter", message);
    }
}
I would like to name the dead letter Kafka topic "messageTopic" + "_deadLetter", my main topic being "messageTopic". In my consumer, the topic name comes from application.yml as follows:
@KafkaListener(topics = "${spring.kafka.topic.name}")
How can I set the same Kafka topic, possibly appending "_deadLetter" to the value from application.yml? I tried something like this:
@Component
@KafkaListener(topics = "${spring.kafka.topic.name}" + "_deadLetter")
public class KafkaProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendDeadLetter(String message) {
        kafkaTemplate.send("messageTopic_deadLetter", message);
    }
}
but it creates two different topics for me. Any advice is appreciated, thanks for the help!
The Kafka listener accepts a constant for the topic name; we can't modify the topic name there.
Ideally, go with separate methods (Kafka listeners) for the actual topic and the dead letter topic, and define two different properties in the YAML to hold the two topic names.
@KafkaListener(topics = "${spring.kafka.topic.name}")
public void listen(......) {
}

@KafkaListener(topics = "${spring.kafka.deadletter.topic.name}")
public void listenDlt(......) {
}
To refer to the topic name inside send(...) from the yml or properties file:
@Component
@KafkaListener(topics = "${spring.kafka.deadletter.topic.name}")
public class KafkaProducer {

    @Value("${spring.kafka.deadletter.topic.name}")
    private String DLT_TOPIC_NAME;

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendDeadLetter(String message) {
        kafkaTemplate.send(DLT_TOPIC_NAME, message);
    }
}
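For reference, the two properties referenced in the placeholders above would then be defined in application.yml along these lines (the names simply mirror the placeholders; adjust them to your own configuration):
spring:
  kafka:
    topic:
      name: messageTopic
    deadletter:
      topic:
        name: messageTopic_deadLetter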
You can construct the topic name with SpEL:
#KafkaListener(topics = "#{'${spring.kafka.topic.name}' + '_deadLetter'"})
Note the single quotes around the property placeholder and literal.
This example may not be relevant to your use case, but I'm sharing it in case it's helpful to someone.
If you are building a Kafka Streams application, variable sink topic names can be achieved as follows:
When producing to the sink topic, pass a lambda that takes the record context as an argument and delegates to the method that handles the name definition.
... /* preceding stream operations */
// terminal operation 'to'
.to(
    (k, v, ctx) -> sinkTopicNameGenerator(ctx),
    Produced.with(keySerde, valueSerde) // your key/value serdes
);
Implement the method that generates the sink topic names:
protected static String sinkTopicNameGenerator(RecordContext ctx) {
return ctx.topic().concat("_deadLetter");
}
The above example is simple enough to be written inline as (k, v, ctx) -> ctx.topic().concat("_deadLetter"), but I wanted to keep the separate-method approach for cases where further transformations are required, e.g. when part of the topic name is replaced by some constant or regex defined in the config file.
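To put the terminal operation in context, a minimal sketch of a topology wired this way might look like the following (the source topic name and the String serdes are assumptions):
StreamsBuilder builder = new StreamsBuilder();
builder.stream("messageTopic", Consumed.with(Serdes.String(), Serdes.String()))
    // ... filtering/mapping of failed records would go here ...
    .to((k, v, ctx) -> sinkTopicNameGenerator(ctx),
        Produced.with(Serdes.String(), Serdes.String()));
Records consumed from "messageTopic" would then be routed to "messageTopic_deadLetter".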

How to get a piece of identifiable data when writing to an AWS SQS Queue?

I've got an app that needs to track the transaction id (or something similar) of a message sent to an SQS queue.
If I send with the following class, how could I get back some piece of identifiable data without querying the queue?
@Component
public class SqsQueueDao {
private static final Logger LOGGER = LoggerFactory.getLogger(SqsQueueDao.class);
private final QueueMessagingTemplate queueMessagingTemplate;
@Autowired
@Qualifier("awsClient")
AmazonSQSAsyncClient amazonSQSAsyncClient;
public SqsQueueDao(AmazonSQSAsync amazonSQSAsyncClient) {
this.queueMessagingTemplate = new QueueMessagingTemplate(amazonSQSAsyncClient);
}
//TODO: implement a strategy for identifying the message id
public Long send(String queueName, String message) {
queueMessagingTemplate.convertAndSend(queueName, MessageBuilder.withPayload(message).build());
//return some long identifying data
}
}
SQS assigns a message ID, but the queueMessagingTemplate.convertAndSend method doesn't return anything. If you send the message using the SQS client directly, you get back a SendMessageResult object that carries the message ID. However, the SQS message ID is a String, not a number, so you still wouldn't be able to fulfill your contract to return a Long.
If you can return a String message ID instead of a Long, the code would look like this:
public String send(String queueName, String message) {
// Could probably cache this URL instead of looking up each time
String queueUrl = amazonSQSAsyncClient.getQueueUrl(queueName).getQueueUrl();
SendMessageResult result = amazonSQSAsyncClient.sendMessage(queueUrl, message);
return result.getMessageId();
}
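If the queue URL lookup becomes noticeable overhead, one way to cache it per queue name (a sketch, assuming a long-lived bean) would be:
private final Map<String, String> queueUrlCache = new ConcurrentHashMap<>();

public String send(String queueName, String message) {
    String queueUrl = queueUrlCache.computeIfAbsent(queueName,
            name -> amazonSQSAsyncClient.getQueueUrl(name).getQueueUrl());
    SendMessageResult result = amazonSQSAsyncClient.sendMessage(queueUrl, message);
    return result.getMessageId();
}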

Subscribing to spring-websocket messages internally

I am using spring-websocket to push messages to browser clients.
My setup is almost identical to the one in the portfolio example and I send out messages by using MessageSendingOperations:
MessageSendingOperations<String> messagingTemplate = //...;
this.messagingTemplate.convertAndSend("/topic/data/1", message);
This works perfectly.
But I would also like to be able to subscribe to the same messages internally.
MessageReceivingOperations almost looks like the one to use, but it only seems to support pulling messages. I would much prefer having the messages pushed to my service.
SubscribableChannel.subscribe() also looks promising, but how do I get hold of the correct channel?
I would really like to be able to call something like
messagingTemplate.subscribe("/topic/data/*",
new MessageHandler<String>() {
public void handleMessage(String s){
// process message
}
});
The following works for me, but a more direct way to do it would be nice:
public interface MessageHandler<T> {
    void handleMessage(T message);
}

@Autowired
private AbstractSubscribableChannel brokerChannel;

// The converter must match the one used by the broker; a Jackson converter is only an assumption here.
private MessageConverter messageConverter = new MappingJackson2MessageConverter();

private PathMatcher pathMatcher = new AntPathMatcher();

private <T> void subscribe(final String topic, final MessageHandler<T> handler, final Class<T> messageClass) {
    // Spring's own MessageHandler, fully qualified to avoid clashing with the interface above
    brokerChannel.subscribe(new org.springframework.messaging.MessageHandler() {
        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            SimpMessageHeaderAccessor headers = SimpMessageHeaderAccessor.wrap(message);
            final String destination = headers.getDestination();
            if (pathMatcher.match(topic, destination)) {
                final T messageObject = (T) messageConverter.fromMessage(message, messageClass);
                handler.handleMessage(messageObject);
            }
        }
    });
}
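Usage then looks close to what the question asks for (DataObject is just a placeholder for whatever payload type the topic carries):
subscribe("/topic/data/*", new MessageHandler<DataObject>() {
    @Override
    public void handleMessage(DataObject message) {
        // process message
    }
}, DataObject.class);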

ApacheConnector does not process request headers that were set in a WriterInterceptor

I am experiencing problems when configuring my Jersey Client with the ApacheConnector. It seems to ignore all request headers that I define in a WriterInterceptor. I can tell that the WriterInterceptor is called when I set a breakpoint within WriterInterceptor#aroundWriteTo(WriterInterceptorContext). In contrast, I can observe that the modification of the InputStream is preserved.
Here is a runnable example demonstrating my problem:
public class ApacheConnectorProblemDemonstration extends JerseyTest {
private static final Logger LOGGER = Logger.getLogger(JerseyTest.class.getName());
private static final String QUESTION = "baz", ANSWER = "qux";
private static final String REQUEST_HEADER_NAME_CLIENT = "foo-cl", REQUEST_HEADER_VALUE_CLIENT = "bar-cl";
private static final String REQUEST_HEADER_NAME_INTERCEPTOR = "foo-ic", REQUEST_HEADER_VALUE_INTERCEPTOR = "bar-ic";
private static final int MAX_CONNECTIONS = 100;
private static final String PATH = "/";
@Path(PATH)
public static class TestResource {
@POST
public String handle(InputStream questionStream,
@HeaderParam(REQUEST_HEADER_NAME_CLIENT) String client,
@HeaderParam(REQUEST_HEADER_NAME_INTERCEPTOR) String interceptor)
throws IOException {
assertEquals(REQUEST_HEADER_VALUE_CLIENT, client);
// Here, the header that was set in the client's writer interceptor is lost.
assertEquals(REQUEST_HEADER_VALUE_INTERCEPTOR, interceptor);
// However, the input stream got gzipped so the WriterInterceptor has been partly applied.
assertEquals(QUESTION, new Scanner(new GZIPInputStream(questionStream)).nextLine());
return ANSWER;
}
}
@Provider
@Priority(Priorities.ENTITY_CODER)
public static class ClientInterceptor implements WriterInterceptor {
@Override
public void aroundWriteTo(WriterInterceptorContext context)
throws IOException, WebApplicationException {
context.getHeaders().add(REQUEST_HEADER_NAME_INTERCEPTOR, REQUEST_HEADER_VALUE_INTERCEPTOR);
context.setOutputStream(new GZIPOutputStream(context.getOutputStream()));
context.proceed();
}
}
@Override
protected Application configure() {
enable(TestProperties.LOG_TRAFFIC);
enable(TestProperties.DUMP_ENTITY);
return new ResourceConfig(TestResource.class);
}
@Override
protected Client getClient(TestContainer tc, ApplicationHandler applicationHandler) {
ClientConfig clientConfig = tc.getClientConfig() == null ? new ClientConfig() : tc.getClientConfig();
clientConfig.property(ApacheClientProperties.CONNECTION_MANAGER, makeConnectionManager(MAX_CONNECTIONS));
clientConfig.register(ClientInterceptor.class);
// If I do not use the Apache connector, I avoid this problem.
clientConfig.connector(new ApacheConnector(clientConfig));
if (isEnabled(TestProperties.LOG_TRAFFIC)) {
clientConfig.register(new LoggingFilter(LOGGER, isEnabled(TestProperties.DUMP_ENTITY)));
}
configureClient(clientConfig);
return ClientBuilder.newClient(clientConfig);
}
private static ClientConnectionManager makeConnectionManager(int maxConnections) {
PoolingClientConnectionManager connectionManager = new PoolingClientConnectionManager();
connectionManager.setMaxTotal(maxConnections);
connectionManager.setDefaultMaxPerRoute(maxConnections);
return connectionManager;
}
@Test
public void testInterceptors() throws Exception {
Response response = target(PATH)
.request()
.header(REQUEST_HEADER_NAME_CLIENT, REQUEST_HEADER_VALUE_CLIENT)
.post(Entity.text(QUESTION));
assertEquals(200, response.getStatus());
assertEquals(ANSWER, response.readEntity(String.class));
}
}
I want to use the ApacheConnector in order to optimize for concurrent requests via the PoolingClientConnectionManager. Did I mess up the configuration?
PS: The exact same problem occurs when using the GrizzlyConnector.
After further research, I assume that this is rather a misbehavior in the default Connector, which uses a HttpURLConnection. As I explained in this other self-answered question of mine, the documentation states:
Whereas filters are primarily intended to manipulate request and response parameters like HTTP headers, URIs and/or HTTP methods, interceptors are intended to manipulate entities, via manipulating entity input/output streams.
A WriterInterceptor is not supposed to manipulate header values, while a {Client,Server}RequestFilter is not supposed to manipulate the entity stream. If you need to do both, the two components should be bundled within a javax.ws.rs.core.Feature or within the same class that implements both interfaces. (This can be problematic if you need to set two different Priority values, though.)
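For illustration, a minimal sketch of the single-class variant might look like this (the class name is only illustrative; the filter carries the header, the interceptor keeps the stream manipulation):
@Provider
@Priority(Priorities.ENTITY_CODER)
public static class HeaderAndGzipSupport implements ClientRequestFilter, WriterInterceptor {

    @Override
    public void filter(ClientRequestContext requestContext) {
        // header manipulation belongs in the filter, which runs before the entity is written
        requestContext.getHeaders().add(REQUEST_HEADER_NAME_INTERCEPTOR, REQUEST_HEADER_VALUE_INTERCEPTOR);
    }

    @Override
    public void aroundWriteTo(WriterInterceptorContext context) throws IOException, WebApplicationException {
        // entity stream manipulation stays in the interceptor
        context.setOutputStream(new GZIPOutputStream(context.getOutputStream()));
        context.proceed();
    }
}
Registering this on the ClientConfig in place of the ClientInterceptor above should then make the header survive with the ApacheConnector as well.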
All this is very unfortunate, though, since JerseyTest uses the default Connector with its HttpURLConnection, so all my unit tests succeeded while the real-life application misbehaved because it was configured with an ApacheConnector. Also, rather than silently suppressing changes, I wish Jersey would throw some exceptions. (This is a general issue I have with Jersey. When I, for example, used a too-new version of the ClientConnectionManager, where the interface was renamed to HttpClientConnectionManager, I was simply informed in a one-line log statement that all my configuration efforts were ignored. I did not discover that log statement until very late in development.)
