How to validate/display a message if the queue is connected - JMS

I was able to connect to IBM MQ locally, listen, and do processing.
Once I deploy to OpenShift (the ports are open, though), I don't see messages being processed by going into the @JmsListener.
Is there a way we can check/display a message once connected to the queue?
What might be wrong in my case?
@Component
public class SampleMessageReceiver {

    @Autowired private RestTemplate restTemplate;
    @Autowired private UrlsConfig urlsConfig;

    @JmsListener(
        destination = "${ibm.mq.channel}",
        containerFactory = "myListenerContainerFactory",
        selector = "JMSCorrelationID='c9d5e2d7c5c3e3c9d6d54040404040404040404040404040'")
    public void processSampleMessage(@Valid SampleMessage sampleMessage) {
        System.out.println("~~~~~~~~~~~~~~~~~~~ In process SampleMessages ~~~~~~~~~~~~~~~~~~~\n\n");
    }
}
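One way to check whether the listener is actually connected and running after deployment is to query Spring's JmsListenerEndpointRegistry, which holds a container for every @JmsListener method. A minimal sketch (the /listener-status endpoint and the controller name are illustrative assumptions, not part of the original code):

import java.util.HashMap;
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.config.JmsListenerEndpointRegistry;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ListenerStatusController {

    // Registry of every container created for a @JmsListener method
    @Autowired
    private JmsListenerEndpointRegistry registry;

    // Hypothetical diagnostic endpoint: returns each container id and whether it is running
    @GetMapping("/listener-status")
    public Map<String, Boolean> status() {
        Map<String, Boolean> result = new HashMap<>();
        for (String id : registry.getListenerContainerIds()) {
            result.put(id, registry.getListenerContainer(id).isRunning());
        }
        return result;
    }
}

If the container reports running but processSampleMessage is never hit, the usual suspects are the destination (here a channel property, ${ibm.mq.channel}, is used as the destination name) and the JMSCorrelationID selector filtering out every message.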

Related

Kafka producer is not recreated with @RefreshScope

I have a config server and several clients that work with Kafka. I would like my clients to keep working without interruption when I update config properties on the config server.
Here is my configuration setup:
@Component
@RefreshScope
@ConfigurationProperties(prefix = "params")
public class ConfigParams {

    private KafkaParams kafkaParams = new KafkaParams(); // some custom params (doesn't matter)
    // ...
}

@Configuration
public class KafkaConfig {

    @Autowired
    private ConfigParams configParams;
    // ...

    @Bean
    @RefreshScope
    MyKafkaBean myKafkaBean() {
        return new MyKafkaBean(configParams.getKafkaParams()); // here I create my own Kafka bean with topics, producers and consumers
    }
    // ...
}
Problem description:
If I change the bootstrap.servers property to an incorrect value while the Kafka producer is in use, a WARN is logged saying the producer cannot connect to the Kafka broker (which is correct). When I fix the property, the connection comes back and everything works fine, but I still get WARNs about no connection to the broker for "producer-1" ("producer-2" was created for the new connection).
P.S. Consumers work fine without any issues.
P.S.P.S. I've already tried to delete the producer manually using DefaultKafkaProducerFactory#reset or DefaultKafkaProducerFactory#destroy, but it didn't help.
P.S.P.S.P.S. I assume the problem is that Kafka keeps a cache of producers, but I don't know how to get around it.
I'll be grateful for your help.
I had a similar issue. I used the RefreshRemoteApplicationEvent listener to manually close the producer before the refresh mechanism is processed.
For example:
@Slf4j
public class MyRefreshEventListener implements ApplicationListener<RefreshRemoteApplicationEvent> {

    private final MyKafkaBean myKafkaBean;

    public MyRefreshEventListener(@Autowired MyKafkaBean myKafkaBean) {
        this.myKafkaBean = myKafkaBean;
    }

    @Override
    public void onApplicationEvent(RefreshRemoteApplicationEvent event) {
        log.info("Refresh myKafkaBean:");
        if (myKafkaBean != null) {
            // assumes MyKafkaBean implements Producer<String, String>
            closeProducer(myKafkaBean);
        }
    }

    private void closeProducer(Producer<String, String> producer) {
        if (producer != null) {
            producer.flush();
            producer.close();
        }
    }
}
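For the event to reach this listener, it has to be registered with the application context. A minimal sketch of the wiring (the configuration class name is an assumption):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RefreshListenerConfig {

    // Registers the listener so it receives RefreshRemoteApplicationEvent
    @Bean
    public MyRefreshEventListener myRefreshEventListener(MyKafkaBean myKafkaBean) {
        return new MyRefreshEventListener(myKafkaBean);
    }
}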

How to consume events from Kafka via a Spring REST endpoint

I'm new to Kafka. I've seen that the consumer is "always running" and retrieves messages from a topic as soon as they are published.
In a typical database web application you have a REST API that connects to the DB and returns some response.
From what I see, the consumer stays active and never closes.
So I can't figure out how to return a subset of messages from a topic based on a client request.
I thought the service would create a consumer to get what I need, but since the consumer never closes, I guess my assumption is not correct.
What should I do?
Then it's a simple matter of persisting the messages received through the KafkaListener, say by adding each of them to a simple collection (along with its timestamp), and implementing an endpoint to filter the messages accordingly and return some of them.
@Controller
public class KafkaController {

    @Autowired
    private KafkaProducerConfig kafkaProducerConfig;

    // ConcurrentHashMap because the listener and the REST endpoint run on different threads
    private Map<Date, String> msgMap = new ConcurrentHashMap<>();

    @KafkaListener(topics = "myTopic", groupId = "myGroup")
    public void listenAndAddMsg(String message) {
        msgMap.put(new Date(), message);
    }

    @PostMapping("messages")
    @ResponseBody
    public Map<Date, String> filterMessages(@RequestBody Interval interval) {
        return msgMap.entrySet()
                .stream()
                .filter(entry -> entry.getKey().after(interval.getStartDate())
                        && entry.getKey().before(interval.getEndDate()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}

public class Interval {
    private Date startDate;
    private Date endDate;
    // setters and getters
}
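For illustration, a hypothetical client call against this endpoint (the host, port, and the one-hour window are assumptions) could look like this:

RestTemplate client = new RestTemplate();

// Ask for everything received in the last hour (hypothetical window)
Interval interval = new Interval();
interval.setStartDate(new Date(System.currentTimeMillis() - 3600 * 1000L));
interval.setEndDate(new Date());

// The endpoint returns the matching timestamp -> message entries as JSON
Map<?, ?> messages = client.postForObject("http://localhost:8080/messages", interval, Map.class);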

Spring Boot RabbitMQ: no exchange '"xxxxxxx"' in vhost '/'

I'm writing a simple RabbitMQ producer with Spring Boot 2.2.7.
On the broker side I've set up a direct exchange samples, a queue named samples.default, and bound them together with a samples.default binding key.
When running the application I get the following error:
Attempting to connect to: [127.0.0.1:5672]
2020-05-14 15:13:39.232 INFO 28393 --- [nio-8080-exec-1] o.s.a.r.c.CachingConnectionFactory : Created new connection: rabbitConnectionFactory#2f860823:0/SimpleConnection#3946e760 [delegate=amqp://open-si#127.0.0.1:5672/, localPort= 34710]
2020-05-14 15:13:39.267 ERROR 28393 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange '"samples"' in vhost '/', class-id=60, method-id=40)
The RabbitMQ server configuration is correct, as I have a Python producer that already puts messages successfully in the "samples.default" queue.
In Spring Boot I'm using Jackson serialization, but I don't think that's the problem here, as I've tested the code without the Jackson serialization configuration and the problem is still the same.
My broker configuration is set both in application.properties:
#spring.rabbitmq.host=localhost
spring.rabbitmq.addresses=127.0.0.1
spring.rabbitmq.port=5672
spring.rabbitmq.username=xxxx
spring.rabbitmq.password=xxxx
broker.exchange = "samples"
broker.routingKey = "samples.default"
note that using spring.rabbitmq.host doesn't work, as it results in using my internet provider's address!
and in a BrokerConf configuration class:
@Configuration
public class BrokerConf {

    @Bean("publisher")
    MessagePublisher<BaseSample> baseSamplePublisher(RabbitTemplate rabbitTemplate) {
        return new MessagePublisher<BaseSample>(rabbitTemplate);
    }

    @Bean
    public RabbitTemplate rabbitTemplate(final ConnectionFactory connectionFactory) {
        final var rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
        return rabbitTemplate;
    }

    @Bean
    public MessageConverter producerJackson2MessageConverter() {
        return new Jackson2JsonMessageConverter();
    }
}
The publisher base class is:
@Component
public class MessagePublisher<T> {

    private static final Logger log = LoggerFactory.getLogger(MessagePublisher.class);

    private final RabbitTemplate rabbitTemplate;

    public MessagePublisher(RabbitTemplate r) {
        rabbitTemplate = r;
    }

    public void publish(List<BaseSample> messages, String exchange, String routingKey) {
        for (BaseSample message : messages) {
            rabbitTemplate.convertAndSend(exchange, routingKey, message);
        }
    }
}
which I use in a REST controller:
private static final Logger logger = LoggerFactory.getLogger(SamplesController.class);

@Autowired
private MessagePublisher<BaseSample> publisher;

@Value("${broker.exchange}")
private String exchange;

@Value("${broker.routingKey}")
private String routingKey;

@PutMapping(value = "/new", produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<SampleAck> add(@RequestBody List<BaseSample> samples) {
    publisher.publish(samples, exchange, routingKey);
    return ResponseEntity.ok(new SampleAck(samples.size(), new Date()));
}
So the broker connection is OK, but the exchange is not found, even though the RabbitMQ resources exist:
xxxxxx#xxxxxxx:~/factory/udc-collector$ sudo rabbitmqctl list_exchanges
Listing exchanges for vhost / ...
name type
amq.topic topic
amq.rabbitmq.trace topic
amq.match headers
amq.direct direct
amq.fanout fanout
direct
amq.rabbitmq.log topic
amq.headers headers
samples direct
xxxx#xxxxx:~/factory/udc-collector$ sudo rabbitmqctl list_queues
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
name messages
samples.default 2
Any idea?
Thanks in advance.
The error seems quite obvious:
no exchange '"samples"' in vhost
broker.exchange = "samples"
broker.routingKey = "samples.default"
Remove the quotes:
broker.exchange=samples
broker.routingKey=samples.default
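As a side note, you can also have Spring AMQP declare the exchange, queue, and binding on the broker at startup, so a mismatch like this fails fast instead of at publish time. A sketch, assuming the names above (Spring Boot's auto-configured AmqpAdmin performs the declarations when a connection is opened):

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BrokerDeclarationConfig {

    // Declared beans are created on the broker if they don't already exist
    @Bean
    DirectExchange samplesExchange() {
        return new DirectExchange("samples");
    }

    @Bean
    Queue samplesDefaultQueue() {
        return new Queue("samples.default");
    }

    @Bean
    Binding samplesBinding(Queue samplesDefaultQueue, DirectExchange samplesExchange) {
        return BindingBuilder.bind(samplesDefaultQueue).to(samplesExchange).with("samples.default");
    }
}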

How do I throttle the amount of data sent to a STOMP queue (handling WebSockets) so that I can guarantee I don't overflow the buffer?

I have two Java processes and I am connecting them using a WebSocket in Spring Boot. One process acts as the client and connects like this:
List<Transport> transports = new ArrayList<Transport>(1);
transports.add(new WebSocketTransport(new StandardWebSocketClient()));
WebSocketClient client = new SockJsClient(transports);
WebSocketStompClient stompClient = new WebSocketStompClient(client);
stompClient.setMessageConverter(new MappingJackson2MessageConverter());
StompSessionHandler firstSessionHandler = new MyStompSessionHandler("Philip");
stompClient.connect("ws://localhost:8080/chat", firstSessionHandler);
The session handler extends StompSessionHandlerAdapter and provides these methods (I am subscribing by username so each client can receive its own messages):
@Override
public void afterConnected(StompSession session, StompHeaders connectedHeaders) {
    session.subscribe("/user/" + userName + "/reply", this);
    session.send("/app/chat", getSampleMessage());
}

@Override
public void handleFrame(StompHeaders headers, Object payload) {
    Message msg = (Message) payload;
    // etc.....
}
On the server side I have a Controller exposed and I am writing data by calling the endpoint from a worker thread.
@Autowired
private SimpMessagingTemplate template;

@MessageMapping("/chat")
public void send(Message message) throws Exception {
    template.convertAndSendToUser(message.getFrom(), "/reply", message);
}
In the websocket config I am overriding the method to set the limits:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableSimpleBroker("/topic", "/user");
        config.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void configureWebSocketTransport(WebSocketTransportRegistration registration) {
        registration.setMessageSizeLimit(500 * 1024);
        registration.setSendBufferSizeLimit(1024 * 1024);
        registration.setSendTimeLimit(20000);
    }
}
My question is this: if the load on the server gets high enough that I overrun the limit, the websocket fails catastrophically, and I want to avoid this. What I would like is for the controller to be able to ask the message broker "will this message fit in the buffer?", so that I can throttle to stay under the limit. I searched the API documentation but I don't see any way of doing that. Are there any other obvious solutions that I am missing?
Thanks.
Actually I found a solution, so if anyone is interested, here it is.
On the server side of the WebSocket configuration I installed an interceptor on the outbound channel (this is part of the API), which is called after each send from the embedded broker.
So I know how much is coming in, which I keep track of in my Controller class, and I know how much is going out through the interceptor I installed, and this allows me to always stay under the limit.
The controller, before accepting any new messages to be queued up for the broker, first determines whether enough room is available, and if not, queues up the message in external storage until room becomes available.
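A minimal sketch of that idea (the shared byte counter and the assumption that outbound STOMP payloads arrive as byte[] are illustrative, not the poster's actual code): an interceptor on the client outbound channel that runs after each send, whose counter the controller can consult before queueing more data.

import java.util.concurrent.atomic.AtomicLong;

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.simp.config.ChannelRegistration;
import org.springframework.messaging.support.ChannelInterceptor;
import org.springframework.web.socket.config.annotation.AbstractWebSocketMessageBrokerConfigurer;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;

@Configuration
@EnableWebSocketMessageBroker
public class ThrottlingWebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    // Hypothetical shared counter: the controller adds each payload's size before
    // sending and checks the total against the configured buffer limit
    public static final AtomicLong BYTES_IN_FLIGHT = new AtomicLong();

    @Override
    public void configureClientOutboundChannel(ChannelRegistration registration) {
        registration.interceptors(new ChannelInterceptor() {
            @Override
            public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
                // Runs after each outbound frame; credit the bytes back once they have left
                if (sent && message.getPayload() instanceof byte[]) {
                    BYTES_IN_FLIGHT.addAndGet(-((byte[]) message.getPayload()).length);
                }
            }
        });
    }
}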

JMS / ActiveMQ: Sending an object with objects as class members

I'm using ActiveMQ (with Spring) for sending messages to a remote OSGi container.
This works very well, but there is one issue.
I have two classes implementing Serializable. One class is a class member of the other, like this:
public class Member implements Serializable {
    private static final long serialVersionUID = -4329617004242031635L;
    private int someValue;
    // ...
}

public class Parent implements Serializable {
    private static final long serialVersionUID = -667242031635L;
    private double otherValue;
    private Member member;
}
So, when I send a Parent instance, the Member of the Parent is null.
Hope you understand what my problem is :)
Edit: funny issue: I have a java.util.Date in my class which is serialized correctly, but that's the only thing; all Doubles etc. are null.
If Objects are an option, you might go for something like this
Producer side:
SomeObject someObject = new SomeObject();
ObjectMessage objectMessage = session.createObjectMessage();
objectMessage.setObject(someObject);
producer.send(objectMessage);
Consumer side:
private class MessageConsumer implements MessageListener {

    @Override
    public void onMessage(Message message) {
        logger.debug("onMessage() " + message);
        if (message instanceof ObjectMessage) {
            ObjectMessage objectMessage = (ObjectMessage) message;
            SomeObject someObject = (SomeObject) objectMessage.getObject();
        }
    }
}
Serialized objects in byte messages are a bit hard to deal with.
I would go with object messages, as Aksel Willgert suggested, or simply move to a more loosely coupled format, such as serialized XML. A quick solution would be to use XStream to go to/from XML in a loosely coupled fashion; a quick guide here: XStream
Update, and some code here (you need to add the xstream-*.jar to your project):
// for all, instantiate XStream
XStream xstream = new XStream(new StaxDriver());

// Producer side:
TextMessage message = session.createTextMessage(xstream.toXML(mp));
producer.send(message);

// Consumer side:
TextMessage tmsg = (TextMessage) msg;
Parent par = (Parent) xstream.fromXML(tmsg.getText());
par.getMember(); // etc. should work just fine
