Programmatically bridge a QueueChannel to a MessageChannel in Spring

I'm attempting to wire a queue to the front of a MessageChannel, and I need to do so programmatically so it can be done at run time in response to an osgi:listener being triggered. So far I've got:
public void addService(MessageChannel mc, Map<String,Object> properties)
{
    //Create the queue and the QueueChannel
    BlockingQueue<Message<?>> q = new LinkedBlockingQueue<Message<?>>();
    QueueChannel qc = new QueueChannel(q);
    //Create the Bridge and set the output to the input parameter channel
    BridgeHandler b = new BridgeHandler();
    b.setOutputChannel(mc);
    //Presumably, I need something here to poll the QueueChannel
    //and drop it onto the bridge. This is where I get lost
}
Looking through the various relevant classes, I came up with:
PollerMetadata pm = new PollerMetadata();
pm.setTrigger(new IntervalTrigger(10));
PollingConsumer pc = new PollingConsumer(qc, b);
but I'm not able to put it all together. What am I missing?

So, the solution that ended up working for me was:
public void addEngineService(MessageChannel mc, Map<String,Object> properties)
{
    //Create the queue and the QueueChannel
    BlockingQueue<Message<?>> q = new LinkedBlockingQueue<Message<?>>();
    QueueChannel qc = new QueueChannel(q);
    //Create the Bridge and set the output to the input parameter channel
    BridgeHandler b = new BridgeHandler();
    b.setOutputChannel(mc);
    //Set up a Polling Consumer to poll the queue channel and
    //retrieve 1 thing at a time
    PollingConsumer pc = new PollingConsumer(qc, b);
    pc.setMaxMessagesPerPoll(1);
    //Now use an interval trigger to poll every 10 ms and attach it
    IntervalTrigger trig = new IntervalTrigger(10, TimeUnit.MILLISECONDS);
    trig.setInitialDelay(0);
    trig.setFixedRate(true);
    pc.setTrigger(trig);
    //Now set a task scheduler and start it
    pc.setTaskScheduler(taskSched);
    pc.setAutoStartup(true);
    pc.start();
}
I'm not terribly clear whether all of the above is explicitly needed, but neither the trigger nor the task scheduler alone worked; I appeared to need both. I should also note that the taskSched used was the default taskScheduler dependency-injected from Spring via
<property name="taskSched" ref="taskScheduler"/>
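If you would rather create the scheduler in code instead of injecting it, a minimal sketch using Spring's ThreadPoolTaskScheduler (the pool size here is an arbitrary assumption) would be:
ThreadPoolTaskScheduler taskSched = new ThreadPoolTaskScheduler();
taskSched.setPoolSize(1);   // one thread is enough for a single poller
taskSched.initialize();     // must be initialized before handing it to the consumer
pc.setTaskScheduler(taskSched);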

Related

Spring Boot service to consume Kafka messages on demand

I have a requirement for a Spring Boot REST service that a client application will call every 30 minutes, and the service is to return:
the number of latest messages specified in a query param, e.g. http://messages.com/getNewMessages?number=10 should in this case return 10 messages
the number of messages specified by a count and an offset in query params, e.g. http://messages.com/getSpecificMessages?number=5&start=123 should in this case return 5 messages starting at offset 123
I have a simple standalone application and it works fine. Here is what I tested, and I would like some direction on incorporating it into the service.
public static void main(String[] args) {
    // create kafka consumer
    Properties properties = new Properties();
    properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    properties.put(ConsumerConfig.GROUP_ID_CONFIG, "my-first-consumer-group");
    properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    properties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, args[0]);
    Consumer<String, String> consumer = new KafkaConsumer<>(properties);
    // subscribe to topic and poll once so partitions get assigned
    consumer.subscribe(Collections.singleton("test"));
    consumer.poll(0);
    // seek to the specified offset, then fetch the specified number of messages
    for (TopicPartition partition : consumer.assignment()) {
        consumer.seek(partition, Long.parseLong(args[1]));
    }
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(5000));
    System.out.println("Total Record Count ******* : " + records.count());
    for (ConsumerRecord<String, String> record : records) {
        System.out.println("Message: " + record.value());
        System.out.println("Message offset: " + record.offset());
        System.out.println("Message timestamp: " + record.timestamp());
        Date date = new Date(record.timestamp());
        Format format = new SimpleDateFormat("yyyy MM dd HH:mm:ss.SSS");
        System.out.println("Message date: " + format.format(date));
    }
    consumer.commitSync();
}
As my consumer will be on-demand, I am wondering how I can achieve this in a Spring Boot service. Where do I specify the properties? If I put them in application.properties, they get injected at startup time, but how do I control MAX_POLL_RECORDS_CONFIG at runtime? Any help appreciated.
MAX_POLL_RECORDS_CONFIG only affects how many records the Kafka client returns to your Spring service; it never reduces the bytes the consumer fetches from the Kafka broker.
For example, no matter whether your start offset is 150 or 190, the broker will return the whole fetched range (say offsets 110 to 190); the broker doesn't even know how many records it is returning to the consumer, only the byte size of the fetch.
So I think you can control the record count yourself; currently it is controlled by the Kafka client jar, and either way the records occupy your JVM's local memory.
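If you go that route, one hedged approach (not from the original answer; the broker address, group id, and topic name are carried over from the question) is to build the consumer Properties per REST request, so max.poll.records comes straight from the query parameter:
public List<String> fetchMessages(int number, long startOffset) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-first-consumer-group");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    // The query parameter becomes max.poll.records for this consumer only.
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, number);
    try (Consumer<String, String> consumer = new KafkaConsumer<>(props)) {
        consumer.subscribe(Collections.singleton("test"));
        consumer.poll(0); // poll once to force partition assignment
        for (TopicPartition partition : consumer.assignment()) {
            consumer.seek(partition, startOffset);
        }
        List<String> messages = new ArrayList<>();
        for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(5000))) {
            messages.add(record.value());
        }
        return messages;
    }
}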
The answer to your question is here, and the answer with a code example is this answer.
Both were written by the excellent Gary Russell, the main person (or one of the main people) behind Spring Kafka.
TL;DR:
If you want to arbitrarily rewind the partitions at runtime, have your
listener implement ConsumerSeekAware and grab a reference to the
ConsumerSeekCallback.
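A minimal sketch of that pattern (an illustration, not Gary's exact code; it assumes a topic "test" with a single partition 0, and Spring Kafka 2.x, where ConsumerSeekAware has default methods):
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.listener.ConsumerSeekAware;

public class RewindableListener implements ConsumerSeekAware {

    // One callback per consumer thread; seeks must run on that thread.
    private final ThreadLocal<ConsumerSeekCallback> callback = new ThreadLocal<>();

    // Set from elsewhere (e.g. a REST endpoint) to request a rewind.
    private volatile long rewindTo = -1;

    @Override
    public void registerSeekCallback(ConsumerSeekCallback callback) {
        this.callback.set(callback);
    }

    @KafkaListener(topics = "test")
    public void listen(String message) {
        long offset = this.rewindTo;
        if (offset >= 0) {
            this.rewindTo = -1;
            // Perform the seek on the consumer thread via the saved callback.
            this.callback.get().seek("test", 0, offset);
            return;
        }
        System.out.println("Received: " + message);
    }

    public void requestRewind(long offset) {
        this.rewindTo = offset;
    }
}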

How to consume message from RabbitMQ dead letter queue one by one

The requirement is to process messages from a dead letter queue via an exposed REST service API (Spring Boot).
Once the REST service is called, one message should be consumed from the DL queue and republished to the main queue for processing.
@RabbitListener(queues = "QUEUE_NAME") consumes messages immediately, which is not what the scenario requires; a message should only be consumed when the REST service API is called.
Any suggestion or solution?
I do not think RabbitListener will help here.
However, you could implement this behaviour manually.
Spring Boot automatically creates a RabbitMQ connection factory, so you can use it. When the HTTP call is made, just read a single message from the queue manually; you can use basic.get to synchronously fetch just one message:
@Autowired
private ConnectionFactory factory;

void readSingleMessage() throws IOException, TimeoutException {
    Connection connection = null;
    Channel channel = null;
    try {
        connection = factory.newConnection();
        channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, true, false, false, null);
        // basic.get: fetch at most one message, auto-acked
        GetResponse response = channel.basicGet(QUEUE_NAME, true);
        if (response != null) {
            //Do something with the message
        }
    } finally {
        //Check for null before closing
        if (channel != null) {
            channel.close();
        }
        if (connection != null) {
            connection.close();
        }
    }
}
If you are using Spring, you can avoid all the boilerplate in the other answer using RabbitTemplate.receive(...).
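For example (a sketch, assuming an autowired RabbitTemplate and the queue name from the question):
@Autowired
private RabbitTemplate template;

public Message readOne() {
    // Returns one auto-acked message, or null immediately if the queue is empty.
    return template.receive("QUEUE_NAME");
}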
EDIT
To manually ack/reject the message, use the execute method instead.
template.execute(channel -> {
    GetResponse got = channel.basicGet("foo", false);
    // ...
    channel.basicAck(got.getEnvelope().getDeliveryTag(), false);
    return null;
});
It's a bit lower level, but again, most of the boilerplate is taken care of for you.

Connection time out in jpos client

I am using a jPOS client (in one of the classes of a Java Spring MVC program) to connect to an ISO 8583 based server; however, for some reason the server is not able to respond, which leaves my program waiting for the response and hangs it. What is the proper way to implement a connection timeout?
My client program looks like this:
public FieldsModal sendFundTransfer(FieldsModal field){
    try {
        JposLogger logger = new JposLogger(ISO_LOG_LOCATION);
        org.jpos.iso.ISOPackager customPackager = new GenericPackager(ISO_PACKAGER);
        ISOChannel channel = new PostChannel(ISO_SERVER_IP, Integer.parseInt(ISO_SERVER_PORT), customPackager);// live
        logger.jposlogconfig(channel);
        channel.connect();
        log4j.info("Connection established using PostChannel");
        ISOMsg m = new ISOMsg();
        m.set(0, field.getMti());
        //m.set(2, field.getField2());
        m.set(3, field.getField3());
        m.set(4, field.getField4());
        m.set(11, field.getField11());
        m.set(12, field.getField12());
        m.set(17, field.getField17());
        m.set(24, field.getField24());
        m.set(32, field.getField32());
        m.set(34, field.getField34());
        m.set(41, field.getField41());
        m.set(43, field.getField43());
        m.set(46, field.getField46());
        m.set(49, field.getField49());
        m.set(102, field.getField102());
        m.set(103, field.getField103());
        m.set(123, field.getField123());
        m.set(125, field.getField125());
        m.set(126, field.getField126());
        m.set(127, field.getField127());
        m.setPackager(customPackager);
        System.out.println(ISOUtil.hexdump(m.pack()));
        channel.send(m);
        log4j.info("Message has been sent");
        ISOMsg r = channel.receive();
        r.setPackager(customPackager);
        System.out.println(ISOUtil.hexdump(r.pack()));
        channel.disconnect();
    } catch (Exception err) {
        System.out.println("sendFundTransfer : " + err);
    }
    return field;
}
Well, the real proper way would be to use Q2. Given you don't need a persistent connection, you could just set a timeout for the channel.
PostChannel channel = new PostChannel(ISO_SERVER_IP, Integer.parseInt(ISO_SERVER_PORT), customPackager);// live
channel.setTimeout(timeout); //timeout in millis
This way the channel will auto-disconnect if nothing happens during the time specified by the timeout, and your call to receive will throw an exception.
The alternative is using Q2 and a mux (see QMUX, for which you need to run Q2, or ISOMUX which is kind of deprecated).
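If you do adopt Q2, a request/response through a MUX looks roughly like this (a sketch; the mux name "mymux" and the 30-second timeout are assumptions, and the QMUX must be deployed in your Q2 configuration):
import org.jpos.iso.ISOMsg;
import org.jpos.iso.MUX;
import org.jpos.util.NameRegistrar;

// Q2 registers a deployed QMUX under "mux." + its name.
MUX mux = (MUX) NameRegistrar.get("mux.mymux");
ISOMsg response = mux.request(m, 30000L); // waits up to 30 s for a matching response
if (response == null) {
    // timed out: handle the missing response instead of hanging
}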

Replay a particular type of event from eventstore

I am currently using Event Store to handle my events. I now need to replay a particular type of event, as I have changed the way those events are subscribed to and written to the DB.
Is this possible? If so, how can it be done? Thanks.
You cannot tell EventStore to replay a specific event onto a persistent subscription, because the point of the persistent subscription is to keep state for the subscribers.
To achieve this kind of fix, you would really need a catch-up application to do the work.
And really, if you think about it: if you replayed ALL the events to a new database, then you would have the correct data in there.
So I have a console application that reuses the same logic as the persistent connection but the only difference is:
I change the target database connection string - So this would be a new Database or Collection (not the broken one)
It connects to EventStore and replays all the events from the start
It rebuilds the entire database to the correct state
Switch the business over to the new database
This is the point of EventStore - You just replay all the events to build any database at any time and it will be correct
Your persistent connections deal with new, incoming events and apply updates.
If you enable the $by_event_type projection, then you can access that projection stream under
/streams/$et-{event-type}
https://eventstore.org/docs/projections/system-projections/index.html
Then you can read it using the .NET API if you wish.
Here is some code to get you started
private static T GetInstanceOfEvent<T>(ResolvedEvent resolvedEvent) where T : BaseEvent
{
    var metadataString = Encoding.UTF8.GetString(resolvedEvent.Event.Metadata);
    var eventClrTypeName = JObject.Parse(metadataString).Property(EventClrTypeHeader).Value;
    var @event = JsonConvert.DeserializeObject(Encoding.UTF8.GetString(resolvedEvent.Event.Data), Type.GetType((string) eventClrTypeName));
    if (!(@event is BaseEvent))
    {
        throw new MessageDeserializationException((string) eventClrTypeName, metadataString);
    }
    return @event as T;
}

private static IEventStoreConnection GetEventStoreConnection()
{
    var connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["EventStore"].ConnectionString;
    var connection = EventStoreConnection.Create(connectionString);
    connection.ConnectAsync().Wait();
    return connection;
}

private static string GetStreamName<T>() where T : BaseEvent
{
    return "$et-" + typeof(T).Name;
}
And to read events, you can use this code snippet:
StreamEventsSlice currentSlice;
long nextSliceStart = StreamPosition.Start;
const int sliceCount = 200;
// streamName can come from GetStreamName<T>() above, e.g. "$et-OrderMerchantFeesCalculatedEvent"
do
{
    currentSlice = await esConnection.ReadStreamEventsForwardAsync(streamName, nextSliceStart, sliceCount, true);
    foreach (var @event in currentSlice.Events)
    {
        var myEvent = GetInstanceOfEvent<OrderMerchantFeesCalculatedEvent>(@event);
        TransformEvent(myEvent);
    }
    nextSliceStart = currentSlice.NextEventNumber;
} while (currentSlice.IsEndOfStream == false);

How comes my channel.basicConsume does not wait for messages

Whenever I start the following code:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
String exchangeName = "direct_logs";
channel.exchangeDeclare(exchangeName, "direct");
String queueName = channel.queueDeclare().getQueue();
channel.queueBind(queueName, exchangeName, "red");
channel.basicQos(1);
final Consumer consumer = new DefaultConsumer(channel){
    @Override
    public void handleDelivery(String consumerTag,
                               Envelope envelope,
                               AMQP.BasicProperties properties,
                               byte[] body) throws IOException {
        String message = new String(body, "UTF-8");
        System.out.println(message);
        System.out.println("message received");
    }
};
channel.basicConsume(queueName, true, consumer);
It does not start an endless loop, as is implied in the documentation. Instead, it stops right away.
The only way I can have it consume for some time is to replace channel.basicConsume with a loop, as follows:
DateTime startedAt = new DateTime();
DateTime stopAt = startedAt.plusSeconds(60);
long i = 0;
try {
    while (stopAt.compareTo(new DateTime()) > 0) {
        channel.basicConsume(queueName, true, consumer);
        i++;
    }
} finally {
    System.out.println(new DateTime());
    System.out.println(startedAt);
    System.out.println(stopAt);
    System.out.println(i);
}
There must be a better way to listen to messages for a while, correct? What am I missing?
It stops listening right away.
Are you sure it's stopping? What basicConsume does is register a consumer to listen to a specific queue, so there is no need to execute it in a loop. You execute it only once, and the handleDelivery method of the Consumer instance you pass will be called whenever a message arrives.
The threads that the RabbitMQ library creates should keep the JVM from exiting. In order to exit the program, you should actually call connection.close().
Here is a complete receiver example from rabbitmq: https://github.com/rabbitmq/rabbitmq-tutorials/blob/master/java/Recv.java
It's actually pretty much the same as yours.
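If you just want to consume for a fixed time and then exit cleanly, blocking the main thread is enough (a sketch; the 60 seconds mirrors the loop in the question):
channel.basicConsume(queueName, true, consumer);
// basicConsume returns immediately; the client library's own threads
// invoke handleDelivery as messages arrive. Keep the main thread alive
// for as long as you want to consume, then close the connection to exit.
Thread.sleep(60_000);
connection.close();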
I had the same issue. The reason was that I was calling connection.close() at the end. However, the basicConsume() method does not block the current thread (message handling happens on other threads), so the code after it, i.e. connection.close(), is called immediately.
