I understood that JMS is used to process synch messages, so what is the difference between using JMS and just writing something like this?
public synchronized void doSomething(Message message) {
//do something sync
}
Thank you.
I am actually not sure what you mean by "synch messages". The key concept behind JMS is asynchronous messaging. So a sender/publisher simply calls send(Message) which is a non-blocking call. It thus does not need to wait for the receiver/consumer to finish processing.
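To make this concrete, here is a rough sketch of both sides using the plain JMS API (the connection factory and the queue name "my.queue" are placeholders, not part of the original question):

// Producer: send() hands the message to the JMS provider and returns immediately.
Connection connection = connectionFactory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue queue = session.createQueue("my.queue");

MessageProducer producer = session.createProducer(queue);
producer.send(session.createTextMessage("hello")); // non-blocking hand-off to the broker

// Consumer (in practice usually a separate application): the listener is invoked
// asynchronously, on a provider-managed thread, whenever a message arrives.
MessageConsumer consumer = session.createConsumer(queue);
consumer.setMessageListener(message -> {
    // process the message; the sender finished long ago and never waited for this
});
connection.start();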
Case
Clients are ReplyingKafkaTemplate instances.
Server is a ConcurrentMessageListenerContainer created using @KafkaListener and @SendTo annotations on a method.
ContainerFactory uses ContainerStoppingErrorHandler.
Request topic has only 1 partition.
Group ids are static, e.g. test-consumer-group.
Requests are sent with timeouts.
Due to an exception being thrown, the server goes down, but the client keeps dispatching requests, which queue up on the request topic.
Current Behavior
When the server comes back up, it continues processing old requests that would have timed out by now.
Desired Behavior
Instead, it would be better to continue from the latest message, skipping even unprocessed messages, since the corresponding requests would have timed out and been retried.
Questions
What is the recommended approach to achieve this?
From the little that I understand, it looks like I'll have to manually set the initial offset. What's the simplest way to implement this?
Your @KafkaListener class must extend AbstractConsumerSeekAware and do something like this:
@Override
public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
super.onPartitionsAssigned(assignments, callback);
callback.seekToEnd(assignments.keySet());
}
So, every time your consumer joins the group, it will seek all the assigned partitions to the end, skipping all the old records.
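Put together, a hypothetical listener for the setup described above might look roughly like this (the topic name "requests" and the String payload/reply are illustrative, not taken from the question):

@Component
public class SkippingListener extends AbstractConsumerSeekAware {

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments,
                                     ConsumerSeekCallback callback) {
        super.onPartitionsAssigned(assignments, callback);
        // jump to the latest offset of every newly assigned partition
        callback.seekToEnd(assignments.keySet());
    }

    @KafkaListener(topics = "requests", groupId = "test-consumer-group")
    @SendTo
    public String handle(String request) {
        // placeholder processing; the reply goes back via the ReplyingKafkaTemplate's reply topic
        return "reply to " + request;
    }
}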
I'm using Ktor for server-side development with websockets.
The documentation shows this example of using the incoming channel:
for (frame in incoming.mapNotNull { it as? Frame.Text }) {
// some
}
But mapNotNull is marked as deprecated in favor of Flow. How should I use this API, and what problems could there be? For example, a Flow is a cold stream, which means the producer function will be called on each collect. How does that work in the context of a websocket? Will it be reopened on a second collect call, or will old messages be delivered again after the next collect? How can I collect N messages, then stop collecting, then collect again?
Thanks in advance :)
How should I use this API, and what problems could there be?
What I am using, and what I have seen in one of the examples somewhere in the docs, is the consumeAsFlow() method called on the ReceiveChannel. Here is the entire snippet:
webSocket("/websocket") { //this: DefaultWebSocketServerSession
incoming
.consumeAsFlow()
.map { receive(it) }
.collect()
}
Haven't seen major issues with this approach. One thing you should be aware of (but that goes for the non-flow approach as well) is that if you throw inside your flow, then it will break the WebSocket connection, which is usually not something you'd like to do. It might be worth considering wrapping the entire thing in a try-catch.
Will it be reopened on a second collect call, or will old messages be delivered again after the next collect?
You open the websocket before you even start consuming the messages from the flow. You can see that inside webSocket() {} you are in the context of DefaultWebSocketServerSession. This is your connection management. Inside your flow you are simply receiving messages one by one as they arrive (after the connection has been established). If the connection breaks, then you're out of the flow. It needs to be re-established before you can process your messages. This establishing bit is done by the Route.webSocket() method. I do recommend taking a look at its Javadoc.
If you wish to add some clean up after the connection is closed you can add a finally block like so:
webSocket("/chat") {
try {
incoming
.consumeAsFlow()
.map { receive(it, client) }
.collect()
} finally {
// cleanup
}
}
In short: collect is called once per received message. If there is no connection (or it was broken) then collect won't be called.
How can I collect N messages, then stop collecting, then collect again?
What is the use case for this? I don't think you should be doing this with any flow. You can of course take(n) items from a flow, but you won't be able to take any more from it again.
I have an injected JdbcTemplate instance, and the code basically executes:
private JdbcTemplate template;
public OutputType getOutput(InputType input) {
CallType call = new CallType(input);
CallbackType callback = new CallbackType(input);
OutputType output = (OutputType) template.execute(call, callback);
...
}
I assume the execute method actually connects to the database and retrieves the result. However, I am having trouble finding out how the control flow works from the documentation.
Is the response from execute blocking (thread occupies a CPU core the entire time waiting for the database response)? Is it synchronous, but not blocking (i.e. thread sleeps/is not scheduled until the response is ready)? Is it asynchronous (execute returns immediately but output is incomplete/null, all database processing logic is in the callback)?
I have used several different databases so I am unsure of what actually happens in JdbcTemplate. If my terminology is incorrect please let me know. Thanks!
The JDBC protocol itself is synchronous and blocking - it will block on socket I/O awaiting the database response. That doesn't mean you can't call a JDBC provider asynchronously (manually spawn a separate thread, use actors, etc.), but the actual connection to the database will always be synchronous.
JdbcTemplate is also fully synchronous and blocking; there is no thread magic going on under the hood.
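If the call site must not block, the usual workaround is to run the blocking call on a separate thread yourself. A rough sketch building on the question's getOutput (the executor and the async method name are made up for illustration):

// The JDBC call still blocks, but it blocks a worker thread instead of the caller's thread.
private final ExecutorService jdbcExecutor = Executors.newFixedThreadPool(4);

public CompletableFuture<OutputType> getOutputAsync(InputType input) {
    return CompletableFuture.supplyAsync(() -> getOutput(input), jdbcExecutor);
}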
I have one actor that runs a forever loop, waiting for data to become available to operate on.
The doc says the Actor runs on a very lightweight thread, so I'm not sure whether I can use the Thread.sleep() method in that actor. My objective is to not have that actor consume too much processing power.
So can I use the Thread.sleep() method inside the actor?
Don't sleep() inside Actors! That would block the Thread, which is exactly what you're trying to avoid - using up resources.
Instead, if you just handle the message and "do nothing", the Actor will not use up any scheduling resources and will be just another plain object on the heap (occupying a bit of memory but nothing else).
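As a rough illustration (classic Akka Java API; the Data message type and process() are made-up placeholders), an actor written this way is only scheduled when a message actually arrives, so there is no loop and no sleep:

public class DataProcessor extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
                // invoked only when somebody sends Data to this actor;
                // between messages the actor consumes no CPU at all
                .match(Data.class, this::process)
                .build();
    }

    private void process(Data data) {
        // operate on the data
    }
}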
I just schedule a "WakeUp" message to be sent to the actor at a future time. Akka will deliver that message at the scheduled time, so the actor can handle it and continue processing. This avoids using sleep.
// schedule a WakeUpMessage to be sent to this actor after sleepTime,
// instead of blocking the thread with Thread.sleep()
ActorRef self = getSelf();
getContext().getSystem().scheduler().scheduleOnce(
        FiniteDuration.create(sleepTime.toMillis(), TimeUnit.MILLISECONDS),
        new Runnable() {
            @Override
            public void run() {
                self.tell(new WakeUpMessage(), ActorRef.noSender());
            }
        },
        getContext().getSystem().dispatcher());
I have multiple instances of a class that listen to a certain event.
@Inject
@Optional
private final void doSomething(@UIEventTopic(Events.A) Object object) {
//do something
}
My question is: if I use the synchronous method IEventBroker.send, will this method reliably wait until all of the listening objects are done? My tests indicate yes, but I would just like to make sure.
The JavaDoc for IEventBroker.send says:
Publish event synchronously (the method does not return until the
event is processed).
Internally the event broker uses the OSGi EventAdmin.sendEvent method, which says:
Initiate synchronous delivery of an event. This method does not return
to the caller until delivery of the event is completed.
So, yes, synchronous delivery is guaranteed.
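For example (a small sketch reusing the question's Events.A topic; the payload and method name are placeholders), once send() returns you can rely on every subscribed handler having completed, whereas post() would deliver the event asynchronously:

@Inject
private IEventBroker broker;

public void fireAndWait(Object payload) {
    // blocks until all doSomething(...) handlers have returned
    broker.send(Events.A, payload);
    // safe to assume here that every listener has finished processing the event
}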