We are using AggregatingReplyingKafkaTemplate to send messages and receive aggregated replies.
I want to increase the reply timeout, but the Kafka listener still seems to time out after roughly 30-40 seconds.
I have tried increasing the default timeout to 10 minutes (just for testing), but it did not help:
aggregatingReplyingKafkaTemplate.setDefaultReplyTimeout(Duration.ofSeconds(600)); //changing the default timeout
Here is the code where we send the message:
private final AggregatingReplyingKafkaTemplate<String, T, R> aggregatingReplyingKafkaTemplate;

public RequestReplyFuture<String, T, Collection<ConsumerRecord<String, R>>> sendAndReceive(String producerTopic, T record, Headers headers) {
    ProducerRecord<String, T> producerRecord = new ProducerRecord<>(producerTopic, record);
    addHeaders(producerRecord, headers);
    return this.aggregatingReplyingKafkaTemplate.sendAndReceive(producerRecord);
}
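(Side note: on spring-kafka 2.3 and later, the reply timeout can also be passed per call instead of relying on the template default; a minimal sketch of that variant, not our original code:)

// Sketch: per-call reply timeout (spring-kafka 2.3+), mirroring the 10-minute experiment above
public RequestReplyFuture<String, T, Collection<ConsumerRecord<String, R>>> sendAndReceive(String producerTopic, T record, Headers headers) {
    ProducerRecord<String, T> producerRecord = new ProducerRecord<>(producerTopic, record);
    addHeaders(producerRecord, headers);
    return this.aggregatingReplyingKafkaTemplate.sendAndReceive(producerRecord, Duration.ofMinutes(10));
}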
It turned out I was using a DeferredResult for the async request, and its timeout was causing my original HTTP request to time out. That HTTP request internally used the replying template to communicate with the downstream services.
I also increased the replying template's default reply timeout, as I mentioned above.
So it's working now.
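For completeness, here is a minimal sketch of the controller side with an explicit DeferredResult timeout; the mapping, request type, and service name below are placeholders, not the original code:

// Hypothetical controller sketch: give the DeferredResult its own timeout (10 minutes here)
// so the HTTP request does not expire before the aggregated Kafka reply arrives.
@PostMapping("/messages")
public DeferredResult<ResponseEntity<String>> send(@RequestBody String payload) {
    DeferredResult<ResponseEntity<String>> result = new DeferredResult<>(600_000L);
    kafkaRequestReplyService.sendAndReceive("request-topic", payload, new RecordHeaders())
            .addCallback(
                    reply -> result.setResult(ResponseEntity.ok(String.valueOf(reply.value()))),
                    ex -> result.setErrorResult(ex));
    return result;
}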
I'm trying to send delayed messages on RabbitMQ with Spring AMQP.
I'm defining MessageProperties like this:
MessageProperties delayedMessageProperties = new MessageProperties();
delayedMessageProperties.setDelay(45000);
I'm building the message that should be delivered after the delay like this:
org.springframework.amqp.core.Message amqpDelayedMessage = org.springframework.amqp.core.MessageBuilder
        .withBody(objectMapper.writeValueAsString(reversalMessage).getBytes())
        .andProperties(delayedMessageProperties)
        .build();
If I send this message with RabbitTemplate, there is no problem: the message is delivered after the defined delay.
rabbitTemplate.convertSendAndReceiveAsType("delay-exchange",delayQueue, amqpDelayedMessage, new ParameterizedTypeReference<org.springframework.amqp.core.Message>() {
});
But I need to send this message asynchronously, so that I don't block other messages in the system and get better throughput. However, if I use AsyncRabbitTemplate, the message is delivered immediately; there is no delay.
asyncRabbitTemplate.convertSendAndReceiveAsType("delay-exchange",delayQueue, amqpDelayedMessage, new ParameterizedTypeReference<org.springframework.amqp.core.Message>() {
});
How can I obtain the delay with AsyncRabbitTemplate?
This is probably a bug; please open an issue on GitHub.
The convertSendAndReceive() methods are not intended to send and receive raw Message objects.
In the case of the RabbitTemplate the conversion is skipped if the object is already a Message; there are some cases where this skip is not performed with the async template; please edit the question to show your template configuration.
However, since you are dealing with Message objects directly, don't use the convert... methods at all; simply use
public RabbitMessageFuture sendAndReceive(String exchange, String routingKey, Message message) {
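In other words, something along these lines (a sketch using the exchange and queue names from the question; the blocking get is only for illustration, exception handling omitted, and the exact callback API on the returned future depends on the Spring AMQP version):

// Send the prebuilt Message as-is so the x-delay property is preserved,
// then wait for (or react to) the reply via the returned future.
RabbitMessageFuture future =
        asyncRabbitTemplate.sendAndReceive("delay-exchange", delayQueue, amqpDelayedMessage);
Message reply = future.get(60, TimeUnit.SECONDS);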
I am using the Spring WebClient to make two calls in parallel.
One of the call results is passed back as a ResponseEntity, and the other result is inspected and then disregarded. Although the transactions are both successful, I see an IllegalReferenceCountException that occurs before any of the WebClient calls actually get executed.
What I see in my logging is that the container logs the exception, then my two HTTP requests get executed successfully, and one of these responses gets returned to the client.
If the shouldBackfill() function returns false, then I execute one HTTP request and return that response (and the IllegalReferenceCountException does not occur).
I was initially thinking that I should release the reference in the second response that I disregard.
Calling releaseBody() directly on the WebClient response (see https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/reactive/function/client/ClientResponse.html) does not help. I now assume that the container is detecting that the WebClient response I disregarded is in an illegal state, hence the error. But what I don't understand is that the actual request occurs AFTER the IllegalReferenceCountException gets logged.
Any ideas here on how to get around this? I am wondering if the exception is actually NOT any kind of leak.
The code looks like this:
fun execute(routeHttpRequest: RouteHttpRequest): Mono<ResponseEntity<String>> =
propertyRepository.getProperty(routeHttpRequest.propertyId.orDefault())
.flatMap {
val status = it.getOrElse { unknownStatus(routeHttpRequest.propertyId.orDefault()) }
val response1 = execute(routeHttpRequest, routingRepository.webClientFor(routeHttpRequest))
if (shouldBackfill(routeHttpRequest, status.type())) {
val response2 =
execute(routeHttpRequest, routingRepository.shadowOrBackfillWebClientFor(routeHttpRequest))
zip(response1, response2).map { response ->
compare(routeHttpRequest, response.t1, response.t2, status.type())
response.t1 // response.t2 is NOT returned here..
}
} else response1
}
// This function returns a wrapper around a Spring WebClient that makes an HTTP POST.
private fun execute(routeHttpRequest: RouteHttpRequest, client: Mono<MyWebClient>) =
client
.flatMap { dataService.execute(routeHttpRequest, it) }
.subscribeOn(Schedulers.elastic()) // TODO: consider a dedicated executor here?
private fun shouldBackfill(routeHttpRequest: RouteHttpRequest, migrationStatus: MigrationStatusType): Boolean {
... this logic returns true when we should execute 2 requests in parallel
}
Here's the exception and partial trace:
io.netty.util.IllegalReferenceCountException: refCnt: 0, decrement: 1
at io.netty.util.internal.ReferenceCountUpdater.toLiveRealRefCnt(ReferenceCountUpdater.java:74)
at io.netty.util.internal.ReferenceCountUpdater.release(ReferenceCountUpdater.java:138)
at io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100)
at io.netty.util.ReferenceCountUtil.release(ReferenceCountUtil.java:88)
Sorry for not posting the exact code. The fix: I was passing the incoming HTTP request's org.springframework.core.io.buffer.DataBuffer directly to the WebClient request body. This was intentional, because my application acts as a proxy service. The problem came up when I attempted to make two outbound WebClient calls in parallel: the container tried to release the underlying buffer twice, and the IllegalReferenceCountException occurred. My fix was to copy the DataBuffer's byte array into a new buffer before sending the request along to its destination.
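The copy itself can be as simple as the following sketch (Java rather than Kotlin here; DefaultDataBufferFactory.sharedInstance requires Spring 5.3+, and this helper is illustrative, not my exact code):

import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.core.io.buffer.DataBufferUtils;
import org.springframework.core.io.buffer.DefaultDataBufferFactory;

// Copy the incoming request buffer so each of the two outbound WebClient calls
// owns its own (non-reference-counted) buffer and the original is released exactly once.
static DataBuffer copyOf(DataBuffer incoming) {
    byte[] bytes = new byte[incoming.readableByteCount()];
    incoming.read(bytes);
    DataBufferUtils.release(incoming);
    return DefaultDataBufferFactory.sharedInstance.wrap(bytes);
}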
I use OWIN self-hosted WebAPI in a console application, and it uses HttpListener.
_owinApplication = WebApp.Start<Startup>(new StartOptions(baseUri));
How can I monitor the current active connections to my application at some interval?
Given the nature of HTTP, it's not really possible to monitor "active" connections, since that concept doesn't really exist in HTTP. A client sends a request to your server, and it either fails, or receives a response.
When you see sites report current active users, that number is usually a qualified guess, or perhaps they monitor the clients with the use of websockets, or some sort of ajax polling.
You can create your own DelegatingHandler, register it in the Web API pipeline, and monitor current connections using the overridden SendAsync method:
protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
{
// TODO: add this request to some static container (like ConcurrentDictionary)
// let the request to be processed
HttpResponseMessage response;
try
{
response = await base.SendAsync(request, cancellationToken);
}
finally
{
// TODO: remove the request from the static container registered above
}
// return the response
return response;
}
This way you can not only monitor current connection count, but also all request information, like URL, IP, etc.
I created custom middleware and it solved my problem.
I have an HTTP API for posting messages to a RabbitMQ broker, and I need to implement the request-response pattern in order to receive the responses from the server. So I act as a bridge between the clients and the server: I push the messages to the broker with a specific routing key, there is a consumer for those messages which publishes a message back as the response, and my API must consume that response for every request.
What I do is the following: for every HTTP session I create a temporary response queue (bound to the default exchange, with the queue name as its routing key). I then set the replyTo header of the message to the name of that response queue (where I will wait for the response) and also set the template's reply queue to that queue. Here is my code:
public void sendMessage(AbstractEvent objectToSend, final String routingKey) {
final Queue responseQueue = rabbitAdmin.declareQueue();
byte[] messageAsBytes = null;
try {
messageAsBytes = new ObjectMapper().writeValueAsBytes(objectToSend);
} catch (JsonProcessingException e) {
e.printStackTrace();
}
MessageProperties properties = new MessageProperties();
properties.setHeader("ContentType", MessageBodyFormat.JSON);
properties.setReplyTo(responseQueue.getName());
requestTemplate.setReplyQueue(responseQueue);
Message message = new Message(messageAsBytes, properties);
Message receivedMessage = (Message)requestTemplate.convertSendAndReceive(routingKey, message);
}
So what is the problem: the message is sent, it is consumed by the consumer, and its response is correctly sent to the right queue, but for some reason it is not picked up by the convertSendAndReceive method, and after the configured timeout my receivedMessage is null. I tried several things. I started to inspect the Spring code (which is, by the way, a real nightmare) and saw that if I don't declare the response queue, a temporary one is created for me and the replyTo header is set to its name (the same thing I do). The result was the same: receivedMessage was still null. After that I decided to use another template that uses the default exchange, because the responseQueue is bound to that exchange:
requestTemplate.send(routingKey, message);
Message receivedMessage = receivingTemplate.receive(responseQueue.getName());
The result was the same: the received message was still null.
The versions of spring-amqp and spring-rabbit are 1.2.1 and 1.2.0 respectively. I am sure I am missing something, but I don't know what it is, so if someone can help me I would be extremely grateful.
1. It's strange: RabbitTemplate uses doSendAndReceiveWithFixed if you provide requestTemplate.setReplyQueue(responseQueue), which does not appear to be what happens in your case.
2. To make it work with a fixed reply queue, you should configure a reply listener container:
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
container.setConnectionFactory(rabbitConnectionFactory);
container.setQueues(responseQueue);
// the RabbitTemplate itself acts as the MessageListener that receives the replies
container.setMessageListener(requestTemplate);
3. But the most important part here is correlation. RabbitTemplate.sendAndReceive populates the correlationId message property, but the consumer side has to deal with it too: it's not enough to just send the reply to the responseQueue; the reply message must carry the same correlationId property. See here: how to send response from consumer to producer to the particular request using Spring AMQP?
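For reference, the consumer side could look roughly like this (a sketch, assuming a recent Spring AMQP where correlationId is a String; in 1.x it was a byte[]):

// Copy correlationId from the request and publish the reply to the queue named in replyTo.
public void reply(Message request, byte[] replyBody, RabbitTemplate rabbitTemplate) {
    MessageProperties replyProps = new MessageProperties();
    replyProps.setCorrelationId(request.getMessageProperties().getCorrelationId());
    Message reply = new Message(replyBody, replyProps);
    // default exchange ("") with the replyTo name as routing key delivers straight to the reply queue
    rabbitTemplate.send("", request.getMessageProperties().getReplyTo(), reply);
}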
By the way, there is no reason to populate the Message manually: you can simply configure a Jackson2JsonMessageConverter on the RabbitTemplate and it will convert your objectToSend to JSON bytes automatically, with the appropriate headers.
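For example (a sketch):

// Let the template convert the domain object to JSON instead of building the Message by hand.
requestTemplate.setMessageConverter(new Jackson2JsonMessageConverter());
Object reply = requestTemplate.convertSendAndReceive(routingKey, objectToSend);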
I'm looking to increase the performance of a high-throughput producer that I'm writing against ActiveMQ, and according to this useAsyncSend will:
Forces the use of Async Sends which adds a massive performance boost;
but means that the send() method will return immediately whether the
message has been sent or not which could lead to message loss.
However I can't see it making any difference to my simple test case.
Using this very basic application:
const string QueueName = "....";
const string Uri = "....";
static readonly Stopwatch TotalRuntime = new Stopwatch();
static void Main(string[] args)
{
TotalRuntime.Start();
SendMessage();
Console.ReadLine();
}
static void SendMessage()
{
var session = CreateSession();
var destination = session.GetQueue(QueueName);
var producer = session.CreateProducer(destination);
Console.WriteLine("Ready to send 700 messages");
Console.ReadLine();
var body = new byte[600*1024];
Parallel.For(0, 700, i => SendMessage(producer, i, body, session));
}
static void SendMessage(IMessageProducer producer, int i, byte[] body, ISession session)
{
var message = session.CreateBytesMessage(body);
var sw = new Stopwatch();
sw.Start();
producer.Send(message);
sw.Stop();
Console.WriteLine("Running for {0}ms: Sent message {1} blocked for {2}ms",
TotalRuntime.ElapsedMilliseconds,
i,
sw.ElapsedMilliseconds);
}
static ISession CreateSession()
{
var connectionFactory = new ConnectionFactory(Uri)
{
AsyncSend = true,
CopyMessageOnSend = false
};
var connection = connectionFactory.CreateConnection();
connection.Start();
var session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge);
return session;
}
I get the following output:
Ready to send 700 messages
Running for 2430ms: Sent message 696 blocked for 12ms
Running for 4275ms: Sent message 348 blocked for 1858ms
Running for 5106ms: Sent message 609 blocked for 2689ms
Running for 5924ms: Sent message 1 blocked for 2535ms
Running for 6749ms: Sent message 88 blocked for 1860ms
Running for 7537ms: Sent message 610 blocked for 2429ms
Running for 8340ms: Sent message 175 blocked for 2451ms
Running for 9163ms: Sent message 89 blocked for 2413ms
.....
Which shows that each message takes about 800 ms to send and that the call to producer.Send() blocks for about two and a half seconds, even though the documentation says that
"send() method will return immediately"
Also, these numbers are basically the same if I either change the Parallel.For to a normal for loop or change AsyncSend = true to AlwaysSyncSend = true, so I don't believe the async switch is working at all...
Can anyone see what I'm missing here to make the send asynchronous?
After further testing:
According to the ANTS performance profiler, the vast majority of the runtime is spent waiting for synchronization. It appears that the issue is that the various transport classes block internally through monitors. In particular, I seem to get hung up on the MutexTransport's OneWay method, which only allows one thread to access it at a time.
It looks as though the call to Send blocks until the previous message has completed, which explains why my output shows that the first message blocked for 12ms while the next took 1858ms. I can get multiple transports by implementing a connection-per-message pattern, which improves matters and makes the message sends work in parallel, but it greatly increases the time to send a single message and uses up so many resources that it doesn't seem like the right solution.
I've retested all of this with 1.5.6 and haven't seen any difference.
As always, the best thing to do is update to the latest version (1.5.6 at the time of this writing). A send can block if the broker has producer flow control enabled and you've reached a queue size limit, although with async send this shouldn't happen unless you are sending with a producerWindowSize set. One good way to get help is to create a test case and submit it via a Jira issue to the NMS.ActiveMQ site so that we can look into it using your test code. There have been many fixes since 1.5.1, so I'd recommend giving the new version a try as it could already be a non-issue.