RocketMQ: MQBrokerException: CODE: 2 DESC: [TIMEOUT_CLEAN_QUEUE]

When I send messages to the broker, this exception occasionally occurs:
MQBrokerException: CODE: 2 DESC: [TIMEOUT_CLEAN_QUEUE]broker busy, start flow control for a while
This means the broker is too busy (when TPS > 15,000) to handle this many send requests.
What is the most likely cause: disk, CPU, or something else? How can I fix it?

There are several possible causes.
The root cause is that some send requests have waited in the queue for too long without a worker thread processing them, so RocketMQ triggers its fast-failure mechanism.
The causes are:
1. Too many threads are busy and message storing is so slow that queued requests time out.
2. The message-storing jobs themselves take a long time. This may be because:
2.1 Storing messages is slow, especially when SYNC_FLUSH is used.
2.2 Syncing messages to the slave takes long when SYNC_MASTER is used.

In /broker/src/main/java/org/apache/rocketmq/broker/latency/BrokerFastFailure.java you can see:
final long behind = System.currentTimeMillis() - rt.getCreateTimestamp();
if (behind >= this.brokerController.getBrokerConfig().getWaitTimeMillsInSendQueue()) {
if (this.brokerController.getSendThreadPoolQueue().remove(runnable)) {
rt.setStopRun(true);
rt.returnResponse(RemotingSysResponseCode.SYSTEM_BUSY, String.format("[TIMEOUT_CLEAN_QUEUE]broker busy, start flow control for a while, period in queue: %sms, size of queue: %d", behind, this.brokerController.getSendThreadPoolQueue().size()));
}
}
In common/src/main/java/org/apache/rocketmq/common/BrokerConfig.java, the getWaitTimeMillsInSendQueue() method returns:
public long getWaitTimeMillsInSendQueue() {
return waitTimeMillsInSendQueue;
}
The default value of waitTimeMillsInSendQueue is 200 (milliseconds), so you can simply increase it to let requests wait in the queue longer before the fast failure kicks in. But if you want to solve the problem completely, you should follow Jaskey's advice and review your code.
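As an illustration only (the 1000 ms value below is an assumption, not a recommendation): since broker.conf keys are mapped onto BrokerConfig fields by name, the threshold can be raised in the broker's configuration file:
# broker.conf: allow send requests to wait longer before [TIMEOUT_CLEAN_QUEUE] is triggered
waitTimeMillsInSendQueue=1000
The broker reads this file at startup, so the change takes effect after a restart (or after pushing the new value with the admin tooling).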

Related

How to send an exception to Sentry from Laravel Job only on final fail?

Configuration
I'm using Laravel 8 with the sentry/sentry-laravel package.
There is a Job that works just fine 99% of the time. It retries N times in case of any problems, due to:
public $backoff = 120;
public function retryUntil()
{
return now()->addHours(6);
}
And it simply calls some service:
public function handle()
{
// Service calls some external API
$service->doSomeWork(...);
}
Method doSomeWork sometimes throws an exception due to network problems, like Curl error: Operation timed out after 15001 milliseconds with 0 bytes received. This is fine thanks to the automatic retries; in most cases the next retry will succeed.
Problem
Every curl error is sent to Sentry. As an administrator I must check every alert, because this job is pretty important and I can't afford to miss an actually failed job. For example:
There is some network problem that is not resolved for an hour.
The application queues a Job.
Every 2 minutes the application sends a similar message to Sentry.
After the network problem is resolved, the job succeeds, so no attention is required.
But we are seeing dozens of errors that theoretically could be ignored. But what if there is an actual problem in that pile and I miss it?
Question
How can I make only the "final" job failure send a message to Sentry? I mean after 6 hours of failed retries: only then would I like to receive one alert.
What I tried
There is one workaround that kind of "works". We can replace Exception with SomeCustomException and add it to the \App\Exceptions\Handler::$dontReport array. In that case no "intermediate" messages are sent to Sentry.
But when the job finally fails, Laravel sends the standard ... job has been attempted too many times or run too long message without details of the actual error.

Azure ServiceBus Queue (Java): receive or complete messages in batch for better performance

I am trying to receive messages from a queue; I have experimented with different methods and was facing performance issues. Below are the metrics for each type of run:
Receive mode = peek and lock: 1000 messages took 2.5 minutes, as I had to complete each message one by one.
Receive mode = receive and delete: 1000 messages took an average of 1.5 minutes.
Receive mode = receive and delete (with prefetch count 100): 1000 messages took 3 seconds, but I ended up losing the 100 messages that were still in the buffer when execution ended.
Receive mode = peek and lock (with prefetch count 100): 1000 messages took 2 minutes, as I again had to complete each message. It would only have solved the problem if there were a way to complete them in batch.
Below is my code for reference:
ServiceBusSessionReceiverClient sessionReceiverClient = new ServiceBusClientBuilder()
.connectionString(System.getenv("QueueConnectionString"))
.sessionReceiver()
.maxAutoLockRenewDuration(Duration.ofMinutes(2))
.receiveMode(ServiceBusReceiveMode.PEEK_LOCK)
.queueName(queueName)
.buildClient();
ServiceBusReceiverClient receiverClient = sessionReceiverClient.acceptSession(System.getenv("QueueSessionName"));
ObjectMapper objectMapper = new ObjectMapper();
do {
receiverClient.receiveMessages((int) prefetchCount).stream().forEach(message -> {
try {
String str = message.getBody().toString();
final T dataDto = objectMapper.readValue(message.getBody().toString(), returnType);
dataDtoList.add(dataDto);
receiverClient.complete(message);
} catch (Exception e) {
AzFaUtil.getLogger().severe("Message processing failed. Error: " + e.getMessage() + e + "\n Payload: "
+ message);
}
});
} while (dataDtoList.size() < numberOfMessages);
receiverClient.close();
sessionReceiverClient.close();
Possible solutions that I can think of:
If there is a way to complete messages in batch instead of completing them one by one (see the sketch below, after the note).
If there is a way to requeue the messages sitting in the prefetch buffer back to the queue.
Note: this API needs to be synchronous. I only experimented with 1000 entries, but I am working with 30000 entries, so performance matters. Also, the queue is both session-enabled and partition-enabled.
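One rough workaround sketch for the one-by-one completion overhead: each complete() call is a network round trip, so dispatching the completions to a small thread pool lets those round trips overlap. This assumes the synchronous ServiceBusReceiverClient tolerates concurrent complete() calls (the Azure SDK documents its clients as thread-safe); completeInParallel and the pool size of 8 are hypothetical, and this is an untested illustration rather than a drop-in fix:
import com.azure.messaging.servicebus.ServiceBusReceivedMessage;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
// Complete a batch of already-received messages in parallel instead of one by one.
static void completeInParallel(ServiceBusReceiverClient receiverClient,
                               List<ServiceBusReceivedMessage> batch) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(8); // hypothetical pool size
    try {
        List<Future<?>> pending = new ArrayList<>();
        for (ServiceBusReceivedMessage message : batch) {
            // Each complete() is its own round trip; submitting them lets the waits overlap.
            pending.add(pool.submit(() -> receiverClient.complete(message)));
        }
        for (Future<?> f : pending) {
            f.get(); // surface any completion failure
        }
    } finally {
        pool.shutdown();
    }
}
You would collect the result of receiveMessages(...) into a list first, then call completeInParallel(receiverClient, batch). Whether this helps much on a single session receiver is something to measure, since all completions still travel over the same link.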
As per this issue, Microsoft has yet to test the performance of their Service Bus. As FIFO (First In, First Out) was a requirement for my message queue, I used JMS, and it performed almost 10x faster on average, but there is a drawback. Currently JMS doesn't support session-based queues, so I had to disable sessions, and then, to ensure FIFO, I also had to disable partitioning on the queue. This is a partial and temporary solution for better performance until either Microsoft improves the performance of their ServiceBusReceiverClient or enables sessions on JMS.
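To make the JMS route a little more concrete, here is a rough sketch of the kind of plain javax.jms consumer loop described above. The ConnectionFactory is assumed to come from whichever Service Bus JMS library you use (its construction is omitted here), drainQueue and the 5-second receive timeout are hypothetical, and the acknowledge mode may need adjusting for your setup:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import java.util.ArrayList;
import java.util.List;
// Drain up to maxMessages text messages from a (non-session, non-partitioned) queue.
static List<String> drainQueue(ConnectionFactory factory, String queueName, int maxMessages) throws Exception {
    List<String> bodies = new ArrayList<>();
    Connection connection = factory.createConnection();
    try {
        connection.start();
        // AUTO_ACKNOWLEDGE: the session acknowledges each message as it is received.
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
        while (bodies.size() < maxMessages) {
            Message message = consumer.receive(5_000); // wait up to 5 s for the next message
            if (message == null) {
                break; // queue drained (or receive timed out)
            }
            if (message instanceof TextMessage) {
                bodies.add(((TextMessage) message).getText());
            }
        }
    } finally {
        connection.close(); // also closes the session and consumer
    }
    return bodies;
}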

zeromq: ZMQ_CONFLATE==1 does not stop queues from saving old messages

With ZeroMQ and CPPZMQ 4.3.2, I want to drop old messages for all my sockets including
PAIR
Pub/Sub
REQ/REP
So I use m_socks[channel].setsockopt(ZMQ_CONFLATE, 1) on all my sockets before binding/connecting.
Test
However, when I made the following test, it seems that the old messages are still flushed out on each reconnection. In this test,
I use a thread to keep sending generated sinewave to a receiver thread
Every 10 seconds I double the sinewave's frequency
Then after 10 seconds I stop the process
Below is the pseudocode of the sender
// on sender end
auto thenSec = high_resolution_clock::now();
while(m_isRunning) {
// generate sinewave, double the frequency every 10s or so
auto nowSec = high_resolution_clock::now();
if (duration_cast<seconds>(nowSec - thenSec).count() > 10) {
m_sine.SetFreq(m_sine.GetFreq()*2);
thenSec = nowSec;
}
m_sine.Generate(audio);
// send to rendering thread
m_messenger.send("inproc://sound-ear.pair",
(const void*)(audio),
audio_size,
zmq::send_flags::dontwait
);
}
Note that I already use DONTWAIT to mitigate blocking.
On the receiver side I have a zmq::poller_event handler that simply receives the last message on event polling.
In the stop sequence I reset the sinewave frequency to its lowest value, say, 440Hz.
Expected
The expected behaviour would be:
If I stop both the sender and the receiver after 10s when the frequency is doubled,
and I restart both,
then I should see the sinewave reset to 440Hz.
Observed
But the observed behaviour is that the received sinewave is still of the doubled frequency after restarting the communication, i.e., 880Hz.
Question
Am I doing it wrong or should I use some kind of killswitch to force drop all messages in this case?
OK, I think I solved it myself. Kind of.
Actual solution
I finally realized that the behaviour I want is to flush all messages when I stop the rendering. According to the official doc (How can I flush all messages that are in the ZeroMQ socket queue?), this can only be achieved by:
setting the ZMQ_LINGER option to 0 on both the sender's and the receiver's sockets, meaning nothing is kept when those sockets are closed;
closing the sockets on both the sender and receiver ends, which also means tearing down the pollers and all references to the sockets and bootstrapping them again.
This seems like a lot of unnecessary work if I want to restart rendering my data right after the stop sequence, but I found no other way to solve this cleanly.
Initial effort
It seems to me that ZMQ_CONFLATE does not make a difference on PAIR sockets. I really have to tweak the high water marks on the sender and receiver ends using ZMQ_SNDHWM and ZMQ_RCVHWM.
However, I said "kind of solved" because tweaking the HWM is, in the end, not the optimal solution for a realtime application:
even with ZMQ_SNDHWM / ZMQ_RCVHWM set to the minimum of 1, we still have sizable latency in realtime terms;
also, the consumer thread can fall into an underrun situation, i.e., perceivable jitter, with the lowest HWM.
If I'm not doing anything wrong, I guess the optimal solution for my target scenario would still be shared memory. This is sad because I really enjoyed the simplicity of ZMQ's multicast messaging patterns and hate to deal with thread locking littered everywhere.

Does an OMNET++ / Veins simulation get very slow if both Vehicles and RSUs broadcast messages periodically?

Let me give a brief context first:
I have a scenario where the RSUs will broadcast a fixed message 'RSUmessage' about every TRSU seconds. I have implemented the following code for RSU broadcast (also, these fixed messages have Psid = -100 to differentiate them from others):
void TraCIDemoRSU11p::handleSelfMsg(cMessage* msg) {
if (WaveShortMessage* wsm = dynamic_cast<WaveShortMessage*>(msg)) {
if(wsm->getPsid()==-100){
sendDown(RSUmessage->dup());
scheduleAt(simTime() + trsu +uniform(0.02, 0.05), RSUmessage);
}
}
else {
BaseWaveApplLayer::handleSelfMsg(msg); // pass the original msg; wsm is nullptr here because the cast failed
}
}
A car can receive these messages from other cars as well as from RSUs. RSUs discard the messages received from cars. The cars receive multiple such messages, do some comparison work, and periodically broadcast a similar type of message, 'aggregatedMessage', every interval Tcar. aggregatedMessage also has Psid = -100, so that it can easily be differentiated from other messages.
I am scheduling the car events using self-messages (though I believe it could have been done inside handlePositionUpdate). The handleSelfMsg of a car is the following:
void TraCIDemo11p::handleSelfMsg(cMessage* msg) {
if (WaveShortMessage* wsm = dynamic_cast<WaveShortMessage*>(msg)) {
wsm->setSerial(wsm->getSerial() +1);
if (wsm->getPsid() == -100) {
sendDown(aggregatedMessage->dup());
//sendDelayedDown(aggregatedMessage->dup(), simTime()+uniform(0.1,0.5));
scheduleAt(simTime()+tcar+uniform(0.01, 0.05), aggregatedMessage);
}
//send this message on the service channel until the counter is 3 or higher.
//this code only runs when channel switching is enabled
else if (wsm->getSerial() >= 3) {
//stop service advertisements
stopService();
delete(wsm);
}
else {
scheduleAt(simTime()+1, wsm);
}
}
else {
BaseWaveApplLayer::handleSelfMsg(msg);
}
}
PROBLEM: With this setting, the simulation is very, very slow. I get about 50 simulated seconds in 5-6 hours or more in Express mode in the OMNeT++ GUI (number of RSUs: 64, number of vehicles: 40, on roughly a 1 km x 1 km map).
Also, I am referring to this post. The OP says that he got a speedup by removing the sending of a message after each RSU received one. In my case I cannot remove that, because I need to send out the broadcast messages after each interval.
Question: I think this slowness occurs because every node tries to sendDown messages at the beginning of each simulated second. Is it the case that OMNeT++ slows down when all vehicles and nodes schedule and send messages at the same time? (It makes sense that it slows down, but by how much?) There are only around 100 nodes overall in the simulation; surely it cannot be this slow.
What I tried: I tried using sendDelayedDown(wsm->dup(), simTime()+uniform(0.1,0.5)); to spread the sending of the messages throughout the first half of each simulated second. This seems to stop messages from piling up at the beginning of each simulated second and sped things up a bit, but not by much overall.
Can anybody please let me know whether this is normal behavior or whether I am doing something wrong.
Also I apologize for the long post. I could not explain my problem without giving the context.
It seems you are flooding your network with messages: every message from an RSU gets duplicated and transmitted again by every car which has received it. Hence, the computational time increases quadratically with the number of nodes (senders of messages) in your network, because every sent message has to be handled by every node in range to receive it. The limit of 3 transmissions per message does not seem to help much and, as the comment in the code indicates, is not used at all if there is no channel switching.
Therefore, if you cannot improve or change your code to simply send fewer messages, you have to live with that. Your little tweak of sending the messages in a delayed manner only distributes them over one second; it does not solve the problem of flooding.
However, there are still some hints you can follow to improve the performance of your simulation:
Compile in release mode: make MODE=release
Run your simulation in the terminal environment Cmdenv: ./run -u Cmdenv ...
If you really need to use the graphical environment, you should at least speed up the animations by using the slider in the upper part of the interface.
Removing the simtime-resolution parameter from the omnetpp.ini file solves the problem.
It seems the simulation kernel has an issue when the channel delay does not match the simulation-time resolution.
You can verify the solution by cloning the following repository. Note that you need a functional installation of the OMNeT++ framework. Specifically, I tested this fix in OMNeT++ 5.6.2.
https://github.com/Ryuuba/flooding

Measuring cross-process latency on Windows

I am adding latency measurement to a communication middleware I am building. The way I have it working is that I periodically send a probe message from my publishing apps. Subscribing apps receive this probe, cache it, and send an echo back at a time of their choosing, noting how long the message was kept "on hold". The publishing app receives these echoes and calculates latency as (now() - time_sent - time_on_hold) / 2.
This kind of works, but the numbers are vastly different (3x) when "time on hold" is greater than 0. I.e., if I echo the message back immediately I get around 50 us on my dev environment, and if I wait and then send the message back, the time jumps to 150 us (even though I discount whatever time the probe was on hold). I use QueryPerformanceCounter for all measurements.
This is all inside a single Windows 7 box. What am I missing here?
TIA.
A bit more information. I am using the following to measure time:
static long long timeFreq;
static struct Init
{
Init()
{
QueryPerformanceFrequency((LARGE_INTEGER*) &timeFreq);
}
} init;
long long OS::now()
{
long long result;
QueryPerformanceCounter((LARGE_INTEGER*)&result);
return result;
}
double OS::secondsDiff(long long ts1, long long ts2)
{
return (double) (ts1-ts2)/timeFreq;
}
On the publish side I do something like:
Probe p;
p.sentTimeStamp = OS::now();
send(p);
Response r = recv();
latency = OS::secondsDiff(OS::now(), r.sentTimeStamp) - r.secondsOnHoldOnReceiver;
And on the receiver side:
Probe p = recv();
long long received = OS::now();
sleep();
Response r;
r.sentTimeStamp = p.sentTimeStamp;
r.secondsOnHoldOnReceiver = OS::secondsDiff(OS::now(), received);
send(r);
OK, I have edited my answer to reflect your update. Sorry for the delay, but I didn't notice that you had elaborated on the question by creating an answer.
It seems that, functionally, you are doing nothing wrong.
I think that when you distribute your application outside of localhost conditions, the additional 100 us (if it is indeed roughly constant) will pale into insignificance compared to the average latency of a functioning network.
For the purposes of answering your question, I think there is a thread/interrupt scheduling issue on the server side that needs to be investigated, as you do not seem to be doing anything on the client that is not accounted for.
Try the following test scenario:
Send two probes, to clients A and B (all on localhost).
Send the probe to client B one second (or X/2 seconds) after you send the probe to client A.
Ensure that client A waits two seconds (or X seconds) and client B waits one second (or X/2 seconds).
The idea is that, hopefully, both clients will send back their probe answers at roughly the same time, and both after a sleep/wait (performing the action that exposes the problem). The objective is to try to get one of the clients' responses to "wake up" the publisher, to see whether the next client's answer is then processed immediately.
If one of these returned probes does not show the anomaly (most likely the second response), it could point to the publisher thread waking from a sleep cycle (on receiving the first response) and being immediately available to process the second response.
Again, if it turns out that the 100 us delay is roughly constant, it will be within ±10% of 1 ms, which is a timeframe appropriate for real-world network conditions.
