from https://learn.microsoft.com/en-us/windows/desktop/fileio/i-o-completion-ports:
Please note that while the packets are queued in FIFO order they may be dequeued in a different order.
Is this not guaranteed even in the case of a single queuing thread and a single dequeuing thread?
Is there any further information on the conditions under which packets can be dequeued in a different order?
In Raft, the leader
receives requests,
appends log entries,
sends RPCs,
applies entries to the state machine,
and finally responds to the clients.
This process takes some time, so how should it deal with the next requests? Refuse them?
The point of Raft is that all of the participants that are still working agree on what the state of the system is (or at least they should do once they have time to find out what the total consensus is). This means that they all agree on what messages have been received, and in what order. This also means that they must all get the same answer when they compute the consequences of receiving those messages. So the messages must be processed sequentially, or if they are processed in parallel, the participants have to use transactions and locking and so on so that the effect is as if the messages were processed sequentially. Under load, responses can be delayed, or some other sort of back-pressure used to get the senders to slow down, but you can't just drop messages because you are too busy, unless you do it in a way that ensures that all participants take the same decisions about this.
Most Raft implementations use pipelining: you can have several log entries in flight from master to slave.
But the master only responds to a client write request with success after it has received an ACK from a quorum of slaves for a log offset equal to or greater than the log offset that client's request was written at.
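To make that concrete, here is a minimal Java sketch (not from any real Raft implementation; names such as matchIndex and pendingClients are mine) of a leader that keeps accepting and appending requests while earlier entries are still being replicated, and completes a client request only once a quorum has acknowledged an offset at or beyond it:

import java.util.*;
import java.util.concurrent.*;

// Sketch only: a leader that appends new entries immediately (pipelining) and
// completes each client write once a quorum of followers has acknowledged a
// log offset >= the offset the write was stored at.
class LeaderSketch {
    private final int clusterSize;                        // leader + followers
    private final List<String> log = new ArrayList<>();   // in-memory log
    private final Map<String, Long> matchIndex = new HashMap<>();  // follower -> highest acked offset
    private final NavigableMap<Long, CompletableFuture<Void>> pendingClients = new TreeMap<>();

    LeaderSketch(int clusterSize, Collection<String> followers) {
        this.clusterSize = clusterSize;
        for (String f : followers) matchIndex.put(f, -1L);
    }

    // Client request: append right away and return a future; do not block on
    // replication, so the next request can be accepted immediately.
    synchronized CompletableFuture<Void> handleClientWrite(String command) {
        log.add(command);
        long offset = log.size() - 1;
        CompletableFuture<Void> done = new CompletableFuture<>();
        pendingClients.put(offset, done);
        // (send AppendEntries RPCs to the followers here, possibly several in flight)
        return done;
    }

    // Called when a follower acknowledges everything up to ackedOffset.
    synchronized void onFollowerAck(String follower, long ackedOffset) {
        matchIndex.put(follower, Math.max(matchIndex.get(follower), ackedOffset));
        long commitIndex = quorumCommitIndex();
        // Respond to every client whose entry is now replicated on a quorum.
        Iterator<Map.Entry<Long, CompletableFuture<Void>>> it =
                pendingClients.headMap(commitIndex, true).entrySet().iterator();
        while (it.hasNext()) {
            it.next().getValue().complete(null);
            it.remove();
        }
    }

    // Highest offset stored on a majority (the leader's own log counts as one vote).
    private long quorumCommitIndex() {
        List<Long> acks = new ArrayList<>(matchIndex.values());
        acks.add((long) log.size() - 1);
        acks.sort(Collections.reverseOrder());
        return acks.get(clusterSize / 2);
    }
}

Note that handleClientWrite never blocks on replication; if too many requests pile up, back-pressure has to be applied on top of this, for example by bounding pendingClients and delaying new writes.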
I was using kafka-storm to connect Kafka and Storm. I have 3 servers running ZooKeeper, Kafka and Storm. There is a topic 'test' in Kafka that has 9 partitions.
In the Storm topology, the number of KafkaSpout executors is 9 and, by default, the number of tasks should be 9 as well. The 'extract' bolt is the only bolt connected to the KafkaSpout (the 'log' spout).
From the UI, there is a huge rate of failure in the spout. However, the number of executed messages in the bolt = the number of emitted messages - the number of failed messages in the bolt. This equation almost holds while the failure count is still zero at the beginning.
Based on my understanding, this means that the bolt did receive the messages from the spout, but the ack signals got stuck in flight. That's the reason why the number of acks in the spout is so small.
This problem might be solved by increasing the timeout and the spout's max pending message count. But that will cause more memory usage, and I cannot increase it indefinitely.
I was wondering if there is a way to force Storm to ignore the acks in some spouts/bolts, so that it does not wait for those signals until they time out. This should increase the throughput significantly, but without guaranteed message processing.
If you set the number of ackers to 0, then Storm will automatically ack every tuple.
config.setNumAckers(0);
Please note that the UI only samples and shows 5% of the data flow,
unless you set
config.setStatsSampleRate(1.0d);
Try increasing the bolt's timeout and reducing the value of topology.max.spout.pending.
Also, make sure the spout's nextTuple() method is non-blocking and optimized.
I would also recommend profiling the code; maybe your Storm queues are filling up and you need to increase their sizes.
config.put(Config.TOPOLOGY_TRANSFER_BUFFER_SIZE,32);
config.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE,16384);
config.put(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE,16384);
Your capacity numbers are a bit high, leading me to believe that you're really maximizing the use of system resources (CPU, memory). In other words, the system seems to be bogged down a bit and that's probably why tuples are timing out. You might try using the topology.max.spout.pending config property to limit the number of inflight tuples from the spout. If you can reduce the number just enough, the topology should be able to efficiently handle the load without tuples timing out.
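For example, a minimal sketch of doing that when submitting the topology (the limit of 500 and the topology name are placeholders, and builder is assumed to be your existing TopologyBuilder):

Config conf = new Config();
// Cap the number of un-acked tuples each spout may have in flight; the spout
// stops emitting new tuples until some of the pending ones are acked or failed.
conf.setMaxSpoutPending(500);
StormSubmitter.submitTopology("my-topology", conf, builder.createTopology());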
I'm using JmsSpout and BaseBasicBolt.
All bolts ack, but a lot (about half) of the spout's tuples fail.
What could the reason be?
Or how can I log why the messages fail? The JmsSpout just prints which message failed, without any error info.
If you do not actively fail() tuples in your bolts, the only reason for failing tuples is a timeout. The default timeout in Storm is 30 seconds (you can configure it via TOPOLOGY_MESSAGE_TIMEOUT_SECS). When a tuple gets emitted, Storm waits for the timeout duration to receive an ack. If no ack is received within this duration, Storm fails the tuple.
Increasing the timeout can fix the problem (you should set the timeout to a larger value than your expected processing latency).
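For example, assuming your end-to-end processing normally finishes well under a minute, a sketch of raising the timeout to 60 seconds:

Config conf = new Config();
// Tuples that are not fully acked within 60 seconds are failed
// (and replayed, if the spout reads from a reliable source).
conf.setMessageTimeoutSecs(60);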
If your expected processing latency is already lower than the timeout value, it indicates that you have a bottleneck in your topology. That is, one (or multiple) operators cannot process incoming tuples fast enough. Therefore, incoming tuples get buffered in input queues, increasing their latency as the queues grow over time. You need to increase the parallelism for those bottleneck operators to resolve the issue.
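For example, if a bolt named 'extract' were the bottleneck, a sketch of raising its parallelism when wiring the topology (the component names, classes, and counts here are placeholders, not taken from the question):

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("log", new KafkaSpout(spoutConfig), 9);
// Run the slow bolt with 18 executors instead of 9 so it can drain its input queue.
builder.setBolt("extract", new ExtractBolt(), 18)
       .shuffleGrouping("log");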
If it's linked to another bolt, but no instances of the next bolt are available for a while, how long will it hang around? Indefinitely? Long enough?
What about when many tuples are waiting because there is a line or queue for the next available bolt? Will they pile up? Will bad things happen if too many get backed up?
By default, tuples will time out 30 seconds after being emitted; you can change this value (topology.message.timeout.secs), but unless you know what you are doing, don't.
Failed and timed-out tuples will be replayed by the spout if the spout is reading from a reliable data source (e.g. Kafka); that is, Storm has guaranteed message processing. If you are coding your own spouts, you might want to dig deeper into this.
You can tell you are having tuple timeouts in the Storm UI when tuples are failing on the spout but not on the bolts.
You don't want tuples to time out inside your topology (for example, there is a performance penalty on Kafka for not reading sequentially). You should adjust the capacity of your topology to process tuples (that is, tweak the bolt parallelism by changing the number of executors) and set the parameter topology.max.spout.pending to a reasonably conservative value.
Increasing the topology.message.timeout.secs parameter is no real solution, because sooner or later, if the capacity of your topology is not enough, the tuples will start to fail.
topology.max.spout.pending is the maximum number of tuples that can be pending. The spout will emit more tuples as long as the number of tuples not fully processed is less than the given value. Note that topology.max.spout.pending is per spout (each spout has its own internal counter and keeps track of the tuples that are not fully processed).
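A short sketch of setting it, both topology-wide and overridden for one particular spout (names, classes, and numbers are placeholders):

Config conf = new Config();
// Every spout may have at most 1000 tuples pending (not yet fully processed).
conf.setMaxSpoutPending(1000);

// Override the limit for a single spout when declaring it.
builder.setSpout("log", new KafkaSpout(spoutConfig), 9)
       .addConfiguration(Config.TOPOLOGY_MAX_SPOUT_PENDING, 200);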
There is a deserialize queue for buffering incoming tuples. If things hang for long enough, the queue fills up, and tuples will be lost if you don't use acking to make sure they are resent.
Storm just drops tuples that are not fully processed before the timeout (the default is 30 seconds).
After that, Storm calls the fail(Object msgId) method of the spout. If you want to replay the failed tuples, you should implement this method. You need to keep the tuples in memory, or in another reliable storage system such as Kafka, in order to replay them.
If you do not implement the fail(Object msgId) method, Storm just drops them.
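A minimal sketch of such a spout against the org.apache.storm 1.x API (the in-memory queue here stands in for whatever source actually feeds the spout):

import java.util.*;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class ReplayingSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private final Queue<String> pendingMessages = new ArrayDeque<>(); // fed from your real source
    private final Map<UUID, String> inFlight = new HashMap<>();       // msgId -> message, kept for replay

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        String msg = pendingMessages.poll();
        if (msg == null) return;                 // keep nextTuple() non-blocking
        UUID msgId = UUID.randomUUID();
        inFlight.put(msgId, msg);                // remember it until it is acked
        collector.emit(new Values(msg), msgId);  // emitting with a msgId enables ack()/fail() callbacks
    }

    @Override
    public void ack(Object msgId) {
        inFlight.remove(msgId);                  // fully processed, forget it
    }

    @Override
    public void fail(Object msgId) {
        String msg = inFlight.remove(msgId);         // timed out or explicitly failed downstream
        if (msg != null) pendingMessages.offer(msg); // queue it again so nextTuple() replays it
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("message"));
    }
}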
Reference: https://storm.apache.org/documentation/Guaranteeing-message-processing.html
I want to understand how ActiveMQ's prefetch limit works. Are all the messages sent in one burst? What happens if there are concurrent consumers?
What is the difference between a prefetch limit of 0 and 1?
Read the link recommended by @Tim Bish -- the quotes I offer are from that page.
So ActiveMQ uses a prefetch limit on how many messages can be streamed to a consumer at any point in time. Once the prefetch limit is reached, no more messages are dispatched to the consumer until the consumer starts sending back acknowledgements of messages (to indicate that the message has been processed). The actual prefetch limit value can be specified on a per consumer basis.
Specifically on the 0 versus 1 prefetch limit difference:
If you have very few messages and each message takes a very long time to process you might want to set the prefetch value to 1 so that a consumer is given one message at a time. Specifying a prefetch limit of zero means the consumer will poll for more messages, one at a time, instead of the message being pushed to the consumer.
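For completeness, here is a sketch of setting the prefetch in ActiveMQ's Java (JMS) client, either through the connection factory's prefetch policy or per destination (the broker URL and queue name are placeholders):

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQPrefetchPolicy;

public class PrefetchExample {
    public static Connection connectWithQueuePrefetchOfOne() throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder broker URL
        ActiveMQPrefetchPolicy policy = new ActiveMQPrefetchPolicy();
        policy.setQueuePrefetch(1);          // push the consumer one message at a time
        factory.setPrefetchPolicy(policy);
        return factory.createConnection();
    }
}

A prefetch of 0 can also be requested per destination with a destination option, e.g. session.createQueue("TEST.QUEUE?consumer.prefetchSize=0"), which makes the consumer poll for messages instead of having them pushed.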