Many producers, single consumer: fair job scheduling in Go

I have multiple producers that stage objects (jobs) for processing and a single consumer that takes objects one by one. I need to design a sort of scheduler in Go.
Scheduling is asynchronous, i.e. each producer works in a separate goroutine.
The scheduler interface should be idiomatic Go (I'm new to Go).
A producer can remove or replace its staged object (if not yet consumed) with zero or minimal loss of its position in the queue. If a producer misses its slot because it cancelled and then restaged an object, it should still keep the privilege to stage again as early as possible, before the end of the current round.
"Fair" scheduling between producers.
Customizable multi-level weighting/prioritization
I'd like some hints and examples on the right design for such a scheduler.
I feel I need every producer to wait for a token in a channel, then write (or not write) an object to a shared consumer channel, then dispose of the token so it is routed to the next producer. Still, I'm not sure this is the best approach. Besides, it takes three sequential synchronous operations per producer, so I'm afraid I'll hit performance pitfalls because the token travels too slowly between producers. Also, three steps for one operation is probably not the Go way.
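For what it's worth, here is a minimal sketch of one possible design in plain Go (not necessarily the best one). Instead of passing a token through three synchronous steps, it inverts control: each producer owns a single-slot staging channel it can fill, drain or replace at any time, and a scheduler goroutine visits the producers in round-robin order each round and forwards whatever is staged to the single consumer. The Job type, timings and producer count are made up for illustration.

package main

import (
    "fmt"
    "time"
)

type Job struct {
    Producer int
    Payload  string
}

func main() {
    const numProducers = 3
    consumer := make(chan Job)

    // One single-slot staging channel per producer. Staging is a non-blocking
    // send; replacing is a drain followed by a fresh send. A producer's place
    // in the round is its index, so replacing never costs it its position.
    staged := make([]chan Job, numProducers)
    for i := range staged {
        staged[i] = make(chan Job, 1)
    }

    // Scheduler: each round, visit every producer once, in a fixed order.
    // If a producer's slot happens to be empty, it simply skips this round.
    go func() {
        for {
            for i := 0; i < numProducers; i++ {
                select {
                case job := <-staged[i]:
                    consumer <- job
                default: // nothing staged right now, move on
                }
            }
            time.Sleep(time.Millisecond) // avoid spinning when everyone is idle
        }
    }()

    // Producers stage, and sometimes replace, their jobs.
    for i := 0; i < numProducers; i++ {
        go func(id int) {
            for n := 0; ; n++ {
                job := Job{Producer: id, Payload: fmt.Sprintf("job-%d", n)}
                select {
                case staged[id] <- job: // slot was free
                default: // slot still occupied: replace the old job
                    select {
                    case <-staged[id]:
                    default:
                    }
                    staged[id] <- job
                }
                time.Sleep(10 * time.Millisecond)
            }
        }(i)
    }

    // The single consumer takes jobs one by one.
    for job := range consumer {
        fmt.Printf("consumed %s from producer %d\n", job.Payload, job.Producer)
    }
}

Multi-level weighting could be layered on top of this by visiting higher-priority producers more than once per round; this sketch only covers the fair round-robin and replace-without-losing-position requirements.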

Related

Golang: How to tell whether producer or consumer is slower when communicating via buffered channels?

I have an app in Golang with a pipeline setup where each component performs some work, then passes its results to the next component via a buffered channel; that component performs some work on its input and passes its results to yet another component via another buffered channel, and so on. For example:
C1 -> C2 -> C3 -> ...
where C1, C2, C3 are components in the pipeline and each "->" is a buffered channel.
In Golang buffered channels are great because they force a fast producer to slow down to match its downstream consumer (or a fast consumer to slow down to match its upstream producer). So, like an assembly line, my pipeline moves along only as fast as the slowest component in that pipeline.
The problem is I want to figure out which component in my pipeline is the slowest one so I can focus on improving that component in order to make the whole pipeline faster.
The way that Golang forces a fast producer or a fast consumer to slow down is by blocking the producer when it tries to send to a buffered channel that is full, or when a consumer tries to consume from a channel that is empty. Like this:
outputChan <- result // producer would block here when sending to full channel
input := <- inputChan // consumer would block here when consuming from empty channel
This makes it hard to tell which one, the producer or the consumer, is blocking the most, and thus which is the slowest component in the pipeline, since I cannot tell how long it is blocked for. The one that is blocked the most is the fastest component, and the one that is blocked the least (or not blocked at all) is the slowest component.
I can add code like this just before the read or write to channel to tell whether it would block:
// for producer
if len(outputChan) == cap(outputChan) {
    producerBlockingCount++
}
outputChan <- result

// for consumer
if len(inputChan) == 0 {
    consumerBlockingCount++
}
input := <-inputChan
However, that would only tell me the number of times it would block, not the total amount of time it is blocked. Not to mention the TOCTOU issue: the check reflects a single point in time, and the state could change immediately after it, rendering the check incorrect or misleading.
Anybody who has ever been to a casino knows that it's not the number of times that you win or lose that matters, it's the total amount of money that you win or lose that really matters. I can lose 10 hands at $10 each (a total of $100 lost) and then win a single hand of $150, and I still come out ahead.
Likewise, it's not the number of times that a producer or consumer is blocked that's meaningful. It's the total amount of time that a producer or consumer is blocked that's the determining factor whether it's the slowest component or not.
But I cannot think of any way to determine the total amount of time that something is blocked while reading from or writing to a buffered channel. Or my google-fu isn't good enough. Does anyone have any bright ideas?
There are several solutions that spring to mind.
1. stopwatch
The least invasive and most obvious is to just note the time, before and after, each read or write. Log it, sum it, and report on total I/O delay. Similarly report on elapsed processing time.
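Roughly, a per-stage stopwatch could look like the sketch below: a minimal sketch, assuming a hypothetical per-item process function and int-valued channels. Each stage accumulates how long it spends waiting to receive, doing its own work, and waiting to send; the stage reporting the smallest total channel wait is your bottleneck.

package pipeline

import (
    "log"
    "time"
)

// instrumentedStage wraps a hypothetical per-item process function and
// accumulates how long the stage spends waiting on its input channel,
// doing its own work, and waiting on its output channel.
func instrumentedStage(name string, in <-chan int, out chan<- int, process func(int) int) {
    var recvWait, workTime, sendWait time.Duration
    for {
        t0 := time.Now()
        v, ok := <-in // blocks while upstream is slower
        recvWait += time.Since(t0)
        if !ok {
            break
        }

        t1 := time.Now()
        r := process(v) // the stage's own work
        workTime += time.Since(t1)

        t2 := time.Now()
        out <- r // blocks while downstream is slower
        sendWait += time.Since(t2)
    }
    close(out)
    log.Printf("%s: recv wait %v, work %v, send wait %v", name, recvWait, workTime, sendWait)
}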
2. benchmark
Do a synthetic bench where you have each stage operate on a million identical inputs, producing a million identical outputs. Or do a "system test" where you wiretap the messages that flowed through production, write them to log files, and replay the relevant log messages to each of your various pipeline stages, measuring elapsed times. Due to the replay, there will be no I/O throttling.
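For instance, a single stage could be benchmarked in isolation with Go's testing package. The stage2 function and the sample input below are just stand-ins for your real stage and for messages replayed from a log.

package pipeline

import "testing"

// Stand-ins for a real stage function and a representative input; replace
// them with your own stage and with messages captured from production.
func stage2(msg []byte) []byte { return msg }

var sample = []byte("one representative message")

// BenchmarkStage2 measures the stage with no channels involved, so there is
// no I/O throttling: b.N iterations give the raw per-message processing cost.
// Compare stages with `go test -bench .`.
func BenchmarkStage2(b *testing.B) {
    for i := 0; i < b.N; i++ {
        stage2(sample)
    }
}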
3. pub/sub
Re-architect to use a higher-overhead comms infrastructure, such as Kafka / 0mq / RabbitMQ. Change the number of nodes participating in stage-1 processing, stage-2, and so on. The idea is to overwhelm the stage currently under study, with no idle cycles, so you can measure its transactions-per-second throughput when saturated.
Alternatively, just distribute each stage to its own node and measure {user, sys, idle} times during normal system behavior.

Kafka: is it better to have a lot of small messages or fewer, but bigger ones?

There is a microservice which receives batches of messages from the outside and pushes them to Kafka. Each message is sent separately, so for each batch I have around 1000 messages of 100 bytes each. It seems like the messages take much more space internally, because the free space on the disk is going down much faster than I expected.
I'm thinking about changing the producer logic so that it puts the whole batch in one message (the consumer will then split it by itself). But I haven't found any information about space or performance issues with many small messages, nor any guidelines about the balance between size and count. And I don't know Kafka well enough to draw my own conclusion.
Thank you.
The producer will, by itself, batch messages that are destined to the same partition, in order to avoid unnecessary calls.
The producer does this thanks to its background threads, grouping several messages into a single batch before sending it to each partition.
If you also set compression on the producer side, it will also compress the messages (GZip, LZ4 and Snappy are the valid codecs) before sending them over the wire. This property can also be set on the broker side (so the messages are sent uncompressed by the producer and compressed by the broker).
It depends on your network capacity whether you prefer a slower producer (as the compression will slow it down) or a bigger load on the wire. Note that setting a high compression level on big messages may significantly affect your overall performance.
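For reference, the standard producer settings that control this batching and compression are batch.size, linger.ms and compression.type; how they are passed depends on your client library, so the snippet below is purely illustrative.

package kafkaexample

// Illustrative values only; tune them for your own batch sizes and latency
// budget, and pass them through whatever client library you use.
var producerConfig = map[string]string{
    "batch.size":       "65536", // max bytes buffered per partition before a send
    "linger.ms":        "10",    // wait up to 10 ms for a batch to fill before sending
    "compression.type": "lz4",   // none, gzip, snappy, lz4 or zstd
}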
Anyway, I believe the big/small message problem hurts the consumer side a lot more. Sending messages to Kafka is easy and fast (the default behaviour is async, so the producer won't be too busy), but on the consumer side you'll have to look at the way you are processing the messages:
One Consumer-Worker
Here you couple consuming with processing. This is the simplest way: the consumer runs its own thread, reads a Kafka message and processes it, then continues the loop.
One Consumer - Many workers
Here you decouple consuming and processing. In most cases, reading from Kafka will be faster than the time you need to process the message; it is just physics. In this approach, one consumer feeds many separate worker threads that share the processing load.
More info about this here, just above the Constructors area.
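A rough sketch of this shape, using only Go's standard library; pollNext and handle are hypothetical stand-ins for your Kafka poll loop and your per-message processing.

package main

import "sync"

type Message struct{ Value []byte }

func pollNext() (Message, bool) { /* read from Kafka here */ return Message{}, false }
func handle(m Message)          { /* process the message here */ }

func main() {
    const numWorkers = 8
    work := make(chan Message, numWorkers)

    var wg sync.WaitGroup
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for m := range work {
                handle(m) // slow processing happens here, off the poll loop
            }
        }()
    }

    // The single consumer goroutine keeps polling (so it never misses the
    // poll interval) and only hands messages off to the workers.
    for {
        m, ok := pollNext()
        if !ok {
            break
        }
        work <- m
    }
    close(work)
    wg.Wait()
}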
Why do I explain this? Well, if your messages are too big and you choose the first option, your consumer may not call poll() within the timeout interval, so it will rebalance continuously. If your messages are big (and take some time to be processed), it's better to implement the second option, as the consumer will continue on its own way, calling poll() without falling into rebalances.
If the messages are too big and too many, you may have to start thinking about different structures that can buffer the messages in your memory. Pools, deques and queues, for example, are different options to accomplish this.
You may also increase the poll timeout interval. This may hide dead consumers from you, so I don't really recommend it.
So my answer would be: it depends, basically, on your network capacity, your required latency and your processing capacity. If you are able to process big messages as fast as smaller ones, then I wouldn't care much.
Maybe if you need to filter and reprocess older messages I'd recommend partitioning the topics and sending smaller messages, but that's only one use case.

Achieve concurrency in Kafka consumers

We are working on parallelising our Kafka consumers to process more records and handle peak load. One thing we are already doing is spinning up as many consumers as there are partitions within the same consumer group.
Our consumer makes an API call which is synchronous as of now. We felt that making this API call asynchronous would let our consumer handle more load. Hence, we are trying to make the API call asynchronous and committing the offset in its response. However, we are seeing an issue with this:
By making the API call asynchronous, we may get the response for the last record first, while none of the previous records' API calls have been initiated or completed by then. If we commit the offset as soon as we receive the response for the last record, the committed offset moves to that last record. In the meantime, if the consumer restarts or a partition rebalances, we will not receive any records before the last committed offset. With this, we will miss the unprocessed records.
As of now we already have 25 partitions. We would like to understand whether anyone has achieved parallelism without increasing the partitions, or whether increasing the partitions is the only way to achieve it (to avoid offset issues).
First, you need to decouple (if only at first) the reading of the messages from the processing of these messages. Next look at how many concurrent calls you can make to your API as it doesn't make any sense to call it more frequently than the server can handle, asynchronously or not. If the number of concurrent API calls is roughly equal to the number of partitions you have in your topic, then it doesn't make sense to call the API asynchronously.
If the number of partitions is significantly less than the max number of possible concurrent API calls, then you have a few choices. You could try to make the max number of concurrent API calls with fewer threads (one per consumer) by calling the API asynchronously as you suggest, or you can create more threads and make your calls synchronously. Of course, then you get into the problem of how your consumers can hand their work off to a greater number of shared threads, but that's exactly what streaming execution platforms like Flink or Storm do for you. Streaming platforms (like Flink) that offer checkpoint processing can also handle your problem of how to handle offset commits when messages are processed out of order. You could roll your own checkpoint processing and your own shared thread management, but you'd have to really want to avoid using a streaming execution platform.
Finally, you might have more consumers than max possible concurrent API calls, but then I'd suggest that you just have fewer consumers and share partitions, not API calling threads.
And, of course, you can always change the number of your topic partitions to make your preferred option above more feasible.
Either way, to answer your specific question, you want to look at how Flink does checkpoint processing with Kafka offset commits. To oversimplify (because I don't think you want to roll your own), the Kafka consumers have to remember not only the offsets they just committed, but also hold on to the previously committed offsets, and that defines a block of messages flowing through your application. Either that block of messages is processed all the way through in its entirety, or you need to roll back the processing state of each thread to the point where the last message in the previous block was processed. Again, that's a major oversimplification, but that's kinda how it's done.
You should look at Kafka batch processing. In a nutshell: you can set up a huge batch.size with a small number of partitions (or even a single one). Once the whole batch of messages has been consumed on the consumer side (i.e. into RAM), you can parallelize those messages in any way you want.
I would really like to share links, but there are too many of them to list here.
UPDATE
In terms of committing offsets, you can do this once for the whole batch.
In general, Kafka doesn't reach its target performance by piling on partitions, but rather by relying on batch processing.
I've already seen a lot of projects suffering from partition scaling (you may see issues later, during rebalancing for example). The rule of thumb: look at every available batch setting first.
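As a rough sketch of that batch-oriented flow: consume a whole batch, fan its records out to goroutines in memory, wait for all of them, and only then commit once for the entire batch. fetchBatch, process and commitThrough are hypothetical stand-ins for your Kafka client calls.

package main

import "sync"

type Record struct {
    Offset int64
    Value  []byte
}

func fetchBatch() []Record       { /* poll a large batch from Kafka */ return nil }
func process(r Record)           { /* per-record work */ }
func commitThrough(offset int64) { /* commit the batch's last offset */ }

func main() {
    for {
        batch := fetchBatch()
        if len(batch) == 0 {
            break
        }

        var wg sync.WaitGroup
        for _, r := range batch {
            wg.Add(1)
            go func(r Record) {
                defer wg.Done()
                process(r)
            }(r)
        }
        wg.Wait() // every record in the batch is done...

        commitThrough(batch[len(batch)-1].Offset) // ...so one commit is safe
    }
}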

Spring Cloud Stream: Proper handling for StreamListener when it takes a long time to process the message

When a StreamListener takes a long time (longer than max.poll.interval.ms) to process a message, that particular consumer is occupied and new messages will be assigned to other partitions. Once the time exceeds max.poll.interval.ms, a rebalance happens and the same situation happens to another consumer. So this message circulates around all the partitions and keeps hogging resources.
However, this situation doesn't happen very often; only a few messages somehow take such a long time to process, and it's uncontrollable.
Can we commit the offset and throw the message to a DLQ after a few rounds of rebalancing? If yes, how can we do that? If no, what's the proper handling for this kind of situation?
Increasing max.poll.interval.ms will have no impact on performance (except it will take longer to detect a consumer that is really dead).
Going through a rebalance each time you process this "bad" record is much more damaging to performance.
You can, however, do what you want with a custom SeekToCurrentErrorHandler together with a recoverer such as the DeadLetterPublishingRecoverer. You would also need a rebalance listener to count the rebalances and some mechanism to share the state from the error handler across instances (the standard one only keeps state in memory).
Quite complicated, I think.

Two processes single producer / single consumer in Windows. What is better Mutex, Event or Semaphore

I could use either primitive to make it work, but I wonder, from a performance perspective, which one is more adequate for such a scenario.
I need to synchronize only two processes. There are always two, no more, no less. One writes to a memory-mapped file while the other reads from it in a producer/consumer fashion. I care about performance, and given how simple the scenario is, I think I could use something lightweight, but I don't know for sure which one is faster yet still adequate for this scenario.
First point: they're all kernel objects so all of them involve a switch from user mode to kernel mode. That imposes enough overhead by itself that you're unlikely to notice any real difference between them in terms of speed or anything like that. Therefore, which one is preferable will depend a great deal upon how you're structuring the data in the shared memory region, and how you use it.
Let's start with what would probably be the simplest case: that the shared memory region forms the bottleneck. All the time that the consumer isn't reading, the producer will be writing, and vice versa. At least initially, this seems like a case where we can use a single mutex. The producer waits on the mutex, writes data, releases the mutex. The consumer waits on the mutex, reads data, releases the mutex. This continues until everything is done.
Unfortunately, while this protects against the producer and consumer using the shared region at the same time, it does not ensure proper operation. For example: the producer writes a buffer full of information, then releases the mutex. Then it waits on the mutex again, so when the reader is done it can write more data -- but at that point, there's no guarantee that the consumer will be the next one to get the mutex. The producer might get it back immediately, and write more data over what it just produced, so the consumer will never see the previous data.
One way to prevent that would be to use a couple of events: one from the producer to the consumer to say that there's data waiting to be read, and the other from the consumer to the producer to say all the data in the buffer has been read. In this case, the producer waits on its event, which the consumer will only set when it's done reading the data. The producer then writes some data and signals the consumer's event to say some data is ready. The consumer reads the data, and then signals its event to the producer so the cycle can continue.
As long as you only have a single producer and a single consumer, and treat the entire buffer as a single "chunk" of data that's controlled together, that's adequate. That, however, can lead to a problem. Consider, for example, a web server front-end as the producer and a back-end as the consumer (with some separate mechanism for passing results back to the web server). If the buffer is small enough to hold only one request, the producer may have to buffer up several incoming requests while the consumer is processing one. Each time the consumer is ready to process a request, the producer has to stop what it's doing, copy a request to the buffer, and let the consumer know it can proceed.
The basic point of separate processes, however, is to let each proceed on its own schedule as much as possible. To allow that, we might make room in our shared buffer for a number of requests. At any given time, some number of those slots will be full (or, looking at it from the other direction, some number will be free). For this case, we just about need a counted semaphore to track those slots. The producer can write something any time at least one slot is free. The consumer can read something any time at least one slot is filled.
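To make the slot accounting concrete, here is the same idea sketched in Go, with two buffered channels standing in for the two counted semaphores (a Win32 implementation would use CreateSemaphore, WaitForSingleObject and ReleaseSemaphore instead); the buffer, slot count and message contents are made up for illustration.

package main

import "fmt"

func main() {
    const slots = 4
    buffer := make([]string, slots) // stand-in for the shared memory region

    free := make(chan struct{}, slots)   // empty-slot semaphore, starts at slots
    filled := make(chan struct{}, slots) // filled-slot semaphore, starts at 0
    for i := 0; i < slots; i++ {
        free <- struct{}{}
    }

    // Producer: claim a free slot, write into it, signal that it is filled.
    go func() {
        for i := 0; i < 10; i++ {
            <-free
            buffer[i%slots] = fmt.Sprintf("request %d", i)
            filled <- struct{}{}
        }
    }()

    // Consumer: wait for a filled slot, read it, hand the slot back.
    for i := 0; i < 10; i++ {
        <-filled
        fmt.Println("processed", buffer[i%slots])
        free <- struct{}{}
    }
}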
Bottom line: the choice isn't about speed. It's about how you use and structure the data and the processes' access to it. Assuming it's really as simple as you describe, the pair of events is probably the simplest mechanism that will work.
