RemoteTransportException[[Death][inet[/172.18.0.9:9300]][bulk/shard]]; nested: EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1#12ae9af];
Does this mean I'm doing too many operations in one bulk at one time, or too many bulks in a row, or what? Is there a setting I should be increasing or something I should be doing differently?
One thread suggests "I think you need to increase your 'threadpool.bulk.queue_size' (and possibly 'threadpool.index.queue_size') setting due to recent defaults." However, I don't want to arbitrarily increase a setting without understanding the fault.
I lack the reputation to reply to the comment as a comment.
It's not exactly the number of bulk requests made; it is the total number of shards that will be updated on a given node by the bulk calls. This means the contents of the individual operations inside the bulk request matter. For instance, if you have a single node with a single index of 60 shards, running on an 8-core box, and you issue a bulk request whose indexing operations affect all 60 shards, you will get this error message from a single bulk request.
If anyone wants to change this, you can see the splitting happening inside of org.elasticsearch.action.bulk.TransportBulkAction.executeBulk() near the comment "go over all the request and create a ShardId". The individual requests happen a few lines down around line 293 on version 1.2.1.
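One practical way to stay under that limit, given the per-shard splitting described above, is to cap how much a single bulk call can fan out by sending documents in smaller chunks and waiting for each chunk to finish before sending the next. A rough sketch with the 1.x Java client; the chunk size, the client variable, and the index/type names are placeholder assumptions, not from the answer:
// Index documents in fixed-size chunks so each bulk call fans out to fewer
// shard-level operations and the per-node bulk queue is not flooded.
int chunkSize = 500; // assumption: tune to your shard count and queue_size
BulkRequestBuilder bulk = client.prepareBulk();
for (String doc : documents) {
    bulk.add(client.prepareIndex(indexName, typeName).setSource(doc));
    if (bulk.numberOfActions() >= chunkSize) {
        BulkResponse response = bulk.execute().actionGet(); // wait before sending more
        if (response.hasFailures()) {
            System.err.println(response.buildFailureMessage());
        }
        bulk = client.prepareBulk();
    }
}
if (bulk.numberOfActions() > 0) {
    bulk.execute().actionGet(); // flush the remainder
}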
You want to up the number of bulk threads available in the thread pool. ES sets aside threads in several named pools for use on various tasks. These pools have a few settings: type, size, and queue_size.
From the docs:
The queue_size allows to control the size of the queue of pending requests that have no threads to execute them. By default, it is set to -1 which means it's unbounded. When a request comes in and the queue is full, it will abort the request.
To me that means you have more bulk requests queued up, waiting for a thread from the pool to execute one of them, than your current queue size allows. The documentation seems to indicate the queue size defaults to both -1 (the text above says that) and 50 (the callout for bulk in the doc says that). You could take a look at the source to be sure for your version of ES (or query a running node, as shown below), or set the higher number and see if your bulk issues simply go away.
ES thread pool settings doco
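Rather than reading the source, you can also ask a running node what it is actually using; the nodes info API reports the configured type, size, and queue_size for every pool:
GET /_nodes/thread_pool
Comparing the reported bulk queue_size against the number in the rejection message tells you which default your version actually ships with.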
Elasticsearch 1.3.4
Our system: 8 cores * 2
4 bulk workers, each inserting 300,000 messages per minute => 20,000 messages per second in total
I hit that exception too, then set this config:
elasticsearch.yml
threadpool.bulk.type: fixed
threadpool.bulk.size: 8 # availableProcessors
threadpool.bulk.queue_size: 500
source
// build one bulk request: async replication, write consistency ONE
BulkRequestBuilder bulkRequest = es.getClient().prepareBulk();
bulkRequest.setReplicationType(ReplicationType.ASYNC)
           .setConsistencyLevel(WriteConsistencyLevel.ONE);
for (String document : documents) {
    // one index operation per document
    bulkRequest.add(es.getClient().prepareIndex(esIndexName, esTypeName)
                      .setSource(document.getBytes("UTF-8")));
}
// send the whole batch in one round trip
BulkResponse bulkResponse = bulkRequest.execute().actionGet();
On a 4-core box, set bulk.size to 4.
After that, no more errors.
I was having this issue and my solution ended up being increasing ulimit -Sn and ulimit -Hn (the soft and hard open-file limits) for the elasticsearch user. I went from 1024 (the default) to 99999 and things cleaned right up.
My OpenSearch cluster sometimes returns a "429 Too Many Requests" error when writing data. I know there is a queue, and when the queue is full it returns that error. Is there an API to check the bulk queue status and its current size? Example: queue 150/200 (nearly full)
Yes, you can use the following API call
GET _cat/thread_pool?v
You will get something like this, where you can see the node name, the thread pool name (look for write), the number of active requests currently being carried out, the number of requests waiting in the queue and finally the number of rejected requests.
node_name name active queue rejected
node01 search 0 0 0
node01 write 8 2 0
The write thread pool can run as many requests at the same time as 1 + the number of CPUs, i.e. that is how many can be active at once. If all active slots are busy and new requests come in, they go straight into the queue (default size 10000). If the active slots and the queue are both full, requests start to be rejected.
Your mileage may vary, but when optimizing this, you're looking at:
keeping rejected at 0
minimizing the number of requests in the queue
making sure that active requests get carried out as fast as possible.
Instead of increasing the queue, it's usually preferable to increase the number of CPUs. If you have heavy ingest pipelines kicking in, it's often a good idea to add dedicated ingest nodes whose job is to execute those pipelines instead of running them on the data nodes.
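If you want to watch these numbers from code rather than from the terminal, a minimal sketch along these lines could work (it assumes an unauthenticated cluster at http://localhost:9200; the class name and thresholds are illustrative, and only the _cat/thread_pool endpoint and its column names come from the answer above):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WriteQueueMonitor {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // ask only for the write pool and only for the columns we care about (no header row)
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_cat/thread_pool/write?h=node_name,active,queue,rejected"))
                .GET()
                .build();
        String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        for (String line : body.split("\n")) {
            if (line.isBlank()) continue;
            String[] cols = line.trim().split("\\s+");
            long queue = Long.parseLong(cols[2]);
            long rejected = Long.parseLong(cols[3]);
            System.out.printf("%s: active=%s queue=%d rejected=%d%n", cols[0], cols[1], queue, rejected);
            if (rejected > 0 || queue > 0) {
                System.out.println("  -> write pool under pressure on " + cols[0]);
            }
        }
    }
}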
My Spring Boot application is going to listen to 1 million records an hour from a Kafka broker. The entire processing logic for each message takes 1-1.5 seconds, including a database insert. The topic has 64 partitions, which is also the concurrency of my @KafkaListener.
My current code is only able to process 90 records per minute in a lower environment, where I am listening to around 50k records an hour. Below is the code; all other config parameters, like max.poll.records, are at their default values:
@KafkaListener(id = "xyz-listener", concurrency = "64", topics = "my-topic")
public void listener(String record) {
    // processing logic
}
I do get "it is likely that the consumer was kicked out of the group" 7-8 times an hour. I think both of these issues can be solved by isolating the listener method and multithreading the processing of each message, but I am not sure how to do that.
There are a few points to consider here. First, 64 consumers seems a bit too much for a single application to handle consistently.
Considering each poll fetches up to 500 records per consumer by default, your app might be getting overloaded, causing the consumers to get kicked out of the group if a single batch takes longer than the 5-minute default of max.poll.interval.ms to be processed.
So first, I'd consider scaling the application horizontally so that each application handles a smaller amount of partitions / threads.
A second way to increase throughput would be using a batch listener, and handling processing and DB insertions in batches as you can see in this answer.
Using both, you should be processing a sensible amount of work in parallel per app, and should be able to achieve your desired throughput.
Of course, you should load test each approach with different figures to have proper metrics.
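For reference, a rough sketch of the batch listener idea with spring-kafka might look like this (the listener id, topic, concurrency value, and the persistence comment are illustrative assumptions; batch mode has to be enabled for the container, e.g. via spring.kafka.listener.type=batch in a Spring Boot app, for the List signature to work):
import java.util.List;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class BatchedListener {

    // receives a whole poll's worth of records (up to max.poll.records) at once
    @KafkaListener(id = "xyz-batch-listener", topics = "my-topic", concurrency = "8")
    public void listen(List<String> records) {
        // 1. run the business logic for each record in memory
        // 2. persist all the results with a single batch insert
        //    (e.g. JdbcTemplate.batchUpdate or a repository saveAll)
        //    instead of one INSERT per record
    }
}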
EDIT: Addressing your comment, if you want to achieve this throughput I wouldn't give up on batch processing just yet. If you do the DB operations row by row you'll need a lot more resources for the same performance.
If your rule engine doesn't do any I/O you can iterate each record from the batch through it without losing performance.
About data consistency, you can try some strategies. For example, you can have a lock to ensure that even through a rebalance only one instance will process a given batch of records at a given time - or perhaps there's a more idiomatic way of handling that in Kafka using the rebalance hooks.
With that in place, you can batch load all the information you need to filter out duplicated / outdated records when you receive the records, iterate each record through the rule engine in memory, and then batch persist all results, to then release the lock.
Of course, it's hard to come up with an ideal strategy without knowing more details about the process. The point is by doing that you should be able to handle around 10x more records within each instance, so I'd definitely give it a shot.
I am using the DetectDuplicate processor within a flow but am seeing some confusing behavior. The processor is configured as follows:
Cache Entry Identifier: ${rk.id}
FlowFile Description: Empty string set
Age Off Duration: 10s
Distributed Cache Service: DistributedMapCacheClientService
Cache The Entry Identifier: true
The "duplicate" relationship is automatically terminated. Concurrency is set to 1.
However, I'm seeing multiple copies of flowfiles on the output queue with the same rk.id that were run through the processor less than 2 seconds apart. How is this possible? I even tried increasing the age off to 5m and it made no difference. I also tried setting the processor to only run every 500ms, thinking there may be some delay in writing to the cache, and 2 flowfiles that were processed 1s apart with the same rk.id showed up in the output queue. What am I missing?
I think I figured this out. It looks like the cache was full and not accepting new values; we had a lot less traffic this morning and the deduplication ran properly.
I am using google-api-ruby-client for streaming data into BigQuery. Whenever there is a request, it is pushed into Redis as a queue, and then a new Sidekiq worker tries to insert it into BigQuery. I think this involves opening a new HTTPS connection to BigQuery for every insert.
The way I have it set up is:
Events are posted every 1 second or when the batch size reaches 1 MB (one megabyte), whichever occurs first. This is per worker, so the BigQuery API may receive tens of HTTP posts per second over multiple HTTPS connections.
This is done using the API client provided by Google.
Now the question: for streaming inserts, which is the better approach?
A persistent HTTPS connection. If so, should it be a global connection shared across all requests, or something else?
Opening a new connection for each insert, like we are doing now with google-api-ruby-client.
I think it's much too early to talk about these optimizations. Other context is also missing, such as whether you have exhausted the kernel's TCP connections or not, or how many connections are in the TIME_WAIT state, and so on.
Until the worker pool reaches 1000 connections per second on the same machine, you should stick with the default mode the library offers.
Otherwise, optimizing this would need a lot of other context and a deep understanding of how it all works.
On the other hand, you can batch more rows into the same streaming insert request, as sketched after the list below; the limits are:
Maximum row size: 1 MB
HTTP request size limit: 10 MB
Maximum rows per second: 100,000 rows per second, per table.
Maximum rows per request: 500
Maximum bytes per second: 100 MB per second, per table
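The question is about the Ruby client, but purely to illustrate packing more rows per request under those limits, here is a rough sketch using Google's Java client (com.google.cloud:google-cloud-bigquery); the dataset/table names and the batching wrapper are assumptions, not part of the original setup:
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.InsertAllRequest;
import com.google.cloud.bigquery.InsertAllResponse;
import com.google.cloud.bigquery.TableId;
import java.util.List;
import java.util.Map;

public class StreamingBatcher {
    // the client reuses its underlying HTTP transport across calls,
    // so create it once and share it rather than rebuilding it per insert
    private final BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    // send up to 500 rows (the per-request limit) in one insertAll call
    // instead of one call per row
    public void insertBatch(List<Map<String, Object>> rows) {
        TableId table = TableId.of("my_dataset", "my_table"); // placeholders
        InsertAllRequest.Builder builder = InsertAllRequest.newBuilder(table);
        for (Map<String, Object> row : rows) {
            builder.addRow(row);
        }
        InsertAllResponse response = bigquery.insertAll(builder.build());
        if (response.hasErrors()) {
            // failed rows are reported by their index within this request
            response.getInsertErrors().forEach((index, errors) ->
                    System.err.println("row " + index + " failed: " + errors));
        }
    }
}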
Read my other recommendations
Google BigQuery: Slow streaming inserts performance
I will also try to give some context to better understand the complex situation when ports are exhausted:
Let's say on a machine you have a pool of 30,000 ports and 500 new connections per second (typical):
1 second goes by, you now have 29500
10 seconds go by, you now have 25000
30 seconds go by, you now have 15000
at 59 seconds you get to 500,
at 60 you get back 500 and stay at using 29500, and that keeps rolling at 29500. Everyone is happy.
Now say that you're seeing an average of 550 connections a second.
Suddenly there aren't any available ports to use.
So, your first option is to bump up the range of allowed local ports; easy enough, but even if you open it up as much as you can and go from 1025 to 65535, that's still only 64000 ports; with your 60 second TCP_TIMEWAIT_LEN, you can sustain an average of 1000 connections a second. Still no persistent connections are in use.
This port exhaustion problem is discussed in more detail here: http://www.gossamer-threads.com/lists/nanog/users/158655
I want to send multiple bulk operation requests to an Elasticsearch cluster, and I came across this issue: EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction
I have a cluster of 4 Elasticsearch instances (version 1.3.4). When I sent this request to check the size of their bulk thread pools:
GET /_cat/thread_pool?v&h=host,bulk.active,bulk.queueSize
I got
host bulk.active bulk.queueSize
1D4HPY1 0 50
1D4HPY2 0 50
1D4HPY3 0 50
1D4HPY4 0 50
So how many simultaneous bulk operation requests can I send to that cluster? 50 or 200?
I would suggest having a look at this section from the documentation.
Also, you need to be more specific when you say "simultaneous requests that you can send" because, as you see in the page above, there are various thread pools that handle various jobs. You give an example in your post for "bulk" operations.
In my opinion, the correct request for "bulk" to see the number of simultaneously running threads (as per this piece of documentation) is GET /_cat/thread_pool?v&h=host,bulk.queueSize,bulk.min,bulk.max. So, you have bulk.max active threads allowed in the thread pool, with room for bulk.queueSize tasks in its queue. When a request comes in and there are no threads to handle it, the request is put in the queue to wait.
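To put rough numbers on it for this cluster (assuming, as an example, 8 cores per node, so bulk.min = bulk.max = 8 with the default fixed pool): each node can be executing 8 bulk shard operations while holding 50 more in its queue, i.e. up to 58 shard-level bulk tasks in flight per node, or roughly 232 across the 4 nodes, before rejections start. Note that these are shard-level operations rather than client-side bulk requests, so a single bulk request that touches many shards consumes several of those slots at once.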