StackExchange.Redis - Redis Timeout On Batch

I'm trying to insert 1M hash keys into Redis using batch insertion.
When I do, several thousand keys are not inserted and I get a RedisTimeoutException.
Here is my code:
IDatabase db = RedisDB.Instance;
List<Task> tasks = new List<Task>();
var batch = db.CreateBatch();
foreach (var itemKVP in items)
{
    HashEntry[] hash = RedisConverter.ToHashEntries(itemKVP.Value);
    tasks.Add(batch.HashSetAsync(itemKVP.Key, hash));
}
batch.Execute();
Task.WaitAll(tasks.ToArray());
And then I get this exception:
RedisTimeoutException: Timeout awaiting response (outbound=463KiB, inbound=10KiB, 100219ms elapsed, timeout is 5000ms), command=HMSET, next: HMSET *****:*****:1390194, inst: 0, qu: 0, qs: 110, aw: True, rs: DequeueResult, ws: Writing, in: 0, in-pipe: 1045, out-pipe: 0, serverEndpoint: 10.7.3.36:6379, mgr: 9 of 10 available, clientName: DataCachingService:DEV25S, IOCP: (Busy=0,Free=1000,Min=8,Max=1000), WORKER: (Busy=4,Free=32763,Min=8,Max=32767), Local-CPU: 0%, v: 2.0.601.3402 (Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts)
I read the article, but I didn't manage to solve the problem.
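One common mitigation for this pattern is to bound how much work is in flight at once: send the hashes in smaller chunks, awaiting each chunk's replies before starting the next, and raise the client timeouts if needed. Below is a minimal sketch, assuming .NET 6+ (for Enumerable.Chunk) and StackExchange.Redis 2.x; the chunk size and timeout values are placeholder assumptions to tune, and RedisConverter is the question's own helper:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using StackExchange.Redis;

// Raise the timeouts past the 5000 ms default shown in the exception.
var config = ConfigurationOptions.Parse("10.7.3.36:6379"); // endpoint from the exception
config.SyncTimeout = 30000;   // assumed value; tune as needed
config.AsyncTimeout = 30000;  // governs awaited commands in 2.x
var muxer = await ConnectionMultiplexer.ConnectAsync(config);
IDatabase db = muxer.GetDatabase();

const int chunkSize = 10000;  // assumed; small enough to get replies within the timeout
foreach (var chunk in items.Chunk(chunkSize))
{
    var batch = db.CreateBatch();
    var tasks = new List<Task>();
    foreach (var itemKVP in chunk)
    {
        HashEntry[] hash = RedisConverter.ToHashEntries(itemKVP.Value);
        tasks.Add(batch.HashSetAsync(itemKVP.Key, hash));
    }
    batch.Execute();           // flush this chunk onto the socket
    await Task.WhenAll(tasks); // wait for replies before sending more
}

This keeps the number of commands awaiting replies bounded, instead of queueing a million HMSETs behind one another against a 5-second timeout.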

Related

SockJS increase pool size

I am using SockJS and STOMP to send files to a backend API for processing.
My problem is that the upload function gets stuck if more than two uploads are done at the same time.
Ex:
User 1 -> uploads a file -> the backend receives the file correctly
User 2 -> uploads a file -> the backend receives the file correctly
User 3 -> uploads a file -> the backend is not called until one of the
previous uploads has completed.
(After a minute, User 1 completes its upload and the third upload starts.)
The error I can see in the log is the following:
2021-06-28 09:43:34,884 INFO [MessageBroker-1] org.springframework.web.socket.config.WebSocketMessageBrokerStats.lambda$initLoggingTask$0: WebSocketSession[11 current WS(5)-HttpStream(6)-HttpPoll(0), 372 total, 26 closed abnormally (26 connect failure, 0 send limit, 16 transport error)], stompSubProtocol[processed CONNECT(302)-CONNECTED(221)-DISCONNECT(0)], stompBrokerRelay[null], inboundChannel[pool size = 2, active threads = 2, queued tasks = 263, completed tasks = 4481], outboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 607], sockJsScheduler[pool size = 1, active threads = 1, queued tasks = 14, completed tasks = 581444]
It seems clear that the pool is saturated:
inboundChannel[pool size = 2, active threads = 2
but I really cannot find a way to increase its size.
This is the code:
Client side
ws = new SockJS(host + "/createTender");
stompClient = Stomp.over(ws);
Server side configuration
@EnableWebSocketMessageBroker
public class WebSocketBrokerConfig extends AbstractWebSocketMessageBrokerConfigurer {
...
...
    @Override
    public void configureWebSocketTransport(WebSocketTransportRegistration registration) {
        registration.setMessageSizeLimit(100240 * 10240);
        registration.setSendBufferSizeLimit(100240 * 10240);
        registration.setSendTimeLimit(20000);
    }
I've already tried changing the configureWebSocketTransport parameters, but it did not work.
How can I increase the pool size of the socket?
The WebSocket's inbound channel configuration can be overridden using this method:
@Override
public void configureClientInboundChannel(ChannelRegistration registration) {
    registration.taskExecutor().corePoolSize(4);
    registration.taskExecutor().maxPoolSize(4);
}
The official documentation suggests a pool size equal to the number of cores. Once maxPoolSize is reached, further requests are handled through an internal queue. So, given this configuration, I can process 4 requests concurrently.

API times out due to limit and offset

I'm trying to pull items from one of our clients' Podio instances. It has around 35k records, so I'm paging with a limit of 500 and an increasing offset.
loop do
  current_offset_value = offset.next
  puts "LIMIT: #{LIMIT}, OFFSET: #{current_offset_value}"
  # Reuse current_offset_value here; calling offset.next again would skip a page.
  Podio::Item.find_by_filter_id(app_id, view_id, limit: LIMIT, remember: true, offset: current_offset_value).all.each do |item|
    yield item
  end
end
However, the code just hangs after the first two calls and returns a timeout error:
LIMIT: 500, OFFSET: 0
LIMIT: 500, OFFSET: 500
creates a CSV file from a table (FAILED - 1)
Failures:
SourceTableSync::LocalCsvDumper::CitrixPodio creates a CSV file from a table
Failure/Error:
Podio::Item.find_by_filter_id(app_id, view_id, limit: LIMIT, remember: true, offset: current_offset_value).all.each do |item|
yield item
end
Faraday::TimeoutError:
Net::ReadTimeout with #<TCPSocket:(closed)>
Looks like the Podio API is timing out while fetching 500 items; your items are probably large or have relationships to other apps, and it simply takes too long to fetch them all.
I'd try a smaller number (e.g. 100 or 200) to see if that works :)

.Net Core Hangfire - Increase worker count

I have a .NET Core application with a Hangfire implementation.
There is a recurring job that runs every minute, as below:
RecurringJob.AddOrUpdate<IS2SScheduledJobs>(x => x.ProcessInput(), Cron.MinuteInterval(1));
var hangfireOptions = new BackgroundJobServerOptions
{
    WorkerCount = 20,
};
_server = new BackgroundJobServer(hangfireOptions);
ProcessInput() internally reads a BlockingCollection of IDs to process, and it keeps processing continuously.
There are times when the first ten ProcessInput() jobs keep processing on 10 workers, while newly scheduled ProcessInput() jobs stay enqueued.
For this reason, I want to increase the worker count to around 50, so that 50 ProcessInput() jobs can be processed in parallel.
Please suggest.
Thanks.
In the .NET Core version you can set the worker count when adding the Hangfire Server:
services.AddHangfireServer(options => options.WorkerCount = 50);
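For context, here is a minimal sketch of where that call sits in Startup.ConfigureServices; the SQL Server storage and connection string are placeholder assumptions, not from the original question:

using Hangfire;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Placeholder storage setup; keep whatever storage the app already uses.
    services.AddHangfire(config => config.UseSqlServerStorage("<connection string>"));

    // 50 workers allows up to 50 ProcessInput() jobs to run in parallel.
    services.AddHangfireServer(options => options.WorkerCount = 50);
}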
You can also scale the worker count with the number of processors on the server. For example, on a 4-core machine, the following gives 100 workers:
var options = new BackgroundJobServerOptions { WorkerCount = Environment.ProcessorCount * 25 };
app.UseHangfireServer(options);

How to view the number of tasks in the current queue?

I get an index rejected exception when performing concurrent document indexing operations.
rejected execution of org.elasticsearch.transport.TcpTransport$RequestHandler@6d1cb827
on EsThreadPoolExecutor
[
  index,
  queue capacity = 200,
  org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@5d6lae0c
  [
    Running,
    pool size = 32,
    active threads = 32,
    queued tasks = 312,
    completed tasks = 32541513
  ]
]
I tried visiting this URL, but the index queue field is always 0:
/_cat/thread_pool?v&h=id,type,name,size,largest,active,queue_size,queue
Question 1: the queue capacity is 200, so why is the number of tasks in the queue 312 (over 200)?
Question 2: how can I view the number of tasks in the current queue?

Starting Activity Indicator while Running a database download in a background thread

I am running a database download in a background thread. The threads work fine, and I execute a group wait before continuing.
The problem I have is that I need to start an activity indicator, and it seems that, due to the group wait, it gets blocked.
Is there a way to run such a heavy process, ensure that all threads complete, and still allow the activity indicator to run?
I start the activity indicator with (I also tried starting the indicator without the dispatch_async):
dispatch_async(dispatch_get_main_queue(), {
    activityIndicator.startAnimating()
})
After which, I start the thread group:
let group: dispatch_group_t = dispatch_group_create()
let queue: dispatch_queue_t = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0) // also tried QOS_CLASS_BACKGROUND
while iter > 0 {
    iter--
    dispatch_group_enter(group)
    dispatch_group_async(group, queue, {
        do {
            print("in queue \(iter)")
            temp += try query.findObjects()
            query.skip += query.limit
        } catch let error as NSError {
            print("Fetch failed: \(error.localizedDescription)")
        }
        dispatch_group_leave(group)
    })
}
// Wait for all threads to finish and proceed
dispatch_group_wait(group, DISPATCH_TIME_FOREVER)
As I am using Parse, I have modified the code as follows (pseudocode for simplicity):
trigger the activity indicator with startAnimating()
call the function that hits Parse
set an observer in the Parse class on an int to trigger an action when the value reaches 0
get count of new objects in Parse
calculate how many loop iterations I need to pull all the data (using max objects per query = 1000, which is Parse's max)
while iterations > 0 {
    create a Parse query object
    set the query skip value
    use query.findObjectsInBackgroundWithBlock({
        pull objects and add to a temp array
        observer--
    })
    iterations--
}
When the observer hits 0, trigger a delegate to return to the caller
Works like a charm.
