I imagine it is related to the size of the parameters in a scheduled job and the amount of RAM available to Redis. Even an approximate estimate would suffice.
Assuming:
When        Queue                  Job            Arguments
tomorrow    idempotent_critical    AddSomeStuff   74, 75, "cos ur rly only after"
Would it be reasonable to count the characters on the second line, call it 76 bytes, and conclude that one million similar enqueued jobs will occupy ~73 MiB? (Thus the limit for a machine with 8 GiB of RAM would be roughly 100 million such jobs.)
Clarification: what is a rough equation for calculating the maximum number of jobs Sidekiq can schedule?
1. Use redis-cli info to get Redis memory info before.
2. Create one million scheduled jobs.
3. Use redis-cli info to get Redis memory info after.
4. Extrapolate (one way to automate this is sketched below).
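For illustration, here is a rough measurement sketch in Python using the redis-py client. It does not go through Sidekiq itself: it pushes a million similarly sized payloads into a scratch sorted set and compares Redis memory before and after, so the key name and payload shape are stand-ins rather than Sidekiq's exact format.
import json
import time
import redis  # assumes the redis-py client is installed

r = redis.Redis()

def used_memory():
    return r.info('memory')['used_memory']

before = used_memory()

# a payload shaped roughly like the job in the question; Sidekiq's real
# serialized payload carries extra fields, so treat the result as a lower bound
payload = json.dumps({
    'class': 'AddSomeStuff',
    'args': [74, 75, 'cos ur rly only after'],
    'queue': 'idempotent_critical',
})

pipe = r.pipeline(transaction=False)
for i in range(1_000_000):
    # unique member per entry, scored by run-at time, like a scheduled set
    pipe.zadd('memory_test_schedule', {'%s|%d' % (payload, i): time.time() + i})
    if i % 10_000 == 0:
        pipe.execute()
pipe.execute()

after = used_memory()
per_job = (after - before) / 1_000_000.0
print('approx bytes per scheduled job: %.0f' % per_job)
print('jobs fitting in 8 GiB: ~%.0f million' % (8 * 2**30 / per_job / 1e6))

r.delete('memory_test_schedule')  # clean up the scratch key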
Since Sidekiq uses Redis lists (https://redis.io/topics/data-types#lists) to store jobs, the maximum number would be about 4 billion (2^32 - 1 entries per list). I suspect you would hit RAM and other limitations well before that.
Related
I am running a Spark job with an input file of size 6.6 GB (HDFS), with the master set to local. My Spark job with 53 partitions completes more quickly when I assign local[6] than local[2]; however, the individual tasks take more computation time when more cores are used. Say if I assign 1 core (local[1]), then each task takes 3 seconds, whereas the same goes up to 12 seconds if I assign 6 cores (local[6]). Where is the time being wasted? The Spark UI shows an increase in computation time for each task in the local[6] case; I can't understand why the same code takes different computation time when more cores are assigned.
Update:
I can see more %iowait in the iostat output when I use local[6] than local[1]. Please let me know whether this is the only reason, or what other possible reasons there are. I also wonder why this iowait is not reported in the Spark UI; I see the increase reported as computation time rather than iowait time.
I am assuming you are referring to spark.task.cpus and not spark.cores.max
With spark.task.cpus, each task gets assigned more cores, but it doesn't necessarily have to use them. If your process is single threaded, it really can't use them. You wind up with additional overhead without additional benefit, and those cores are taken away from other single-threaded tasks that could use them.
With spark.cores.max it is simply an overhead issue from transferring data around at the same time.
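For reference, here is a minimal PySpark sketch of where the two settings discussed above live; the application name and the input path are placeholders, not values from the question.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("local-core-test")          # placeholder name
         .master("local[6]")                  # 6 worker threads in one JVM
         .config("spark.task.cpus", "1")      # cores reserved per task (default 1)
         .getOrCreate())

df = spark.read.text("hdfs:///path/to/6.6g-input")   # placeholder path
print(df.rdd.getNumPartitions())              # compare against the 53 partitions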
I am new to Sidekiq. My requirement is that there can be as many high-priority jobs as there are users logged into the system. Let's say each user is expecting a notification as soon as his job is processed.
I have one Sidekiq daemon running with a concurrency of 50, so at any time I can have only 50 jobs processing? I have read that the wiki states we should have multiple Sidekiq processes running.
What is the upper limit on the number of Sidekiq processes to run?
How will I be able to match the number of users logged in with the number of concurrent workers?
Is there a technology stack I can use to launch these workers? Something like Unicorn to have a pool of workers? Can I even use Unicorn with Sidekiq?
What is the upper limit on the number of Sidekiq processes to run?
You will want at most one Sidekiq process per processor core. If you have a dual-core processor, then 2 Sidekiq processes. However, if your server is also doing other work, such as running a webserver, you will want to leave some cores available for that.
How will I be able to match the number of users logged in with the number of concurrent workers?
With Sidekiq, you pre-emptively create your threads. You essentially have a thread pool of X idle threads which are ready to deploy at any moment should a huge surge of jobs come in. You will need to create as many threads as the maximum number of jobs you think you will have at any time. However, going over 50 threads per core is not a good idea for performance reasons (the time spent switching between a huge number of threads significantly cuts into the CPU time available for the threads to do actual work).
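As a rough illustration, the per-process thread pool is sized with the concurrency setting, e.g. in a sidekiq.yml along these lines (the queue names and weights below are made up, not taken from the question):
:concurrency: 50        # 50 worker threads in this one Sidekiq process
:queues:
  - [notifications, 5]  # weighted higher so user-facing jobs are picked first
  - [default, 1]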
Is there a technology stack I can use to launch these workers? Something like Unicorn to have a pool of workers? Can I even use Unicorn with Sidekiq?
You can't use Unicorn for this. You need some process supervisor to handle starting/restarting of Sidekiq. Their wiki recommends Upstart or systemd, but I've found that Supervisor works incredibly well and is really easy to set up.
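For example, a Supervisor program entry along these lines keeps a couple of Sidekiq processes running and restarts them if they die; the directory, environment, and the -c 50 concurrency are assumptions, not values from the question.
; a couple of Sidekiq processes, e.g. one per core on a dual-core box
[program:sidekiq]
command=bundle exec sidekiq -e production -c 50
directory=/var/www/myapp/current
numprocs=2
process_name=%(program_name)s-%(process_num)s
autostart=true
autorestart=true
; give Sidekiq time to finish in-flight jobs before a restart
stopwaitsecs=30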
I'm seeing strange behavior when scaling Delayed::Job workers on Heroku.
I have a few thousand jobs that are all basically identical. When I assign 1 worker dyno to that queue, each job completes in about 4s.
When I scale the number of workers to 2, processing time averages 8s per job.
When I scale the number of workers to 10, average processing time increases to above 30s per job.
I would not expect processing time per job to increase when scaling the number of workers.
As it is currently behaving, there is no way to scale up the number of workers to "churn through" a backlog of jobs, as the increase in processing time offsets any gains in having more workers.
Has anyone else seen this behavior and (more importantly) know how to resolve the issue?
Do you have any metrics on the database processing time? Seems possible that the bottleneck could be in the database engine and so no matter how many workers you have, you'd still be locked up there...
I am running a 12-node Hadoop cluster with a total of 48 map slots available. I am submitting a bunch of jobs, but I never see all the map slots being utilized. The maximum number of busy slots floats around 30-35, but never gets close to 48. Why?
Here's the fair scheduler configuration:
<?xml version="1.0"?>
<allocations>
  <pool name="big">
    <minMaps>10</minMaps>
    <minReduces>10</minReduces>
    <maxRunningJobs>3</maxRunningJobs>
  </pool>
  <pool name="medium">
    <minMaps>10</minMaps>
    <minReduces>10</minReduces>
    <maxRunningJobs>3</maxRunningJobs>
    <weight>3.0</weight>
  </pool>
  <pool name="small">
    <minMaps>20</minMaps>
    <minReduces>20</minReduces>
    <maxRunningJobs>20</maxRunningJobs>
    <weight>100.0</weight>
  </pool>
</allocations>
The idea is that jobs in the small pool should always have priority, the next most important pool is 'medium', and the least important is 'big'. Sometimes I see jobs in the medium or big pool starve, although there are unused map slots available.
I think the issue may be that the maxRunningJobs option is not taken into account while computing shares for jobs. I think that parameter is handled only after slots (from the exceeding job) have already been assigned to a tasktracker. That happens every n seconds from the UpdateThread.update() -> updateRunnability() method of the FairScheduler class. I suppose that in your case, after some time, jobs from the 'medium' and 'big' pools get a bigger deficit than jobs from the 'small' pool, which means that the next task will be scheduled from a job in the medium or big pool. When the task is scheduled, the maxRunningJobs restriction kicks in and puts the exceeding jobs into a non-runnable state. The same thing happens on the following update.
This is just my guess after looking through some of the fair scheduler source. If you can, I would try removing maxRunningJobs from the config and see how the scheduler behaves without that limitation, and whether it takes all of your slots.
The weights for the pools seem too high to me. A weight of 100 would mean that this pool should get 100x more slots than the default pool. I would try lowering this number by a few factors if you want fair sharing between your pools. Otherwise jobs from other pools will be launched only once they meet their deficit (which is calculated from the running tasks and minShare).
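To make that concrete, here is a sketch of what a less skewed allocation file might look like, with maxRunningJobs removed as suggested above; the 1.0/3.0/10.0 weights are illustrative values, not tested recommendations.
<?xml version="1.0"?>
<allocations>
  <pool name="big">
    <minMaps>10</minMaps>
    <minReduces>10</minReduces>
    <weight>1.0</weight>
  </pool>
  <pool name="medium">
    <minMaps>10</minMaps>
    <minReduces>10</minReduces>
    <weight>3.0</weight>
  </pool>
  <pool name="small">
    <minMaps>20</minMaps>
    <minReduces>20</minReduces>
    <weight>10.0</weight>
  </pool>
</allocations>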
Another possible reason why jobs are starving is delay scheduling, which is included in the fair scheduler with the aim of improving computation locality. This can probably be mitigated by increasing the replication factor, but I do not think that is your case.
See the fair scheduler docs for more details.
The starvation probably occurs because the priority of the small pool is really, really high (2^100 more than big, 2^97 more than medium). When all the jobs are ordered by priority and you have waiting jobs in the small pool, the next job in that pool needs 20 slots and has higher priority than anything else, so the open slots just sit there waiting until a currently running job frees them. There are no "unneeded slots" to divide among other priorities.
See these highlights from the implementation notes of the fair scheduler:
"The fair shares are calculated by dividing the capacity of the
cluster among runnable jobs according to a "weight" for each job. By
default the weight is based on priority, with each level of priority
having 2x higher weight than the next (for example, VERY_HIGH has 4x
the weight of NORMAL). However, weights can also be based on job sizes
and ages, as described in the Configuring section. For jobs that are
in a pool, fair shares also take into account the minimum guarantee
for that pool. This capacity is divided among the jobs in that pool
according again to their weights."
Finally, when limits on a user's running jobs or a pool's running jobs
are in place, we choose which jobs get to run by sorting all jobs in
order of priority and then submit time, as in the standard Hadoop
scheduler. Any jobs that fall after the user/pool's limit in this
ordering are queued up and wait idle until they can be run. During
this time, they are ignored from the fair sharing calculations and do
not gain or lose deficit (their fair share is set to zero).
I want to run 50 tasks. All these tasks execute the same piece of code; the only difference will be the data. Which will complete faster?
a. Queuing up 50 tasks in a queue
b. Queuing up 5 tasks each in 10 different queue
Is there any ideal number of tasks that can be queued up in 1 queue before using another queue ?
The rate at which tasks are executed depends on two factors: the number of instances your app is running on, and the execution rate of the queue the tasks are on.
The maximum task queue execution rate is now 100 per queue per second, so that's not likely to be a limiting factor, and there's no harm in adding them all to the same queue. In any case, sharding between queues for more execution rate is at best a hack. Queues are designed for functional separation, not as a performance measure.
The bursting rate of task queues is controlled by the bucket size. If there is a token in the queue's bucket the task should run immediately. So if you have:
queue:
- name: big_queue
  rate: 50/s
  bucket_size: 50
and you haven't queued any tasks in the last second, all tasks should start right away.
See http://code.google.com/appengine/docs/python/config/queue.html#Queue_Definitions for more information.
Splitting the tasks into different queues will not improve the response time unless a bucket hasn't had enough time to refill completely with tokens.
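For completeness, here is a minimal sketch of enqueuing all the tasks onto that single queue with the (old) App Engine Python taskqueue API; the handler URL and the item_id parameter are made up.
from google.appengine.api import taskqueue

def enqueue_items(item_ids):
    # all tasks go on the same queue; with rate: 50/s and bucket_size: 50
    # a full bucket lets 50 of them start almost immediately
    for item_id in item_ids:
        taskqueue.add(
            queue_name='big_queue',
            url='/process_item',          # placeholder worker handler
            params={'item_id': item_id})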
I'd add another factor into the mix: concurrency. If you have slow-running tasks (more than 30 seconds or so), then App Engine seems to struggle to scale up the correct number of instances to deal with the requests (it seems to max out at about 7-8 for me).
As of SDK 1.4.3, there's a setting in your queue.xml and your appengine-web.xml you can use to tell App Engine that each instance can handle more than one task at a time:
<threadsafe>true</threadsafe> (in appengine-web.xml)
<max-concurrent-requests>10</max-concurrent-requests> (in queue.xml)
This solved all my problems with tasks executing too slowly (despite setting all the other queue params to the maximum).
More Details (http://blog.crispyfriedsoftware.com)
Queue up 50 tasks and set your queue to process 10 at a time, or whatever you would like, if they can run independently of each other. I saw a similar problem, and I just run 10 tasks at a time to process the 3,300 or so that I need to run. It takes 45 minutes or so to process all of them, but surprisingly the CPU time used is negligible.
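If the goal is literally "process 10 at a time", a queue.yaml along these lines should cap the concurrency; the queue name is made up and the rate/bucket values are only examples.
queue:
- name: batch_queue
  rate: 50/s
  bucket_size: 50
  max_concurrent_requests: 10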