Reading up on the new Vista/Win2008 features, I wonder what the point of the Thread Ordering Service is. In other words, in which scenario is the "classic" scheduler's "fair to all" policy not sufficient, and a definite order of threads preferable?
To clarify: what would be a concrete application that would benefit from it?
Thanks for your answers, though.
The Thread Ordering Service does not apply to all threads, only to those that are registered with it. You must make your program use the functionality explicitly.
The Service ensures that threads are executed in a desirable (configurable) order. That cannot be guaranteed by a "fair for all" scheduler. If your threads have no preferred execution order, the service probably does not provide extra value to you.
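In code, the registration step looks roughly like this (a sketch only; it assumes the avrt.h functions AvRtJoinThreadOrderingGroup / AvRtWaitOnThreadOrderingGroup / AvRtLeaveThreadOrderingGroup and linking with Avrt.lib; the group GUID, the stop flag and DoOneWorkItem() are placeholder names):

    #include <windows.h>
    #include <avrt.h>

    /* Published by the code that created the ordering group. */
    extern GUID g_orderingGroupGuid;
    extern volatile LONG g_keepRunning;

    static void DoOneWorkItem(void)
    {
        /* application-specific work for one cycle */
    }

    DWORD WINAPI OrderedWorker(LPVOID unused)
    {
        HANDLE ctx = NULL;

        /* Register this thread with the group; FALSE = successor,
         * i.e. it runs after the group's parent thread each period. */
        if (!AvRtJoinThreadOrderingGroup(&ctx, &g_orderingGroupGuid, FALSE))
            return GetLastError();

        while (g_keepRunning)
        {
            /* Blocks until the service gives this thread its turn in the current period. */
            if (!AvRtWaitOnThreadOrderingGroup(ctx))
                break;   /* e.g. the group was deleted or this thread was dropped */

            DoOneWorkItem();
        }

        AvRtLeaveThreadOrderingGroup(ctx);
        return 0;
    }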
The Thread Ordering Service provides cooperative multi-threading in a pre-emptive multi-threading world. When you create the group you specify the maximum time slice that can be used by a thread in the group (period + timeout), and how often to run the group (period).
Your threads will then be run at most once per period, and will get an error if they exceed their maximum time slice.
I imagine this works quite well in scenarios where there's a hard response time limit.
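A matching sketch of the creating (parent) side shows where the period and the timeout go. I'm assuming 100-nanosecond units for both values, as with other kernel time parameters, and the concrete numbers are only examples:

    #include <windows.h>
    #include <avrt.h>
    #include <stdio.h>

    GUID g_orderingGroupGuid = { 0 };   /* a zero GUID should make the system generate one */

    int main(void)
    {
        HANDLE ctx = NULL;
        LARGE_INTEGER period, timeout;

        period.QuadPart  = 10 * 10000;  /* run the group once every 10 ms                    */
        timeout.QuadPart =  2 * 10000;  /* extra 2 ms: time slice is at most period + timeout */

        if (!AvRtCreateThreadOrderingGroup(&ctx, &period, &g_orderingGroupGuid, &timeout))
        {
            printf("AvRtCreateThreadOrderingGroup failed: %lu\n", GetLastError());
            return 1;
        }

        /* ...start the worker threads here; they join via g_orderingGroupGuid... */

        for (int i = 0; i < 100; ++i)
        {
            AvRtWaitOnThreadOrderingGroup(ctx);  /* the parent runs once per period, too */
            /* the parent's per-period work goes here */
        }

        AvRtDeleteThreadOrderingGroup(ctx);
        return 0;
    }

In a real program you would of course signal the workers to call AvRtLeaveThreadOrderingGroup before deleting the group.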
I have a Spring Integration application and I am using a message-driven adapter to consume messages from external systems. To handle the messages concurrently I have set up concurrent consumers (5) and maximum concurrent consumers (20), which is working fine.
But for the production scenario I want to fine-tune it further. I just want to understand whether there is any standard suggestion regarding how far we can increase this maximum concurrent consumers value. I understand that this is purely dependent on the application and how much traffic is coming to it, but I hope there is some standard process to figure out this number. If we blindly increase this number to a random value like 1000, it might lead to resource starvation, conflicts, etc., so I am trying to understand the process of how to go about fine-tuning this property.
Thanks!
There is no standard process because there is no standard performance requirement. It all depends on your SLA, and a performant system is one that meets your SLA (there is no such thing as beating the SLA).
The main caveat when it comes to concurrent consumers is the order of messages. Basically, once you introduce more than one consumer, you cannot and should not assume any guarantees of message ordering.
I am running a load test with JMeter using the Selenium WebDriver sampler. The purpose is to load test and understand the amount of time taken by 500 users to complete a survey on a web dashboard. While executing, I need to control the number of concurrent threads so that it stays above 10: a new thread should be spawned if the number of concurrent threads becomes less than 10.
How do I achieve this? Any pointer in this direction will be helpful.
Regards,
Seshan K.
According to this article, you can set the number of threads in the Stepping Thread Group. It might be worth reading through.
You must be looking for the Concurrency Thread Group.
This thread group offers a simplified approach to configuring the thread schedule. It is intended to maintain the level of concurrency, which means starting additional threads at runtime if there aren't enough of them running in parallel.
So it is enough to install the Concurrency Thread Group using the JMeter Plugins Manager and use it instead of JMeter's normal Thread Group.
We have a list of tasks with different length, a number of cpu cores and a Context Switch time.
We want to find the best scheduling of tasks among the cores to maximize processor utilization.
How could we find this?
Isn't it the case that if we pick the biggest available task from the list and hand the tasks one by one to whichever core is ready, the result will be the best? Or do you think we must try all orders to find out which one is the best?
I must add that all cores are ready at time unit 0 and the tasks are supposed to run concurrently.
The idea here is that there is no silver bullet; you must consider what types of tasks are being executed and try to schedule them as nicely as possible.
CPU-bound tasks don't do much communication (I/O) and thus need to run continuously, being interrupted only when necessary, according to the policy being used;
I/O-bound tasks may be repeatedly set aside during execution, allowing other processes to work, since they will be sleeping for many periods while waiting for data to be brought into primary memory;
interactive tasks must be executed continually, but need not run without interruption, since they generate interruptions themselves while waiting for user input; they do, however, need a high priority so the user does not notice delays in execution.
Considering this, and the context-switch costs, you must evaluate what types of tasks you have and then choose one or more policies for your scheduler.
Edit:
I thought this was simply a conceptual question. Given that you have to implement a solution, you must analyze the requirements.
Since you have the lengths of the tasks and the context-switch times, and you have to keep the cores busy, this becomes an optimization problem: you want as few cores as possible to sit idle toward the end of the run, while also keeping the number of context switches to a minimum so that your overall execution time does not grow too much.
As pointed out by svick, this sounds like the partition problem, which is NP-complete, and in which you need to divide a sequence of numbers into a given number of lists so that the sums of the lists are equal to each other.
In your problem you'd have a relaxation of the objective: you no longer need all the cores to execute for the same amount of time, but you want the difference between any two cores' execution times to be as small as possible.
In the reference given by svick, you can see a dynamic programming approach that you may be able to map onto your problem.
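To illustrate the greedy idea from the question (and why it is only a heuristic), here is a small C sketch of the classic LPT rule: sort the tasks by decreasing length and always give the next one to the least-loaded core, charging a fixed context-switch cost per dispatch. The task lengths, core count and switch cost are made-up numbers; LPT gives a good approximation of the minimal makespan but not necessarily the optimum.

    #include <stdio.h>
    #include <stdlib.h>

    #define NCORES 3

    /* qsort comparator: sort task lengths in descending order */
    static int cmp_desc(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x < y) - (x > y);
    }

    int main(void)
    {
        double tasks[] = { 7, 5, 4, 3, 3, 2, 2, 1 };  /* task lengths (made up)    */
        const int nTasks = sizeof tasks / sizeof tasks[0];
        const double ctxSwitch = 0.1;                 /* cost charged per dispatch */
        double load[NCORES] = { 0 };                  /* accumulated time per core */

        qsort(tasks, nTasks, sizeof tasks[0], cmp_desc);

        for (int i = 0; i < nTasks; ++i)
        {
            int best = 0;                             /* pick the least-loaded core */
            for (int c = 1; c < NCORES; ++c)
                if (load[c] < load[best])
                    best = c;

            load[best] += tasks[i] + ctxSwitch;
            printf("task %.1f -> core %d\n", tasks[i], best);
        }

        double makespan = 0;                          /* longest core finish time */
        for (int c = 0; c < NCORES; ++c)
            if (load[c] > makespan)
                makespan = load[c];

        printf("makespan = %.2f\n", makespan);
        return 0;
    }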
Can someone explain the purpose of the Ultimate Thread Group in terms of its practical usage? I am new to JMeter, and as far as I've learned (please correct me if I am wrong), we use the Ultimate Thread Group to schedule the ramp-up/down times of the multiple threads we create for a particular JMeter scenario script.
I feel this can also be done using the Stepping Thread Group as well, by having multiple threads attached to the same test plan. So I need to know what exactly the significant usage of the Ultimate Thread Group is.
You can use it to simulate a peak of users (like a Dirac delta function).
Based on MSDN, the Windows OS schedules threads based on base priority and uses dynamic priority as a boost:
The system treats all threads with the same priority as equal. The system assigns time slices in a round-robin fashion to all threads with the highest priority. If none of these threads are ready to run, the system assigns time slices in a round-robin fashion to all threads with the next highest priority. If a higher-priority thread becomes available to run, the system ceases to execute the lower-priority thread (without allowing it to finish using its time slice), and assigns a full time slice to the higher-priority thread.
From the above quote:
The system treats all threads with the same priority as equal
Does this mean that the system schedules threads based on dynamic priority, and that the base priority is used just as the lower limit for dynamic priority changes?
Thank you
Based on MSDN, the Windows OS schedules threads based on base priority and uses dynamic priority as a boost
Well, you follow that with a nice text snippet that shows NO SIGN OF A DYNAMIC PRIORITY BOOST.
More information about that is in the documentation - for example http://msdn.microsoft.com/en-us/library/windows/desktop/ms684828(v=vs.85).aspx is a good start.
In simple words, the scheduler schedules threads based on their current priority, and a priority boost changes that priority, so the threads get scheduled differently.
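If you want to play with the two knobs separately, here is a tiny sketch (the particular values are just examples): SetThreadPriority() adjusts the base priority within the process priority class, and SetThreadPriorityBoost() turns the dynamic boost off or back on for a thread.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE thread = GetCurrentThread();

        /* Raise this thread's base priority one step above normal. */
        if (!SetThreadPriority(thread, THREAD_PRIORITY_ABOVE_NORMAL))
            printf("SetThreadPriority failed: %lu\n", GetLastError());

        /* TRUE disables the temporary dynamic boost the scheduler applies
         * after waits/I/O completions; pass FALSE to re-enable it. */
        if (!SetThreadPriorityBoost(thread, TRUE))
            printf("SetThreadPriorityBoost failed: %lu\n", GetLastError());

        printf("base priority is now: %d\n", GetThreadPriority(thread));
        return 0;
    }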