Handling the case where there is not enough memory available to start a thread (C#, Windows)

I have a system that starts a new thread for each request to the application.
If the application receives hundreds of requests, there may not be enough memory available to start a new thread, and an exception will be thrown.
I would like to know an ideal mechanism for handling this kind of situation.
For example, if the application is receiving lots of requests and there is not enough memory, or the number of active threads has reached the maximum, I would delay processing further requests.
But I have no idea how to implement this.

Easy solution: Increase thread-pool limits. This is actually viable although out of fashion these days.
More thorough solution: Use a SemaphoreSlim to limit the number of concurrently asynchronously active requests. Make sure to wait asynchronously. If you wait synchronously you'll again burn a thread while waiting. After having waited asynchronously you can resume normal synchronous blocking processing. This requires only small code changes.
Most thorough solution: Implement your processing fully async. That way you never run out of threads.
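The SemaphoreSlim approach can be sketched as follows. This is a minimal sketch, not a drop-in implementation: the limit of 100, and the `Request` type and `Process` method, are illustrative placeholders.

```csharp
using System.Threading;
using System.Threading.Tasks;

class RequestThrottler
{
    // Allow at most 100 requests to be processed concurrently (limit is illustrative).
    private static readonly SemaphoreSlim _throttle = new SemaphoreSlim(100);

    public static async Task HandleRequestAsync(Request request)
    {
        // Asynchronous wait: no thread is burned while queued behind the limit.
        await _throttle.WaitAsync();
        try
        {
            // After the asynchronous wait, resume normal synchronous
            // processing on a pool thread.
            await Task.Run(() => Process(request));
        }
        finally
        {
            _throttle.Release();
        }
    }
}
```

Excess requests simply queue up inside the semaphore instead of each consuming a thread's stack, which is what makes the out-of-memory exception go away.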

Related

How can I execute long running background code without monopolizing a Goroutine thread?

I have a CPU-bound Go service that receives a high volume of time-sensitive work. As work is performed, data is pushed to a queue to be periodically processed in the background. The processing is low priority, performed by an external package, and can take a long time.
This background processing is causing a problem, because it's not really happening in the background: it's consuming an entire Goroutine thread and forcing the service to run at reduced capacity, which slows down the rate at which it can process work.
There are obviously solutions like performing the background work out-of-process, but this would add an unacceptable level of complexity to the service.
Given that the background processing code isn't mine and I can't add yields, is there any way to prevent it from hogging an entire Goroutine thread?
Think of your server as the producer and the background processing as the consumer, and run the consumer on another machine.
If the consumer is a single process, you can also limit its CPU and memory.

Are multiple Async Http Client calls in Java thread safe?

I am working with Async Http Client in Java, following this example. I just want to know about the performance issues when calling multiple services at a time asynchronously. I am concerned about the number of CPU cores and the number of threads used in this example. I want to know more about this, such as:
Does this example use multiple threads for each request?
If I am making n calls at a time and the CPU cores cannot support them all running, how do I find out how many threads it will support based on the CPU?
Is this example thread safe?
Does this example use multiple threads for each request?
No, it does not.
How many threads will it support based on the CPU?
By default, the underlying I/O reactor starts one I/O dispatch thread per CPU core.
Is this example thread safe?
This question is just too vague. Exactly which class instances are you talking about?
Thread-safety rules that apply to both the blocking HttpClient and the non-blocking HttpAsyncClient:
clients are thread-safe
connection managers are thread-safe
request / response messages are not thread safe
contexts are not thread safe
As far as HttpAsyncClient is concerned, as long as you do not use additional threads to process / generate content, HttpAsyncClient ensures proper access synchronization of all components involved.
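The rules above can be demonstrated with a minimal, self-contained sketch (plain JDK, no Apache dependency; `FakeAsyncClient` is a stand-in for a real async client, used purely for illustration): one thread-safe client instance is shared by all callers, while each call builds its own request.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SharedClientDemo {
    // Stand-in for a thread-safe async client: one instance, shared by everyone.
    static class FakeAsyncClient {
        // Mirrors the default I/O reactor: one dispatch thread per CPU core.
        private final ExecutorService reactor =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        CompletableFuture<String> execute(String request) {
            // The client holds no per-call mutable state, so concurrent calls are safe.
            return CompletableFuture.supplyAsync(() -> "response-to-" + request, reactor);
        }

        void close() { reactor.shutdown(); }
    }

    public static void main(String[] args) throws Exception {
        FakeAsyncClient client = new FakeAsyncClient(); // ONE client for the whole app
        List<CompletableFuture<String>> futures = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            // Request objects are per-call: never share or mutate them across threads.
            futures.add(client.execute("req-" + i));
        }
        for (CompletableFuture<String> f : futures) {
            System.out.println(f.get());
        }
        client.close();
    }
}
```

The shape carries over directly: share the client and connection manager, but give every call its own request, response, and context objects.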

Creating Delphi timers dynamically at runtime (performance, CPU consumption)

In my current project I have a structure like this:
Main Thread (GUI):
->Parser Thread
->Healer Thread
->Scripts Thread
The problem is that the Healer & Scripts threads have to create child threads, each with its appropriate timer, so it would look like this:
->Parser Thread
->Healer Thread:
-->Healer 1
-->Healer 2
--> (...)
->Scripts Thread:
-->Script 1
--> (...)
To do this, I have thought about coding a timer that would be created dynamically at runtime when a new Heal/Script is added.
Now the problem/question is:
I might have some 20 timers running at the same time because of this; wouldn't that be a problem for my program's performance (CPU consumption, etc.)?
Is this the best way to achieve what I'm looking for?
Thanks in advance
There's no problem with having up to 20 timers active at one time in an application. Modern hardware is more than capable of handling that.
Remember also that timer messages are low priority messages and so are only synthesised when the message queue is empty. So, you need to keep the message queues of your threads serviced promptly in order for the messages to be delivered in a timely manner.
A bigger problem for you is that you cannot create TTimer instances outside the GUI/VCL thread. That's because the timer component calls AllocateHWnd which is not thread safe and can only be called from the GUI/VCL thread. So, you'll need to interact with the raw Win32 timer API directly and not use the VCL TTimer wrapper.
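One raw Win32 option that avoids both TTimer and a per-thread message loop is a timer-queue timer, which invokes its callback on a system thread-pool thread. A minimal sketch, assuming CreateTimerQueueTimer is available from the Windows unit; the intervals and names are illustrative:

```delphi
uses Windows;

var
  TimerHandle: THandle;

procedure TimerCallback(Context: Pointer; TimerFired: Boolean); stdcall;
begin
  // Runs on a system thread-pool thread, not your worker thread;
  // synchronize access to any shared Heal/Script state.
end;

// Inside the worker thread, when a new Heal/Script is added:
CreateTimerQueueTimer(TimerHandle, 0, @TimerCallback, nil,
  1000 { due time, ms }, 1000 { period, ms }, WT_EXECUTEDEFAULT);

// When the Heal/Script is removed:
DeleteTimerQueueTimer(0, TimerHandle, INVALID_HANDLE_VALUE);
```

Alternatively, SetTimer can be used from a worker thread that runs its own message loop; either way the VCL TTimer wrapper stays out of the picture.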

Detecting whether any thread is waiting for an event

Let's say I have a manual event handle h (created with CreateEvent, manual).
There are several threads in my application, some thread(s) might be waiting for this event (WaitForSingleObject, WaitForMultipleObject).
At certain times in my application, I want to assert that no thread is waiting for this handle h.
Is there a Windows API function that tells me whether any thread is waiting for event h at that moment in time?
I don't believe that the Windows API provides any public mechanism for giving out that information (whether or not threads are waiting for a synchronization object). It is something that a typical application should not need to know and would likely result in race conditions if it were provided.
For example, if the application checked to verify that no threads were waiting and then made a decision based on that, it could easily be wrong because a thread may in the very next clock cycle actually start waiting for the event, so the information would be stale and potentially wrong immediately after the check.

Question about message loop

I have a question that has been haunting me for a long time.
Short version:
What is the working paradigm of the Windows message loop?
Detailed version:
When we start a Windows application (not a console application), we can interact with it through the mouse or keyboard. The application retrieves all kinds of messages representing our actions from its message queue, and it is Windows that is responsible for collecting our actions and feeding the proper messages into this queue. But doesn't this scenario mean that Windows has to run indefinitely?
I think the Windows scheduler must be running all the time. It could be invoked by a timer interrupt at a pre-defined interval. When the scheduler is triggered by the timer interrupt, it switches out the current thread for the next pending thread. A single thread can only get its message with GetMessage() when it is scheduled to run.
I am wondering: if there's only one Windows application running, will this application get more chances to retrieve its messages?
Update - 1 (9:59 AM 11/22/2010)
Here is my latest finding:
According to Windows via C/C++, 5th Edition, Chapter 7, section "Thread Priorities":
...For example, if your process' primary thread calls GetMessage() and the system sees that no messages are pending, the system suspends your process' thread, relinquishes the remainder of the thread's time slice, and immediately assigns the CPU to another waiting thread.
If no messages show up for GetMessage to retrieve, the process' primary thread stays suspended and is never assigned to a CPU. However, when a message is placed in the thread's queue, the system knows that the thread should no longer be suspended and assigns the thread to a CPU if no higher-priority threads need to execute.
My current understanding is:
In order for the system to know when a message is placed in a thread's queue, I can think of 2 possible approaches:
1 - Centralized approach: it is the system that is responsible for always checking EVERY thread's queue, even if that thread is blocked for lack of messages. If any message is available, the system will change the state of that thread to schedulable. But this checking could be a real burden on the system, in my opinion.
2 - Distributed approach: the system doesn't check every thread's queue. When a thread calls GetMessage and finds that no message is available, the system will just change the thread's state to blocked, and thus no longer schedulable. In the future, no matter who places a message into a blocked thread's queue, it is this "who" (not the system) that is responsible for changing the thread's state from blocked to ready (or whatever state). So the thread is disqualified from scheduling by the system and re-qualified by someone else with regard to GetMessage. All the system cares about is scheduling the runnable threads; it doesn't care where these schedulable threads come from. This approach avoids the burden in approach 1, and thus the possible bottleneck.
In fact, the key point here is: how are the states of the threads changed? I am not sure whether it is really a distributed paradigm as shown in approach 2, but could it be a good option?
Applications call GetMessage() in their message loop. If the message queue is empty, the process will just block until another message becomes available. Thus, GetMessage is a process's way of telling Windows that it doesn't have anything to do at the moment.
I am wondering if there's only one Windows application running, will this application get more chances to retrieve its messages?
Well, yeah, probably, but I think you might be missing a crucial point. Extracting a message from the queue is a blocking call. The data structure used is usually referred to as a blocking queue. The dequeue operation is designed to voluntarily yield the current thread's execution if the queue is empty. Threads can stay parked using various methods, but in this case the thread likely remains in a waiting state using kernel-level mechanisms. Once the signal is given that the queue has items available, the thread may go into a ready state and the scheduler will start assigning it its fair share of the CPU. In other words, if there are no messages pending for that application, it just sits there in an idle state consuming close to zero CPU time.
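The blocking-queue behaviour described above can be sketched with Java's LinkedBlockingQueue standing in for a thread's message queue (the message names are illustrative): take() parks the consumer thread until a message arrives, so an idle "application" consumes essentially no CPU, exactly as GetMessage does.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MessageLoopDemo {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> messageQueue = new LinkedBlockingQueue<>();

        // The "message loop": the thread is parked by the kernel while the queue is empty.
        Thread loop = new Thread(() -> {
            try {
                while (true) {
                    String msg = messageQueue.take(); // analogous to GetMessage()
                    if (msg.equals("WM_QUIT")) break; // quit message ends the loop
                    System.out.println("dispatched " + msg);
                }
            } catch (InterruptedException ignored) { }
        });
        loop.start();

        // Some other party posts messages; this is what wakes the parked thread.
        messageQueue.put("WM_PAINT");
        messageQueue.put("WM_QUIT");
        loop.join();
    }
}
```

Posting into the queue is what moves the parked thread back to the ready state; nothing polls the queue in the meantime, which is the "distributed" approach 2 from the question.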
The fewer threads you have running (time slices are scheduled to threads, not processes), the more chances any single application will have to pull messages from its queue. Actually, this has nothing to do with Windows messages; it's true for all multithreading; the more threads of the same or higher priority which are running, the fewer time slices any thread will get.
Beyond that, I'm not sure what you are really asking, though...
