How can I execute long running background code without monopolizing a Goroutine thread? - go

I have a CPU-bound Go service that receives a high volume of time-sensitive work. As work is performed, data is pushed to a queue to be periodically processed in the background. The processing is low priority, performed by an external package, and can take a long time.
This background processing is causing a problem, because it's not really happening in the background: it ties up an entire OS thread that the Go scheduler could otherwise use, forcing the service to run at reduced capacity and slowing the rate at which it can process work.
There are obviously solutions like performing the background work out-of-process, but this would add an unacceptable level of complexity to the service.
Given that the background processing code isn't mine and I can't add yields, is there any way to prevent it from hogging an entire Goroutine thread?
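Since the question rules out moving the work out of process, here is a minimal sketch (not from the original thread) of the usual in-process mitigation: confine the low-priority work to a single goroutine, so it can occupy at most one scheduler thread, and yield between queue items since we can't yield inside the external call. The process function and the string work items are stand-ins for the external package and the real queue.

```go
// A minimal sketch, not from the question: one background goroutine drains the queue,
// so at most one scheduler thread is ever busy with low-priority work, and it yields
// between items (we cannot yield inside the external call itself).
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func process(item string) { fmt.Println("processed", item) } // stand-in for the external package

func main() {
	workQueue := make(chan string, 100) // the existing background queue
	var wg sync.WaitGroup

	wg.Add(1)
	go func() { // exactly one background goroutine -> at most one busy thread
		defer wg.Done()
		for item := range workQueue {
			process(item)     // long-running, CPU-bound external call
			runtime.Gosched() // hand the thread back between items
		}
	}()

	for i := 0; i < 3; i++ {
		workQueue <- fmt.Sprintf("item-%d", i)
	}
	close(workQueue)
	wg.Wait()
}
```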

Think of your server as the producer and the background processing as the consumer, and run the consumer on another machine.
If the consumer is a single process, you can also limit its CPU and memory at the OS level.
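To illustrate that suggestion, here is a hedged sketch of the producer/consumer split as a separate OS process, assuming a Unix-like host and a hypothetical ./consumer binary that reads work from stdin; its CPU priority is lowered with nice, and memory could be capped the same way with cgroups or ulimit, so it no longer competes with the service's scheduler threads.

```go
// A sketch of the out-of-process approach (assumes a Unix-like host and a hypothetical
// ./consumer binary): the server stays the producer and the consumer runs as a separate,
// low-priority process.
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Start the consumer at the lowest CPU priority; memory limits could be added
	// via cgroups or ulimit in the same spirit.
	cmd := exec.Command("nice", "-n", "19", "./consumer")
	stdin, err := cmd.StdinPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	// Producer side: hand queued work to the consumer process.
	if _, err := stdin.Write([]byte("one unit of work\n")); err != nil {
		log.Fatal(err)
	}
	stdin.Close()

	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}
```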

Related

Continuously running code in Win32 app

I have a working GUI and now need to add some code that will need to run continuously and update the GUI with data. Where should this code go? I know that it should not go into the message loop because it might block incoming messages to the window, but I'm confused on where in my window process this code could run.
You have a choice: you can use a thread and post messages back to the main thread to update the GUI (or update the GUI directly, but don't try this if you used MFC), or you can use a timer that posts you messages periodically; you then simply implement a handler for the timer and do whatever you need to do there.
The thread is best for a complicated, slow process that might block. If the process of getting data is quick (and/or can be set to timeout on error) then a timer is simpler.
Have you looked into threading at all?
Typically, you would create one thread that performs the background task (in this case, reading the voltage data) and stores it in a shared buffer. The GUI thread simply reads that buffer every so often (on redraw, every 30 seconds, when the user clicks refresh, etc.) and displays the data.
Your background thread runs on its own schedule, getting CPU time from the OS, and is not bound to the UI or message pump. It can use some type of timer to monitor the data source and read things in as necessary.
Now, since the threads run separately and may run at the same time, you need to make them aware of one another. This can be done with locks (look into mutexes). For example:
The background/monitor thread reads the current voltage and stores it in an internal buffer.
It locks the shared buffer holding the latest sample.
It copies the internal buffer into the shared one.
It unlocks the shared buffer.
Simultaneously, but separately, the UI thread:
Gets a redraw call.
Acquires the same lock (waiting if the monitor thread currently holds it), reads the value, and releases the lock.
Draws the UI with the buffer value.
Setting up a new thread and using it, in most Windows GUI-producing languages, is pretty simple. C/C++ and C# both have very simple APIs for creating a new thread and having it work on some task; you usually just need to provide a function for the thread to process. See the MSDN docs on CreateThread for a C example.
The concept of threading and locking is for the most part language-agnostic, and similar in most C-inspired languages. You'll need to have your main (in this case, probably UI) thread control the lifetime of the worker: start the worker after the UI is created, and kill it before the UI is shut down.
This approach has a bit of up-front overhead, especially if your data fetch is very simple, but if the data source changes later (a network request, some blocking data source, reading over actual wires from a physical sensor, etc.), only the monitor thread needs to change and the UI doesn't need to know.
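Since the answer notes that the threading-and-locking concept is language-agnostic, here is the monitor-thread-plus-locked-buffer pattern sketched in Go rather than Win32 C (CreateThread plus a critical section would play the same roles); readSensor and the sampling/redraw intervals are invented for the example.

```go
// Language-agnostic sketch of the pattern above: a background "monitor" samples data
// into a mutex-protected shared buffer, and the "UI" loop copies it out under the lock.
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

type sharedSample struct {
	mu    sync.Mutex
	value float64
}

func readSensor() float64 { return rand.Float64() * 5.0 } // hypothetical data source

func main() {
	var latest sharedSample
	done := make(chan struct{})

	// Background/monitor thread: read into a local variable, then lock only long
	// enough to copy it into the shared buffer.
	go func() {
		for {
			select {
			case <-done:
				return
			default:
			}
			v := readSensor()
			latest.mu.Lock()
			latest.value = v
			latest.mu.Unlock()
			time.Sleep(100 * time.Millisecond)
		}
	}()

	// "UI" loop: on each redraw tick, take the lock, copy the value out, and draw.
	for i := 0; i < 5; i++ {
		time.Sleep(300 * time.Millisecond)
		latest.mu.Lock()
		v := latest.value
		latest.mu.Unlock()
		fmt.Printf("redraw: latest sample = %.2f V\n", v)
	}
	close(done)
}
```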

Handling if there is not enough memory available to start this thread c#

I have a system that starts a new thread for each request to the application.
If the application receives hundreds of requests, there may not be enough memory available to start a new thread, and it will throw an exception.
I would like to know an ideal mechanism to handle this kind of situation:
for example, if the application is receiving lots of requests and there is not enough memory, or the number of active threads has reached the maximum, I would delay processing the other requests.
But I have no idea how to implement this.
Easy solution: Increase thread-pool limits. This is actually viable although out of fashion these days.
More thorough solution: Use a SemaphoreSlim to limit the number of concurrently active requests. Make sure to wait asynchronously; if you wait synchronously you'll again burn a thread while waiting. After having waited asynchronously you can resume normal synchronous, blocking processing. This requires only small code changes.
Most thorough solution: Implement your processing fully async. That way you never run out of threads.
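The SemaphoreSlim advice is C#-specific, but the throttling pattern itself is general; here is the same idea sketched in Go, where a buffered channel plays the role of the semaphore and a goroutine blocked on it does not burn an OS thread, which is the analogue of waiting asynchronously. handleRequest and the limit of 4 are invented for the example.

```go
// Throttle concurrent request processing with a counting semaphore: excess requests
// wait for a slot instead of each consuming a thread's worth of memory.
package main

import (
	"fmt"
	"sync"
	"time"
)

const maxConcurrent = 4

var slots = make(chan struct{}, maxConcurrent) // counting semaphore

func handleRequest(id int) {
	slots <- struct{}{}        // acquire a slot; blocks cheaply if all are taken
	defer func() { <-slots }() // release the slot when done

	time.Sleep(50 * time.Millisecond) // stand-in for the real work
	fmt.Println("finished request", id)
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 20; i++ { // 20 requests arrive, but only 4 are processed at a time
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			handleRequest(id)
		}(i)
	}
	wg.Wait()
}
```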

Erlang "system" memory section keeps growing

I have an application with the following pattern:
2 long-running processes that hibernate after some idle time, and their memory consumption goes down as expected
N (0 < N < 100) worker processes that do some work, hibernate when idle for more than 10 seconds, and terminate if idle for more than two hours
During the night, when there is no activity, the process memory goes back to almost the same value it had at application start, which is expected, as all the workers have died.
The issue is that "system" section keeps growing (around 1GB/week).
My question is how can I debug what is stored there or who's allocating memory in that area and is not freeing it.
I've already tested lists:keysearch/3 and it doesn't seem to leak memory, as that is the only native thing I'm using (no ports, no drivers, no NIFs, no BIFs, nothing). Erlang version is R15B03.
Here is the current erlang:memory() output (slight traffic, app started on Feb 03):
[{total,378865650},
{processes,100727351},
{processes_used,100489511},
{system,278138299},
{atom,1123505},
{atom_used,1106100},
{binary,4493504},
{code,7960564},
{ets,489944},
{maximum,402598426}]
This is a 64-bit system. As you can see, "system" section has ~270MB and "processes" is at around 100MB (that drops down to ~16MB during the night).
It seems that I've found the issue.
I have a "process_killer" gen_server where processes can subscribe for periodic GC or kill. Its subscribe functions are called on each message received by some processes to postpone the GC/kill (something like re-arm).
This process performs an erlang:monitor (if the target is not already monitored) to catch a dead process and remove it from the watch list. If I comment out the re-subscription line on each handled message, the "system" area seems to behave normally. That means it is a bug in my process_killer that leaks monitor refs (remember that you can call erlang:monitor multiple times and each call creates a new reference).
I was led to this idea because I tested a simple module that was calling erlang:monitor in a loop, and I saw the "system" area grow by ~13 bytes on each call.
The workers themselves were OK, because they would die anyway, taking their monitors along with them. But there is one long-running process (it starts with the app and stops with the app) that dispatches all the messages to the workers and was calling the GC re-arm on each received message, so we're talking about tens of thousands of monitors created per hour and never released.
I'm writing this answer here for future reference.
TL;DR; make sure you are not leaking monitor refs on a long running process.
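The bug itself is Erlang-specific, but the bookkeeping mistake is not; as a rough illustration only, here is the re-arm pattern sketched in Go, where the fix is the same shape: reuse the existing watch (here a timer) on each re-arm instead of allocating a new one per message on a long-lived process. The killer type and all names are invented for the example.

```go
// Re-arm without leaking: keep at most one timer per worker and Reset it, rather than
// creating a fresh timer (the analogue of a fresh monitor ref) on every message.
package main

import (
	"fmt"
	"sync"
	"time"
)

type killer struct {
	mu     sync.Mutex
	timers map[string]*time.Timer // one timer per worker, created at most once
}

func newKiller() *killer { return &killer{timers: make(map[string]*time.Timer)} }

// rearm postpones the kill for a worker. A leaky version would allocate a new timer
// on every message; here an existing timer is reset, so re-arming allocates nothing.
func (k *killer) rearm(worker string, idle time.Duration) {
	k.mu.Lock()
	defer k.mu.Unlock()
	if t, ok := k.timers[worker]; ok {
		t.Reset(idle)
		return
	}
	k.timers[worker] = time.AfterFunc(idle, func() {
		fmt.Println("killing idle worker", worker)
		k.mu.Lock()
		delete(k.timers, worker) // clean up so the map doesn't leak either
		k.mu.Unlock()
	})
}

func main() {
	k := newKiller()
	for i := 0; i < 1000; i++ { // many messages, still only one timer for this worker
		k.rearm("worker-1", 100*time.Millisecond)
	}
	time.Sleep(200 * time.Millisecond)
}
```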

Question about message loop

I have a question that has been haunting me for a long time.
Short version:
What's the working paradigm of Windows Message Loop?
Detailed version:
When we start a Windows application (not a console application), we can interact with it through the mouse or keyboard. The application retrieves all kinds of messages representing our actions from its message queue, and it is Windows that is responsible for collecting our actions and feeding the messages into this queue. But doesn't this scenario mean that Windows has to run indefinitely?
I think the Windows scheduler should be running all the time. It could be invoked by a timer interrupt at a pre-defined interval. When the scheduler is triggered by the timer interrupt, it switches from the current thread to the next pending thread. A single thread can only get its messages with GetMessage() when it is scheduled to run.
I am wondering: if there's only one Windows application running, will this application get more chances to get its messages?
Update - 1 (9:59 AM 11/22/2010)
Here is my latest finding:
According to Windows via C/C++, 5th Edition, Chapter 7, section "Thread Priorities":
...For example, if your process' primary thread calls GetMessage() and the system sees that no messages are pending, the system suspends your process' thread, relinquishes the remainder of the thread's time slice, and immediately assigns the CPU to another waiting thread.
If no messages show up for GetMessage to retrieve, the process' primary thread stays suspended and is never assigned to a CPU. However, when a message is placed in the thread's queue, the system knows that the thread should no longer be suspended and assigns the thread to a CPU if no higher-priority threads need to execute.
My current understanding is:
In order for the system to know when a message is placed in a thread's queue, I can think of 2 possible approaches:
1 - Centralized approach: The system is responsible for always checking EVERY thread's queue, even when a thread is blocked for lack of messages. If any message is available, the system changes the state of that thread to schedulable. But this checking could be a real burden on the system, in my opinion.
2 - Distributed approach: The system doesn't check every thread's queue. When a thread calls GetMessage and finds that no message is available, the system just changes the thread's state to blocked, so it is no longer schedulable. From then on, no matter who places a message into a blocked thread's queue, it is this "who" (not the system) that is responsible for changing the thread's state from blocked to ready (or whatever state). So the thread is disqualified from scheduling by the system and re-qualified by someone else with regard to GetMessage. All the system cares about is scheduling the runnable threads; it doesn't care where these schedulable threads come from. This approach avoids the burden in approach 1, and thus the possible bottleneck.
In fact, the key point here is: how are the states of the threads changed? I am not sure whether it is really a distributed paradigm as shown in approach 2, but could it be a good option?
Applications call GetMessage() in their message loop. If the message queue is empty, the process will just block until another message becomes available. Thus, GetMessage is a process's way of telling Windows that it doesn't have anything to do at the moment.
I am wondering if there's only one Windows application running, will this application get more chances to get its messages?
Well, yeah, probably, but I think you might be missing a crucial point. Extracting a message from the queue is a blocking call. The data structure used is usually referred to as a blocking queue. The dequeue operation is designed to voluntarily yield the current thread's execution if the queue is empty. Threads can stay parked using various different methods, but in this case the thread likely remains in a waiting state using kernel-level mechanisms. Once the signal is given that the queue has items available, the thread may go into a ready state and the scheduler will start assigning it its fair share of the CPU. In other words, if there are no messages pending for that application, it just sits there in an idle state consuming close to zero CPU time.
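For a language-agnostic picture of that blocking dequeue, here is a small Go sketch: the consumer parks on an empty queue and uses essentially no CPU until a producer hands it a message, which is roughly what Windows does inside GetMessage with kernel wait objects. The message names are just placeholders.

```go
// A blocking "message loop": the receiving goroutine is descheduled while the queue is
// empty and is woken only when a message is posted.
package main

import (
	"fmt"
	"time"
)

func main() {
	queue := make(chan string) // the "message queue"

	go func() { // the "message loop" thread
		for msg := range queue { // blocks (parked, ~0% CPU) while the queue is empty
			fmt.Println("handling message:", msg)
		}
	}()

	time.Sleep(500 * time.Millisecond) // no messages yet: the loop just sits idle
	queue <- "WM_PAINT"                // posting a message wakes the waiting goroutine
	queue <- "WM_KEYDOWN"
	close(queue)
	time.Sleep(100 * time.Millisecond) // give the handler time to print before exit
}
```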
The fewer threads you have running (time slices are scheduled to threads, not processes), the more chances any single application will have to pull messages from its queue. Actually, this has nothing to do with Windows messages; it's true for all multithreading: the more threads of the same or higher priority that are running, the fewer time slices any one thread will get.
Beyond that, I'm not sure what you are really asking, though...

Clarifying... So Background Jobs don't Tie Up Application Resources (in Rails)?

I'm trying to get a better grasp of the inner workings of background jobs and how they improve performance.
I understand that the goal is to have the application return a response to the user as fast as it can, so you don't want to, say, parse a huge feed that would take 10 seconds because it would prevent the application from being able to process any other requests.
So it's recommended to put any operation that takes more than, say, 500ms to execute into a queued background job.
What I don't understand is: doesn't that just delay the same problem? I know the user who invoked that background job will get an immediate response, but what if another user comes along right when that background job starts (and it takes 10 seconds to finish)? Won't that user have to wait?
Or is the main issue that requests can only be handled one at a time, while a request can start even when one or more background jobs are in the middle of running?
Is that correct?
The idea of a background process is that it takes care of all the long-running work.
Basically, it is an external application, running outside of the webserver, with one or several processes that handle the queued jobs.
So it doesn't matter if another user requests a page: since the job is not occupying the webserver, that user will not have to wait for anything to finish.
If that user also does something that gets put in the background queue, it will just stack up there until the first job is finished (or, in the case where there are multiple processes handling the queue, as soon as one becomes available).
Hope this explanation makes it a bit clearer :)
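As a rough sketch of that flow, in Go rather than a Rails job queue such as Sidekiq or Resque (so the names and sizes are invented): request handlers only enqueue and return immediately, while a fixed pool of workers drains the queue, so slow jobs stack up in the queue rather than blocking new requests.

```go
// Request handlers enqueue work and respond right away; a small worker pool processes
// the queued jobs outside the request path.
package main

import (
	"fmt"
	"sync"
	"time"
)

func parseFeed(url string) { // the slow, "takes 10 seconds" style job
	time.Sleep(200 * time.Millisecond)
	fmt.Println("finished parsing", url)
}

func main() {
	jobs := make(chan string, 100) // the background queue
	var workers sync.WaitGroup

	for w := 0; w < 2; w++ { // two background workers, independent of request handling
		workers.Add(1)
		go func() {
			defer workers.Done()
			for url := range jobs {
				parseFeed(url)
			}
		}()
	}

	// "Request handlers": each one just enqueues and responds immediately.
	for i := 0; i < 5; i++ {
		jobs <- fmt.Sprintf("https://example.com/feed/%d", i)
		fmt.Println("responded to user", i, "immediately")
	}
	close(jobs)
	workers.Wait()
}
```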

Resources