How ought I limit Thread creation using Concurrent-Ruby?

I have a process which uses the concurrent-ruby gem to handle a large number of API calls concurrently using Concurrent::Future.execute, and, after some time, it dies:
ERROR -- : can't create Thread (11) (ThreadError)
/current/vendor/bundler_gems/ruby/2.0.0/bundler/gems/concurrent-ruby-cba3702c4e1e/lib/concurrent/executor/ruby_thread_pool_executor.rb:280:in `initialize'
Is there a simple way I can tell Concurrent to limit the number of threads it spawns, given I have no way of knowing in advance just how many API calls it's going to need to make?
Or is this something I need to code for explicitly in my app?
I am using Ruby 2.0.0 (alas, I don't currently have the option to change that).

After some reading and some trial and error I have worked out the following solution. Posting here in case it helps others.
You control the way Concurrent uses threads by specifying a RubyThreadPoolExecutor [1].
So, in my case the code looks like:
threadPool = Concurrent::ThreadPoolExecutor.new(
  min_threads: [2, Concurrent.processor_count].min,
  max_threads: [2, Concurrent.processor_count].max,
  max_queue: [2, Concurrent.processor_count].max * 5,
  overflow_policy: :caller_runs
)
result_things = massive_list_of_things.map do |thing|
  Concurrent::Future.new(executor: threadPool) do
    expensive_api_call using: thing
  end.execute
end
So on my laptop, which has 4 processors, this will use between 2 and 4 threads and allow up to 20 tasks to wait in the queue before forcing execution onto the calling thread. As threads free up, the library reuses them for the queued work.
Choosing the right multiplier for the max_queue value looks like being a matter of trial and error, but 5 seems a reasonable starting point.
[1] The docs describe a different way to do this, but the actual code disagrees with the docs, so the code I have presented here is based on what actually works.
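If you then need the results out of those futures, something along these lines should work (a small sketch building on the result_things array above; Future#value blocks until that future has finished):

results = result_things.map(&:value)   # collect each future's result, waiting as needed

# A future that raised inside its block returns nil from #value;
# #rejected? and #reason let you inspect the failures.
failed = result_things.select(&:rejected?)
failed.each { |f| puts f.reason }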

The typical answer to this is to create a thread pool.
Create a finite number of threads and keep track of which are active and which aren't. When a thread finishes an API call, mark it as inactive so the next call can be handled by it.
The gem you're using already has thread pools.
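If you did want to roll your own, a minimal version of that idea is just a fixed set of worker threads pulling jobs off a Queue (a sketch only; api_calls and do_api_call are hypothetical stand-ins for your own list of work and API call):

require 'thread'

jobs    = Queue.new
workers = 10.times.map do
  Thread.new do
    # Each worker pulls the next job as soon as it is free;
    # a nil job is the signal to shut down.
    while (job = jobs.pop)
      do_api_call(job)
    end
  end
end

api_calls.each { |call| jobs << call }
workers.size.times { jobs << nil }
workers.each(&:join)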

Related

Parallel Processing with Starting New Task - front end screen timeout

I am running an ABAP program to work with a huge amount of data. The SAP documentation says that I should use remote function modules with the addition STARTING NEW TASK to process the data.
So my program first selects all the data, breaks the data into packages and calls a function module with a package of data for further processing.
So that's my pseudo code:
SELECT KEYFIELD FROM MYSAP_TABLE INTO TABLE KEY_TABLE PACKAGE SIZE 500.
  APPEND KEY_TABLE TO ALL_KEYS_TABLE.
ENDSELECT.

LOOP AT ALL_KEYS_TABLE ASSIGNING <fs_table>.
  CALL FUNCTION 'Z_MASS_PROCESSING'
    STARTING NEW TASK 'TEST' DESTINATION IN GROUP DEFAULT
    EXPORTING
      IT_DATA = <fs_table>.
ENDLOOP.
But I am surprised to see that dialog processes are being used instead of background processes for the call of my function module.
Now I have encountered the problem that one of my dialog processes was killed after 60 minutes because of a timeout.
For me, it seems that STARTING NEW TASK is not the right solution for parallel processing of mass data.
What will be the alternative?
As already mentioned, this is not an easy topic that can be handled with a few lines of code. The general steps you have to carry out thoughtfully to gain the desired benefit are:
1) Get free work processes available for parallel processing
2) Slice your data in packages to be processed
3) Call an RFC enabled function module asynchronously for each package with the available work processes. Handle waiting for free work processes, if packages > available processes
4) Receive your results asynchronously
5) Wait till everything is processed and merge the data together again and assure that every package was handled properly
Although it is bad practice to just post links, the code is very long and would make this answer very messy, therefore take a look at the following links:
Example1-aRFC
Example2-aRFC
Example3-aRFC
Other RFC variants (e.g. qRFC, tRFC, etc.) can be found here with a short description, but sadly I cannot give you further insight into them.
EDIT:
Regarding process type of aRFC:
In parallel processing, a job step is started as usual in a background processing work process. (...) While the job itself runs in a background process, the parallel processing tasks that it starts run in dialog work processes. Such dialog work processes may be located on any SAP server.
The server group is specified with GROUP (default: parallel_generators; see transaction RZ12) and can have its own resources reserved just for parallel processing. If your process times out, you have to slice your packages into different sizes.
I think the best way to do parallel processing in SAP is the Bank Parallel Processing framework, as Jagger mentioned. Unfortunately it is rarely mentioned in any resource and it is not documented well.
Actually, the best documentation I found was in this book:
https://www.sap-press.com/abap-performance-tuning_2092/
Yes, it's tricky. It cost me about 5 or 6 days to get it going, but the results were good.
Everything is located in the package BANK_PP_JOBCTRL, and you can use that name for googling.
Main idea there is to divide all your work into steps (simplified):
Preparation
Parallel processing
2.1. Processing preparation
2.2. Processing
(Actually there are more steps there)
The first step is not parallelized. Here you should prepare all your data for parallel processing and divide it into 'pieces' which will be processed in parallel.
The content of a piece, in turn, can be IDs or preloaded data.
After that, you can run step 2 in parallel processing.
A great benefit of all this is that an error in one piece of parallel work won't crash all of your processing.
I recommend you check the demo in function group BANK_API_PP_DEMO.
To implement parallel processing, you need to do a bit more than just add that clause. The information is contained in this help topic. A lot of design effort needs to be devoted to ensuring that the communication and result-merging overhead of the parallel processing does not negate the performance advantage gained in the first place, and that referential integrity of the data is maintained even when some of the parallel tasks fail. Do not underestimate the complexity of this task.
You could make use of the bgRFC technique. This is a new method of background processing made by SAP.
bgRFC offers, in addition to the already existing IN BACKGROUND TASK, the ability to configure and monitor all calls which run through this method.
You can read more documentation between the different possibilities here. This is all (of course) depending on your SAP version.

How can I manage multiple threads in Ruby on Rails (equivalent of a Java thread factory)?

I'm using Ruby on Rails 5, although I'm fairly new to Ruby/Rails. I have read about creating threads using:
t = Thread.new {
  sleep(rand(0) / 10.0)
  Thread.current["mycount"] = count
  count += 1
}
However, I'm wondering if there is a standard way of managing a bunch of application-created threads in Ruby/Rails. I'm familiar with Java, which has a thread factory. That allows a certain number of threads to concurrently run while others must wait in a queue. I'm wondering how I would do something similar in Ruby/Rails.
Note that I'm not talking about the types of threads that are generated automatically when a web page is requested. I'm talking about threads that I (the application owner) creates.
I think https://github.com/ruby-concurrency/concurrent-ruby is the most widely used library of concurrency utilities for Ruby.
It has a lot of useful things, including thread pools (http://ruby-concurrency.github.io/concurrent-ruby/file.thread_pools.html), which I think is what you are looking for.
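For example, a fixed-size pool from that library might look roughly like this (a sketch; the pool size and do_some_work are arbitrary placeholders):

require 'concurrent'

pool = Concurrent::FixedThreadPool.new(5)   # at most 5 threads run at once

100.times do |i|
  pool.post do
    do_some_work(i)   # work beyond the 5 running threads waits in the pool's queue
  end
end

pool.shutdown                # stop accepting new work
pool.wait_for_termination    # block until the queued work has finished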
Keep in mind that MRI Ruby has a GIL, so you only get real parallelism if your threads are waiting on IO. For heavy computation you might want to use JRuby or look elsewhere :)
Especially if you're using ActiveRecord inside your threads, you need to be careful about concurrency issues, for example database connections leaking.
Usually if you want to launch a new thread, you want to do something asynchronously in the background without making the user's request wait on an expensive action. For this there are some great libraries like Sucker Punch and Sidekiq; I'd recommend using one of these instead of creating and managing threads manually.
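For instance, a Sucker Punch job is just a class with a perform method, and its jobs run on a small thread pool inside the same process (a rough sketch; the class name and generate_report are made up):

class ReportJob
  include SuckerPunch::Job

  def perform(user_id)
    # Runs on a worker thread from Sucker Punch's internal pool.
    generate_report(user_id)
  end
end

ReportJob.perform_async(42)   # returns immediately; the work happens in the background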
Hope this helps

How to run multiple threads at the same time in ruby while working with a file?

I've been messing around with Ruby and threading a little bit today. I have a list of proxies that I want to check. Assuming a timeout of 10 seconds, going through a very large list of proxies will take many hours if I write something like:
proxies.each do |proxy|
  check_proxy(proxy)
end
My first problem in trying to figure out threads is how to START multiple at the exact same time. I found a neat little snippet of code online:
for page in pages
  threads << Thread.new(page) { |myPage|
    puts "Fetching: #{myPage}\n"
    doc = Hpricot(open(myPage.to_s)).to_s
    puts "Got #{myPage}: #{doc.size}"
  }
end
Seems to work nicely as far as starting them all at the same time. So now I can... start checking all 7 thousand records at the same time?
How do I go to a file, take out a line for each thread, run a batch of like 20 and repeat the process?
Can I run a while loop that in turn starts 20 threads at the same (which remove lines from a file) and keeps going until the file is blank?
I'm a little weak on the logic of what I'm supposed to do.
Thanks guys!
PS.
Another thought: Will there be file access issues if 20 workers are constantly messing with it randomly? What would be a good way around that if this is so?
The keyword you are after is thread pool. You can either try to find one for Ruby (I am sure there are at least a couple on GitHub), or roll your own.
There's a simple implementation here on SO.
Re: the file access, IMO you shouldn't let workers alter the file directly, but do it in your main thread. You don't want to allow simultaneous edits there.
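Putting those two points together, a sketch along these lines is one way to do it: the main thread is the only one touching the file, and a fixed batch of workers pull proxies from a queue (check_proxy and proxies.txt are placeholders for your own method and file):

require 'thread'

queue   = SizedQueue.new(100)   # bounded, so the whole file isn't loaded at once
workers = 20.times.map do
  Thread.new do
    while (proxy = queue.pop)   # nil is the shutdown signal
      check_proxy(proxy)
    end
  end
end

# Only the main thread reads the file, one line at a time.
File.foreach('proxies.txt') do |line|
  queue << line.chomp
end

workers.size.times { queue << nil }
workers.each(&:join)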
Try using the delayed_job gem:
https://github.com/tobi/delayed_job
You don't need to generate that many threads in order to do this work. In fact, generating a lot of threads can decrease the overall performance of your application. If you handle checking each proxy asynchronously, without blocking, you can get by with far fewer threads.
You'd create a file manager thread to process the file. Each line gets added as a request to an array (the request queue). On the other end of the request queue you can use eventmachine to send the requests without blocking; eventmachine would also be used to receive the responses and handle the timeout. Each response can then be placed on another array (the response queue), which your file manager thread polls. The file manager thread pulls the responses from the response queue and records whether the proxy works or not.
This gets you down to just two threads. One issue you will have is limiting the number of requests in flight, since this model can send out all of the requests in under a second and flood the nearest router. In my experience you should be able to have around 500 outstanding requests at any one time.
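The shape of that design looks something like this (a structural sketch only; send_non_blocking stands in for the eventmachine part, and proxies.txt is a placeholder for your file):

require 'thread'

requests  = Queue.new
responses = Queue.new

# In the described design this thread is the eventmachine reactor: it takes
# requests off the queue, sends them without blocking, and pushes each result
# onto the response queue.
io_thread = Thread.new do
  while (proxy = requests.pop)
    responses << [proxy, send_non_blocking(proxy)]
  end
  responses << nil   # tell the file manager we're done
end

# The file manager (here simply the main thread) owns the file: it feeds
# requests in and then drains the responses.
File.foreach('proxies.txt') { |line| requests << line.chomp }
requests << nil

while (result = responses.pop)
  proxy, alive = result
  puts "#{proxy}: #{alive ? 'ok' : 'dead'}"
end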
There is more than one way to solve this problem asynchronously but hopefully the above is enough to help get you started with non-blocking I/O.

WP7 Max HTTPWebRequests

This is kind of a two-part question.
1) Is there a max number of HttpWebRequests that can be run at the same time in WP7?
I'm going to create a ScheduledTaskAgent to run a PeriodicTask. There will be 2 different REST service calls: the first one will get a list of IDs for records that need to be downloaded, and the second service will be used to download those records one at a time. I don't know how many records there will be; my guesstimate would be ±50.
2) Would making all the individual record requests at once be a bad idea (assuming that it's possible), or should I wait for a request to finish before starting another?
Having just spent a week and a half working at getting a BackgroundAgent to stay within its memory limits, I would suggest doing them one at a time.
You lose about half your memory to system libraries and the like, your first web request will take another nearly 20%, but it seems to reuse that memory on subsequent requests.
If you need to store the results into a local database, it is going to take a good chunk more. I have found a CompiledQuery uses less memory, which means holding a single instance of your context.
Between each call I would suggest doing a GC.Collect(); I even add a short Thread.Sleep() just to be sure the process has some time to tidy things up.
Another thing I do is track how much memory I am using and attempt to exit gracefully when I get to around 97 or 98%.
You cannot use the debugger to test memory limits, as the debug memory is much higher and the limits are not enforced. However, for comparative testing between versions of your code, the debugger does produce very similar results on subsequent runs over the same code.
You can track your memory usage with Microsoft.Phone.Info.DeviceStatus.ApplicationCurrentMemoryUsage and Microsoft.Phone.Info.DeviceStatus.ApplicationMemoryUsageLimit
I write a status log into IsolatedStorage so I can see the results of runs on the phone, and use ScheduledActionService.LaunchForTest() to kick them off. I then use ShellToast notifications to let me know when the task runs and when it completes; that way I can launch my app to read the status log without interrupting it.
Tyler,
My 2 cents here.
I don't believe there is any restriction on how many HttpWebRequests you can spin up. These, however, have to be async, of course, and may be served from the browser stack. Most modern browsers, including IE9, handle over 5 concurrent requests to the same domain, but you are not guaranteed a request handle immediately. However, it should not matter if you are willing to wait on a separate thread, dump your content onto the request pipe and wait for the response on yet another thread. This post (here) has a nice walkthrough of why we need to do this.
Nothing wrong with this approach either, IMO. You're just going to have to wait until all the requests have their respective pipelines & then wait for the responses.
Thanks!
1) Your memory limit in a PeriodicTask or ResourceIntensiveTask is 5 MB, so you definitely should control your requests very carefully. I don't think there is a limit in the code.
2) You have only 5 MB, so if you start all your requests at the same time it will terminate immediately.
3) I think you would be better off using a ResourceIntensiveTask, because a PeriodicTask should only run for 15 seconds.
Good guide for Multitasking features in Mango: http://blogs.infosupport.com/blogs/alexb/archive/2011/05/26/multi-tasking-in-windows-phone-7-1.aspx
I seem to remember (but can't find the reference right now) that the maximum number of requests that the OS can make at once is 7. You should avoid making this many at once though as it will stop other/system apps from being able to make requests.

Forcing context switch in Windows

Is there a way to force a context switch in C++ to a specific thread, assuming I have the thread handle or thread ID?
No, you won't be able to force the operating system to run the thread you want. You can use yield to force a context switch, though...
The yield in the Win32 API is the SwitchToThread function. If there is no other thread available to run, a zero value is returned and the current thread keeps running anyway.
You can only encourage the Windows thread scheduler to pick a certain thread; you can't force it. You do so first by making the thread block on a synchronization object and signaling it, and secondly by bumping up its priority.
Explicit context switching is supported; you'll have to use fibers. Review SwitchToFiber(). A fiber is not a thread by a long shot; it is similar to a co-routine of old. Fibers' heyday has come and gone; they are not competitive with threads anymore. They have very poor CPU cache locality and cannot take advantage of multiple cores.
The only way to force a particular thread to run is by using process/thread affinity, but I can't imagine ever having a problem for which this was a reasonable solution.
The only way to force a context switch is to force a thread onto a different processor using affinity.
In other words, what you are trying to do isn't really viable.
Calling SwitchToThread() will result in a context switch if there is another thread ready to run that is eligible to run on this processor. The documentation states it as follows:
If calling the SwitchToThread function causes the operating system to switch execution to another thread, the return value is nonzero.
If there are no other threads ready to execute, the operating system does not switch execution to another thread, and the return value is zero.
You can temporarily bump the priority of the other thread, while looping with Sleep(0) calls: this passes control to other threads. Suppose that the other thread has increased a lock variable and you need to wait until it becomes zero again:
// Wait until the other thread releases the lock.
SetThreadPriority(otherThread, THREAD_PRIORITY_ABOVE_NORMAL);
while (InterlockedCompareExchange(&lock, 0, 0) != 0)  // atomic read of the lock value
    Sleep(0);                                         // give up the rest of this time slice
SetThreadPriority(otherThread, THREAD_PRIORITY_NORMAL);
I would check out the book Concurrent Programming for Windows. The scheduler seems to do a few things worth noting.
Sleep(0) only yields to higher priority threads (or possibly others at the same priority). This means you cannot fix priority inversion situations with just a Sleep(0), where other lower priority threads need to run. You must use SwitchToThread, Sleep a non-zero duration, or fully block on some kernel HANDLE.
You can create two synchronization objects (such as two events) and use the API SignalObjectAndWait.
If hObjectToWaitOn is non-signaled and your other thread is waiting on hObjectToSignal, the OS can in theory perform a quick context switch inside this API, before the end of the time slice.
And if you want the current thread to resume automatically, simply pass a small value (such as 50 or 100) for dwMilliseconds.
