Slow down a for-in loop - Cocoa

I have a function that loops through a list of items by sending them to a server and grabbing the response. The problem I'm having is the loop is going faster than the server can handle. I need to figure out a way to slow the loop down without freezing the application. Is there a way to delay the loop from moving to the next item for a brief moment? In other languages, I'd use something like sleep(interval).

Don't slow the process down. Add the network calls to an operation queue with a limited number of concurrent operations. You may need to rewrite your network code as an NSOperation subclass but that's fairly straightforward. You can see some examples in this tutorial.
There is a built-in limit to the number of simultaneous network connections that can be made anyway, but it sounds like your server's limit is lower than that, or that you're saturating the network connections and your later calls are timing out before they've been able to start.
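For illustration, here is the concurrency-cap idea sketched in Ruby (the language used for the snippets later in this collection); the items list and send_to_server method are made-up stand-ins for your own data and request code. An NSOperationQueue with maxConcurrentOperationCount set achieves the same thing, but with a rolling window rather than fixed batches.

# Crude concurrency cap: run at most MAX_CONCURRENT requests at a time
# by working through the list in batches.
MAX_CONCURRENT = 4                      # assumption: tune to what the server can handle
items = %w[a b c d e f g h]             # stand-in for whatever you are looping over

def send_to_server(item)                # stand-in for your existing request code
  sleep 0.2                             # simulate the round trip
  puts "done: #{item}"
end

items.each_slice(MAX_CONCURRENT) do |batch|
  batch.map { |item| Thread.new { send_to_server(item) } }
       .each(&:join)                    # wait for the batch before starting the next one
end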

Instead of a sleep interval, it sounds like you want a completion block that calls the same code again until the list is empty. So once one request finishes, it goes on to the next one.
Also, I don't think you should be trying to sleep, since it will block the main thread and result in a poor user experience.
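A rough sketch of that shape, again in Ruby; the async_request helper below is a made-up stand-in for your networking call (in Cocoa, the block it takes would be the request's completion handler). Each completion kicks off the next item, so only one request is ever outstanding and the loop can never outrun the server.

# Made-up async helper: pretend to hit the server on another thread,
# then invoke the "completion block" with the response.
def async_request(item)
  Thread.new do
    sleep 0.2                           # simulate the server round trip
    yield "response for #{item}"
  end
end

items = %w[a b c]                       # stand-in for your list
done  = Queue.new                       # lets the main thread wait for the chain to finish

send_next = nil
send_next = lambda do
  item = items.shift
  if item.nil?
    done << :finished                   # list is empty: signal that we are done
  else
    async_request(item) do |response|
      puts "got #{response}"
      send_next.call                    # only now move on to the next item
    end
  end
end

send_next.call
done.pop                                # block until every item has been processed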

Related

Timeout-heavy server

I need a server to perform lots of timing operations and trigger code accordingly.
So I'll break my wonderings into simple questions:
How do timeouts and timeout callbacks usually work in terms of OS resources and threads?
(say the nodejs setTimeout(callback, delay))
Should I refrain from that and maybe have a timer worker to check every second for timeouts?
i.e. if I need 10 timeouts, keep a collection of all the timeout timestamps and check every second whether any of them is due.
What would be a good framework / platform to implement this kind of behaviour?
Please comment if you think I'm being unclear,
Thanks in advance.
"Depends"
Most timer implementations are extremely lightweight, so you can have zillions of timers going at once. A timer core is little more than a priority queue of "things soon to expire" plus a loop that periodically expires whatever is due.
Things are different when lots of them fire all at once; that is NOT lightweight, because that's when all the real work starts.
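To make the "priority queue of things soon to expire" concrete, here is a toy sketch in Ruby, purely to show the shape; real timer cores (e.g. the ones behind Node's setTimeout) are native and far more efficient.

# Toy timer core: a list of (expiry, callback) pairs kept sorted by expiry,
# and one loop that sleeps until the earliest one is due and fires it.
class ToyTimerCore
  Entry = Struct.new(:expires_at, :callback)

  def initialize
    @entries = []                       # crude priority queue, sorted by expiry
  end

  def set_timeout(delay_seconds, &callback)
    @entries << Entry.new(Time.now + delay_seconds, callback)
    @entries.sort_by!(&:expires_at)
  end

  def run
    until @entries.empty?
      wait = @entries.first.expires_at - Time.now
      sleep(wait) if wait > 0           # idle until the earliest timeout is due
      @entries.shift.callback.call      # cheap until the callbacks do real work
    end
  end
end

timers = ToyTimerCore.new
timers.set_timeout(1)   { puts "one second" }
timers.set_timeout(0.5) { puts "half a second" }
timers.run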

without using DoEvents, how to detect if a button has been pressed?

Currently, I call DoEvents in order to check if Button Foo in Form Bar has been clicked. This approach works but it takes too much processing power, delaying the program.
I believe that the delay could be reduced if I could only check if Button Foo has been clicked, instead of all the other forms that DoEvents has to go through.
Any ideas on how I can check whether Button Foo was clicked?
VB6 was not really designed for what you seem to be doing (some sort of long-running straight-line code that does not exit back to give the message loop control). Normally such a task would be delegated to a worker thread, and in VB6 this means some external component implemented in C++ most of the time.
There are only a few kinds of approaches you can take to do this for your ad-hoc logic:
Hacks creating separate threads via API calls, not very reliable in VB6 for a number of reasons.
A tricky thread-per-object ActiveX EXE implementing a class to handle your long-running workload.
A separate non-interactive worker process to be run and monitored by your GUI program.
That's pretty much it.
The prescribed method of doing this sort of thing is described in the VB6 documentation. You break your long-running loop up and invert the logic into a repeatable "quantum" of work (like n iterations of your processing loop), and maintain the state of your workload in Form-global data. Then you use a Timer control with its interval set to 1 or 16 (hardly matters, they normally take at least 16ms to trigger) and run your workload quantum within its event handler.
So if you simply had a loop that currently iterates 100,000 times doing something you might break it up so that it runs 500 times for each Timer tick. The quantum size will probably need to be tuned based on what is done within the loop - 500 is just a value chosen for illustration. You'll want to adjust this until it leaves the UI responsive without starving your background workload too much (slowing completion down).
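The shape of that approach, sketched in Ruby purely for illustration (in VB6 the tick method below would be the Timer control's event procedure, and the saved index would live in Form-level variables):

# Toy "quantum of work" runner: process a big workload one slice at a time,
# keeping the position between ticks so the UI is never held up for long.
class QuantumWorker
  QUANTUM = 500                           # iterations per tick; tune for responsiveness

  def initialize(items)
    @items = items
    @next_index = 0                       # workload state carried between ticks
  end

  def finished?
    @next_index >= @items.size
  end

  def on_timer_tick                       # in VB6 this would be the Timer event handler
    last = [@next_index + QUANTUM, @items.size].min
    (@next_index...last).each { |i| process(@items[i]) }
    @next_index = last
  end

  def process(item)
    item * 2                              # stand-in for the real per-item work
  end
end

worker = QuantumWorker.new((1..100_000).to_a)
worker.on_timer_tick until worker.finished?   # the Timer control would drive these ticks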
If your code runs for a long time without calling DoEvents or returning to the message loop, then your app won't even know the button has been pressed. The DoEvents call allows Windows, and your application, to catch up on all pending notifications.
The correct way to resolve this is a worker thread (see this article on how to do something like this in VB6), but failing that, a periodic DoEvents is required and, in turn, some re-entrancy blocking on the call into the long-running code.

How to run multiple threads at the same time in ruby while working with a file?

I've been messing around with Ruby and threading a little bit today. I have a list of proxies that I want to check. Assuming a timeout of 10 seconds, going through a very large list of proxies will take many hours if I write something like:
proxies.each do |proxy|
check_proxy(proxy)
end
My first problem with trying to figure out threads is how to START multiple at the same exact time. I found a neat little snippet of code online:
require 'open-uri'
require 'hpricot'

threads = []
for page in pages
  threads << Thread.new(page) { |myPage|
    puts "Fetching: #{myPage}\n"
    doc = Hpricot(open(myPage.to_s)).to_s
    puts "Got #{myPage}: #{doc.size}"
  }
end
threads.each(&:join)   # wait for all the fetches to finish
Seems to work nicely as far as starting them all at the same time. So now I can... start checking all 7 thousand records at the same time?
How do I go to a file, take out a line for each thread, run a batch of like 20 and repeat the process?
Can I run a while loop that in turn starts 20 threads at the same time (which remove lines from a file) and keeps going until the file is empty?
I'm a little weak on the logic of what I'm supposed to do.
Thanks guys!
PS.
Another thought: Will there be file access issues if 20 workers are constantly messing with it randomly? What would be a good way around that if this is so?
The keyword you are after is threadpool. You can either try to find one for Ruby (I am sure there are a couple at least on GitHub), or roll your own.
Here's a simple implementation on SO.
Re: the file access, IMO you shouldn't let workers alter the file directly, but do it in your main thread. You don't want to allow simultaneous edits there.
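A minimal roll-your-own pool along those lines, sized to the "batch of 20" from the question. The proxies.txt / good_proxies.txt file names and the check_proxy stub are assumptions, and only the main thread ever touches the files:

POOL_SIZE = 20                              # the "batch of 20" from the question

def check_proxy(proxy)                      # stand-in for your real proxy check
  sleep 0.1                                 # simulate the connection attempt / timeout
  proxy.length.even?                        # pretend result
end

work    = Queue.new                         # Queue is built in on modern Rubies;
results = Queue.new                         # very old ones may need: require 'thread'

# Only the main thread reads the file, so there are no concurrent-edit issues.
File.foreach('proxies.txt') { |line| work << line.strip }

workers = Array.new(POOL_SIZE) do
  Thread.new do
    loop do
      proxy = begin
        work.pop(true)                      # non-blocking; raises when the queue is drained
      rescue ThreadError
        break
      end
      results << [proxy, check_proxy(proxy)]
    end
  end
end

workers.each(&:join)
File.open('good_proxies.txt', 'w') do |out| # again, only the main thread writes
  results.size.times do
    proxy, ok = results.pop
    out.puts(proxy) if ok
  end
end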
Try using the delayed_job gem:
https://github.com/tobi/delayed_job
You don't need to generate that many threads in order to do this work. In fact, generating a lot of threads can decrease the overall performance of your application. If you handle checking each proxy asynchronously, without blocking, you can get by with far fewer threads.
You'd create a file manager thread to process the file. Each line gets added as a request to an array (request queue). On the other end of the request queue you can use eventmachine to send the requests without blocking. eventmachine would also be used to receive the responses and handle the timeout. The responses can then be placed on another array (response queue) which your file manager thread polls. The file manager thread pulls the responses from the response queue and resolves whether each proxy is usable or not.
This gets you down to creating just two threads. One issue that you will have is limiting the number of outstanding requests, since this model would be able to send out all of the requests in less than a second and flood the nearest router. In my experience you should be able to have around 500 outstanding requests at any one time.
There is more than one way to solve this problem asynchronously but hopefully the above is enough to help get you started with non-blocking I/O.
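A stripped-down sketch of that two-queue layout using plain Ruby threads, just to show how the data flows; eventmachine's non-blocking I/O is replaced by a stub here, and the file names are made up:

requests  = Queue.new                       # file manager -> sender
responses = Queue.new                       # sender -> file manager

# Sender thread: in the answer above this is where eventmachine would fire the
# requests without blocking; a plain thread with a stub keeps the sketch simple.
sender = Thread.new do
  while (proxy = requests.pop)              # nil is the "no more work" signal
    alive = proxy.length.even?              # stand-in for the real, non-blocking check
    responses << [proxy, alive]
  end
  responses << nil                          # tell the file manager we're done
end

# File manager (main thread): owns the file at both ends.
File.foreach('proxies.txt') { |line| requests << line.strip }
requests << nil

File.open('live_proxies.txt', 'w') do |out|
  while (result = responses.pop)
    proxy, alive = result
    out.puts(proxy) if alive                # resolve whether the proxy is usable
  end
end

sender.join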

WP7 Max HTTPWebRequests

This is kind of a 2-part question:
1) Is there a max number of HttpWebRequests that can be run at the same time in WP7?
I'm going to create a ScheduledTaskAgent to run a PeriodicTask. There will be 2 different REST service calls: the first one will get a list of IDs for records that need to be downloaded, and the second service will be used to download those records one at a time. I don't know how many records there will be; my guesstimate would be around 50.
2) Would making all the individual record requests at once be a bad idea (assuming that it's possible), or should I wait for a request to finish before starting another?
Having just spent a week and a half working at getting a BackgroundAgent to stay within its memory limits, I would suggest doing them one at a time.
You lose about half your memory to system libraries and the like; your first web request will take nearly another 20%, but that memory seems to be reused on subsequent requests.
If you need to store the results into a local database, it is going to take a good chunk more. I have found a CompiledQuery uses less memory, which means holding a single instance of your context.
Between each call I would suggest doing a GC.Collect(); I even add a short Thread.Sleep() just to be sure the process has some time to tidy things up.
Another thing I do is track how much memory I am using and attempt to exit gracefully when I get to around 97 or 98%.
You cannot use the debugger to test memory limits, as the debug memory is much higher and the limits are not enforced. However, for comparative testing between versions of your code, the debugger does produce very similar results on subsequent runs over the same code.
You can track your memory usage with Microsoft.Phone.Info.DeviceStatus.ApplicationCurrentMemoryUsage and Microsoft.Phone.Info.DeviceStatus.ApplicationMemoryUsageLimit.
I write a status log into IsolatedStorage so I can see the result of runs on the phone, and use ScheduledActionService.LaunchForTest() to kick them off. I then use ShellToast notifications to let me know when the task runs and also when it completes; that way I can launch my app to read the status log without interrupting it.
Tyler,
My 2 cents here.
I don't believe there is any restriction on how many HttpWebRequests you can spin up. These, however, have to be async, of course, and may be served from the browser stack. Most modern browsers, including IE9, handle over 5 concurrent requests to the same domain, but you are not guaranteed a request handle immediately. However, it should not matter if you are willing to wait on a separate thread, dump your content onto the request pipe and wait for the response on yet another thread. This post (here) has a nice walkthrough of why we need to do this.
Nothing wrong with this approach either, IMO. You're just going to have to wait until all the requests have their respective pipelines and then wait for the responses.
Thanks!
1) Your memory limit in a PeriodicTask or ResourceIntensiveTask is 5 MB, so you definitely should control your requests really carefully. I don't think there is a limit in the code.
2) You have only 5 MB, so if you start all your requests at the same time the task will be terminated immediately.
3) I think you would be better off using a ResourceIntensiveTask, because a PeriodicTask should only run for 15 seconds.
Good guide for Multitasking features in Mango: http://blogs.infosupport.com/blogs/alexb/archive/2011/05/26/multi-tasking-in-windows-phone-7-1.aspx
I seem to remember (but can't find the reference right now) that the maximum number of requests that the OS can make at once is 7. You should avoid making this many at once though as it will stop other/system apps from being able to make requests.

Clarifying... So Background Jobs don't Tie Up Application Resources (in Rails)?

I'm trying to get a better grasp of the inner workings of background jobs and how they improve performance.
I understand that the goal is to have the application return a response to the user as fast as it can, so you don't want to, say, parse a huge feed that would take 10 seconds because it would prevent the application from being able to process any other requests.
So it's recommended to put any operation that takes more than, say, 500 ms to execute into a queued background job.
What I don't understand is: doesn't that just delay the same problem? I know the user who invoked that background job will get an immediate response, but what if another user comes along right when that background job starts (and it takes 10 seconds to finish)? Won't that user have to wait?
Or is the main issue that requests are the only thing that has to happen one at a time, while a request can start even when one or more background jobs are in the middle of running?
Is that correct?
The idea of a background process is that it takes care of all the long running processes.
Basically, it is an external application running outside of the webserver, with one or several worker processes that handle the requests.
So it doesn't matter if another user requests a page while the job is running: since the job is not occupying the webserver, that user will not have to wait for anything to finish.
If that user also does something that gets put in the background queue, then it will just stack up there until the first job is finished (or, where there are multiple worker processes, until one becomes available).
Hope this explanation makes it a bit clearer :)
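For a concrete picture, this is roughly what the hand-off looks like with the delayed_job gem mentioned earlier (the Feed model and its parse! method are made-up examples): the web request only enqueues the work and returns immediately, and a separate worker process runs it.

# app/controllers/feeds_controller.rb (hypothetical example)
class FeedsController < ApplicationController
  def create
    feed = Feed.create!(url: params[:url])
    feed.delay.parse!        # delayed_job: enqueue the slow work instead of running it inline
    head :accepted           # respond right away; a worker does the parsing later
  end
end

# The web process never runs parse!; a worker started with
#   rake jobs:work
# pulls jobs off the queue, so other requests are never stuck behind the 10-second parse.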
