How to use non-blocking MPI-IO to overlap computation and file operations?

I use MPI_File_iwrite with the DAOS file system (a distributed file system).
MPI_File_iwrite();
// computation, long enough to cover the write operation
MPI_Wait(); // to check whether the MPI_File_iwrite has completed
Here the distributed file system (one server node and three storage nodes) can act as a coprocessor, so there is no need to use multiple threads. Since MPI_File_iwrite is a non-blocking operation, the I/O and the computation should overlap; that is, MPI_Wait() should return almost immediately. However, MPI_Wait() takes a long time (roughly the same time the write operation itself should take). Why?
When I use MPI_Test instead, it also takes a long time, even though it should return immediately regardless of whether the operation has completed.
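For reference, a minimal sketch of the overlap pattern I mean (the file name, buffer size, and compute step are placeholders, not my real code):

#include <mpi.h>
#include <vector>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    std::vector<double> buf(1 << 20, 1.0);       // placeholder data

    MPI_Request req;
    MPI_File_iwrite(fh, buf.data(), (int)buf.size(), MPI_DOUBLE, &req);  // should return at once

    // ... computation here, long enough to hide the write ...

    MPI_Wait(&req, MPI_STATUS_IGNORE);           // expected to return almost immediately

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}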
This problem is similar to mine

Related

performance of running select queries in parallel

(Note: this isn't about parallel execution of a query inside the RDBMS, but about the performance characteristics of submitting queries in parallel.)
I have a process that executes thousands (if not tens of thousands) of queries in a single-threaded manner (i.e. send, wait for response, process, send, ...), loosely of the form
select a,b from table where id = 123
i.e. query a single record on an already indexed field
on an Oracle database.
This process takes longer than desired, and after gathering some metrics I'm fairly sure that 90% of the time is spent in server-side execution (and transport) rather than on the client side.
The process naturally splits into N 'jobs', and it's been suggested that this could/should speed things up.
Naively you would expect it to run N times quicker (with a small overhead to merge the answers).
Given that (loosely speaking) SQL is 'serialised', is this actually the case? That would imply it probably would not run any quicker at all.
I assume that N updates to a single record (for example) would have to be effectively serialised, but for N reads this may not be the case.
Which theory is most accurate (or is it neither)?
I'm not a DBA, but it looks like reads never block other reads, so assuming infinite resources the theory would be that N reads can run completely in parallel with no blocking. For mixes of reads and writes it gets more complex, depending on how you set up your transactions/locks, but that's out of scope here.
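To illustrate the suggested split, a rough sketch (the Connection and fetch_record types are hypothetical stand-ins for whatever Oracle client API is in use; the point is that each of the N workers gets its own session and a slice of the IDs):

#include <future>
#include <string>
#include <vector>

// Hypothetical stand-ins for the real Oracle client calls; each worker
// must use its own connection/session.
struct Record { std::string a; long b; };
struct Connection { /* per-worker session handle */ };

Record fetch_record(Connection &, long id)
{
    // Real code would run: select a, b from table where id = :id
    return Record{"dummy", id};
}

std::vector<Record> fetch_all(const std::vector<long> &ids, int n_workers)
{
    std::vector<std::future<std::vector<Record>>> parts;
    for (int w = 0; w < n_workers; ++w) {
        parts.push_back(std::async(std::launch::async, [&, w] {
            Connection conn;                     // one session per worker
            std::vector<Record> out;
            for (size_t i = w; i < ids.size(); i += n_workers)
                out.push_back(fetch_record(conn, ids[i]));
            return out;
        }));
    }
    std::vector<Record> merged;                  // small merge overhead
    for (auto &p : parts) {
        std::vector<Record> part = p.get();
        merged.insert(merged.end(), part.begin(), part.end());
    }
    return merged;
}

int main()
{
    std::vector<long> ids(10000);
    for (long i = 0; i < (long)ids.size(); ++i) ids[i] = i;
    return fetch_all(ids, 8).size() == ids.size() ? 0 : 1;   // e.g. N = 8 jobs
}

If the reads really don't block each other, the wall-clock time should approach 1/N of the serial time, minus the per-session overhead.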

Searching an algorithm similar to producer-consumer

I would like to ask whether someone has an idea of the best (fastest) algorithm for the following scenario:
X processes generate a list of very large files. Each process generates one file at a time
Y processes are being notified that a file is ready. Each Y process has its own queue to collect the notifications
At any given time, one X process notifies one Y process through a load balancer that uses a round-robin algorithm
Each file has a size, and naturally bigger files will keep both X and Y busier
Limitations
Once a file gets on a Y process it would be impractical to remove it and move it to another Y process.
I can't think of other limitations at the moment.
Disadvantages of this approach
Sometimes X falls behind (files are no longer pushed). This isn't really affected by the queueing system; no matter how I change it, X will still have slow and fast periods.
Sometimes Y falls behind (a lot of files gather in the queues). Again, the same as before.
One Y process is busy with a very large file while several small files sit in its queue that could be taken on by other Y processes.
The notification itself goes over HTTP and sometimes seems unreliable: notifications fail and debugging has not revealed anything.
Here are some more details that help to see the picture more clearly.
Y processes are DB threads/jobs
X processes are web apps
Once files reach the X processes, these also burn resources on the DB side by querying it, which has an impact on the producing part.
Now I considered the following approach:
X will produce files as before but will not notify Y. It will hold a buffer (a table) that it populates with the file list.
Y will constantly look for files in the buffer, retrieve them itself, and store them in its own queue.
Now, would this change be practical? As I said, each Y process has its own queue, and it no longer seems efficient to keep it. If so, I'm still undecided on the next bit:
How to decide which files to fetch
I've read about the knapsack problem, and I think it would apply if I had the entire list of files from the beginning, which I don't. Actually, I do have the list and the size of each file, but I don't know when each file will be ready to be taken.
I've gone through the producer-consumer problem, but that centres on a fixed buffer and optimising its use, whereas in this scenario the buffer is unbounded and I don't really care whether it is large or small.
The next best option would be a greedy approach where each Y process locks the smallest available file and takes it. At first glance it does appear to be the fastest approach, and I'm currently building a simulation to verify that, but a second opinion would be fantastic.
Update
Just to be sure that everyone gets the big picture, I'm linking here a quickly drawn diagram.
Jobs are independent of Processes. They run at their own speed and process as many files as possible.
When a Job finishes with a file it sends an HTTP request to the LB.
Each process queues requests (files) coming from the LB.
The LB works on a round-robin rule.
Diagram
The current LB idea is not good
The load balancer as you've described it is a bad idea because it's needlessly required to predict the future, which you are saying is impossible. Moreover, round-robin is a terrible scheduling strategy when jobs have varying lengths.
Just have consumers notify the LB when they're idle. When a new job arrives from a producer, it selects the first idle consumer and sends the job there. When there are no idle consumers, the producer's job is queued in the LB waiting for a free consumer to appear.
This way consumers will always be optimally busy.
You say "Having one queue to serve 100 apps (for example) would be inefficient." This is a huge leap of intuition that's probably wrong. A work queue that's only handling file names can be fast. It only needs to be 100 times faster (since you imply there are 100 consumers) than the average "very large file" handling operation, and file handling normally takes tenths of a second or seconds. A queue handler based on, say, an Apache module or Redis (to name two random choices) could easily serve 10,000 requests per second, which is a factor of 10 away from being a bottleneck.
If you select from idle consumers on a FIFO basis, the behavior will be round-robin when all jobs are equal length.
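A minimal sketch of this idle-consumer scheme (the Job and ConsumerId types and the dispatch call are placeholders; the point is the idle-consumer FIFO plus a pending-job queue inside the LB):

#include <optional>
#include <queue>

struct Job { /* e.g. the file name */ };
struct ConsumerId { int id; };

class LoadBalancer {
    std::queue<ConsumerId> idle_;     // consumers that reported "I'm idle"
    std::queue<Job> pending_;         // jobs waiting for a free consumer
public:
    // A consumer calls this when it has nothing to do.
    std::optional<Job> consumer_idle(ConsumerId c) {
        if (!pending_.empty()) {              // hand it queued work immediately
            Job j = pending_.front();
            pending_.pop();
            return j;
        }
        idle_.push(c);                        // otherwise remember it as idle
        return std::nullopt;
    }
    // A producer calls this when a new file is ready.
    void job_ready(Job j) {
        if (!idle_.empty()) {                 // send to the first idle consumer
            ConsumerId c = idle_.front();
            idle_.pop();
            dispatch(c, j);                   // e.g. the existing HTTP notification
        } else {
            pending_.push(j);                 // no idle consumer: queue it in the LB
        }
    }
private:
    void dispatch(ConsumerId, const Job &) { /* send the notification */ }
};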
If the LB absolutely cannot queue work
Then let Ty(t) be the total future time needed to complete the work in the queue of consumer y at the current epoch t. The LB's goal is to make the Ty(t) values equal for all y and t. This is the ideal.
To get as close as possible to the ideal, it needs an internal model to compute these Ty(t) values. When a new job arrives from a producer at epoch t, it finds the consumer y with the minimum Ty(t) value, assigns the job to this y, and adjusts the model accordingly. This is a variation of the "least time remaining" scheduling strategy, which is optimal for this situation.
The model must inevitably be an approximation. The quality of the approximation will determine its usefulness.
A standard approach (e.g. from OS scheduling) is to maintain a pair [t, T]_y for each consumer y, where T is the estimate of Ty(t) computed at the past epoch t. At a later epoch t+d we can then estimate Ty(t+d) as max(T-d, 0). The max is there because for d>T the estimated work has expired, so the consumer should already be finished.
The LB uses whatever information it can get to update the model. Examples are estimates of the time a job will require (from your description, probably based on file size and other characteristics), notification that a consumer has actually finished a job (the LB decreases T by the estimated duration of the completed job and updates t), assignment of a new job (the LB increases T by the estimated duration of the new job and updates t), and intermediate progress updates of estimated time remaining from consumers during long jobs.
If the information available to the LB is detailed, you will want to replace the total time T in the [t, T]_y pair with a more complete model of the work queued at y: for example a list of estimated job durations, where the head of the list is the one currently being executed.
The more accurate the LB model, the less likely a consumer will starve when work is available, which is what you are trying to avoid.
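A compact sketch of that bookkeeping (the clock type and job-duration estimates are placeholders; the LB picks the consumer with the smallest estimated remaining time and then adds the new job's estimated duration to its model):

#include <algorithm>
#include <chrono>
#include <vector>

using Clock = std::chrono::steady_clock;
using Seconds = std::chrono::duration<double>;

struct ConsumerModel {
    Clock::time_point t;   // epoch at which T was last computed
    double T = 0.0;        // estimated remaining work (seconds) at epoch t
};

// Estimated remaining work right now: max(T - d, 0), where d is the time since t.
double remaining(const ConsumerModel &m, Clock::time_point now) {
    double d = Seconds(now - m.t).count();
    return std::max(m.T - d, 0.0);
}

// Assign a job with estimated duration `est` to the consumer with the minimum
// estimated remaining time ("least time remaining"). Assumes at least one consumer.
int assign(std::vector<ConsumerModel> &consumers, double est) {
    Clock::time_point now = Clock::now();
    int best = 0;
    double best_T = remaining(consumers[0], now);
    for (int y = 1; y < (int)consumers.size(); ++y) {
        double Ty = remaining(consumers[y], now);
        if (Ty < best_T) { best = y; best_T = Ty; }
    }
    consumers[best].t = now;            // update the pair [t, T]_y for the chosen y
    consumers[best].T = best_T + est;
    return best;                        // index of the chosen consumer
}

A completion notification would update the same pair in the opposite direction: decrease T by the completed job's estimated duration and reset t.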

General approach to count word occurrence in large number of files

This is sort of an algorithm question. To make it clear, I'm not interested in working code but in how to approach the task generally.
We have a server with 4 CPUs and no databases. There are 100,000 HTML documents stored on disk, each 2 MB in size. We need an efficient way to count the occurrences of the word "CAMERA" (case-insensitive) in that collection.
My approach would be to
parse the HTML document to extract only words
and then sort the words,
then use binary search on that collection.
In other words, I would create threads to use all 4 CPUs to parse the HTML documents into a single large word-collection text file, then sort it, and then use binary search.
What do you think of this?
Have you tried grep? That's what I would do.
It will probably take some experimentation to figure out the right way to feed it that much data, and you'll want to make sure ahead of time that the results will come out right, because the run is going to take a little while.
I would not recommend sorting that much data.
Well, this is not a complete pseudocode answer, but I don't think there is one. To get optimal performance you need to know a LOT about your hardware architecture. Here are my notes:
There is no need to sort the data at all, nor to use binary search. Just read the files (each one sequentially from disk) and, while doing so, search for occurrences of the word "camera".
The bottleneck in the program will most likely be I/O (disk reads), since disk access is MUCH slower than CPU calculations. So, to optimize the program, one should focus on optimizing the disk reads.
To optimize the disk reads, one should know the architecture of the storage. For example, if you have only one disk (and no RAID), there is really no point in multithreading, since the disk can only process a single request at a time. If that is the case, use a single thread.
However, if you have multiple disks, it does not matter how many cores you have: you should spawn #disks threads (assuming the files are evenly distributed among the disks). Since the disks are the bottleneck, having multiple threads concurrently request data from them keeps all of them busy and significantly reduces the total time.
Something like this?
htmlDocuments = getPathsOfHtmlDocuments()
threadsafe counter = new Counter(0)
scheduler = scheduler with max 4 threads
for (htmlDocument : htmlDocuments) {
    scheduler.schedule(new SearchForCameraJob("Camera", htmlDocument, counter))
}
wait while scheduler.hasUnfinishedJobs
print "Found camera " + counter + " times"

class SearchForCameraJob(searchString, pathToFile, counter) {
    document = readFile(pathToFile)
    while (document.findNext(searchString)) {
        counter.increment()
    }
}
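For comparison, a runnable sketch of the same idea in C++ (the thread count and the hard-coded paths are placeholders; it does a case-insensitive substring count, so e.g. "cameras" is counted too):

#include <atomic>
#include <cctype>
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <thread>
#include <vector>

// Count non-overlapping, case-insensitive occurrences of `word` in `text`.
static size_t count_occurrences(const std::string &text, const std::string &word) {
    std::string lower(text);
    for (char &c : lower) c = (char)std::tolower((unsigned char)c);
    size_t count = 0, pos = 0;
    while ((pos = lower.find(word, pos)) != std::string::npos) {
        ++count;
        pos += word.size();
    }
    return count;
}

int main() {
    // Placeholder: in practice, enumerate the 100,000 HTML files on disk.
    std::vector<std::string> paths = {"doc1.html", "doc2.html"};
    const std::string word = "camera";            // search term, lower-cased
    const unsigned n_threads = 4;                 // e.g. one per CPU (or per disk)

    std::atomic<size_t> total{0};
    std::vector<std::thread> workers;
    for (unsigned w = 0; w < n_threads; ++w) {
        workers.emplace_back([&, w] {
            for (size_t i = w; i < paths.size(); i += n_threads) {
                std::ifstream in(paths[i], std::ios::binary);
                std::string text((std::istreambuf_iterator<char>(in)),
                                 std::istreambuf_iterator<char>());
                total += count_occurrences(text, word);
            }
        });
    }
    for (auto &t : workers) t.join();
    std::cout << "Found \"camera\" " << total << " times\n";
    return 0;
}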
If your documents are located on single local hard drive, you will be constrained by I/O, not CPU.
I would use the very simple approach of serially loading every file into memory, scanning it for the target word, and increasing a counter.
If you try to use 4 threads in an attempt to speed it up (say, 25,000 files per thread), it will likely make things slower, because I/O does not like overlapping access patterns from competing processes/threads.
If, however, the files are spread across multiple hard drives, you should start as many threads as you have drives, and each thread should read data from its drive only.
You can use the Boyer-Moore algorithm. It is difficult to say which programming language is right for this kind of application, but you could write it in C++ so as to optimize the native code directly. Obviously you will want to use multithreading.
Among the HTML document parsing libraries, you could choose Xerces-C++.

How can I determine the appropriate number of tasks with GCD or similar?

I very often encounter situations where I have a large number of small operations that I want to carry out independently. In these cases, the number of operations is so large compared to the time each operation takes that simply creating a task per operation is inappropriate due to overhead, even though GCD overhead is typically low.
So what you'd want to do is split the operations into chunks, where each task operates on one chunk. But how can I determine the appropriate number of tasks/chunks?
Testing and profiling. What makes sense and what works well is application-specific.
Basically you need to decide on two things:
The number of worker processes/threads to generate
The size of the chunks they will work on
Play with the two numbers, and calculate their throughput (tasks completed per second * number of workers). Somewhere you'll find a good equilibrium between speed, number of workers, and number of tasks in a chunk.
You can make finding the right balance even simpler by feeding your workers a bunch of test data, essentially a benchmark, and measuring their throughput automatically while adjusting these two variables. Record the throughput for each combination of worker size/task chunk size, and output it at the end. The highest throughput is your best combination.
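As a concrete illustration of the chunking itself, a minimal GCD sketch using dispatch_apply_f (the element count, the per-element work, and the chunk count are placeholders; the chunk count is exactly the knob you would tune with the benchmark described above):

#include <dispatch/dispatch.h>
#include <cstddef>
#include <vector>

struct ChunkContext {
    std::vector<double> *data;   // input/output of the many small operations
    size_t chunk_count;          // the tunable number of chunks/tasks
};

// Each dispatch_apply_f invocation handles one chunk instead of one element.
static void process_chunk(void *ctx_ptr, size_t chunk) {
    ChunkContext *ctx = static_cast<ChunkContext *>(ctx_ptr);
    size_t n = ctx->data->size();
    size_t begin = chunk * n / ctx->chunk_count;
    size_t end = (chunk + 1) * n / ctx->chunk_count;
    for (size_t i = begin; i < end; ++i)
        (*ctx->data)[i] *= 2.0;                  // placeholder "small operation"
}

int main() {
    std::vector<double> data(1000000, 1.0);      // many small operations
    ChunkContext ctx{&data, 64};                 // chunk count: tune by benchmarking

    dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_apply_f(ctx.chunk_count, q, &ctx, process_chunk);   // blocks until all chunks finish
    return 0;
}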
Finally, if how long a particular task takes really depends on the task itself (e.g. some tasks take X time while others take 3X), you can take a couple of approaches. Depending on the nature of your incoming work, try one of the following:
Feed your benchmark historical data - a bunch of real-world data to be processed that represents the actual kind of work that will come into your worker grid, and measure throughput using that example data.
Generate random-sized tasks that cross the spectrum of what you think you'll see, and pick the combination that seems to work best on average, across multiple sizes of tasks
If you can inspect the data in a task, and the data gives you an idea of whether that task will take X time or 3X (or something in between), you can use that information before processing the tasks to dynamically adjust the worker/chunk size and achieve the best throughput for the current workload. This is the approach taken with Amazon EC2, for example, where customers spin up extra VMs when needed to handle higher load and spin them back down when load drops.
Whatever you choose, any unknown speed question should almost always involve some kind of benchmarking, if the speed at which it runs is critical to the success of your application (sometimes the processing time is so small that it's negligible).
Good luck!

Prioritizing Erlang nodes

Assuming I have a cluster of n Erlang nodes, some of which may be on my LAN, while others may be connected using a WAN (that is, via the Internet), what are suitable mechanisms to cater for a) different bandwidth availability/behavior (for example, latency induced) and b) nodes with differing computational power (or even memory constraints for that matter)?
In other words, how do I prioritize local nodes that have lots of computational power over those that have high latency and may be less powerful? And how would I ideally prioritize high-performance remote nodes with high transmission latencies so that they specifically handle processes with a relatively high computation-to-transmission ratio (that is, completed work per message, per time unit)?
I am mostly thinking in terms of benchmarking each node in the cluster by sending it a benchmark process to run during initialization, so that the latencies involved in messaging can be measured, as well as the overall computation speed (that is, using a node-specific timer to determine how fast a node completes a given task).
Probably something like that would have to be done repeatedly, on the one hand to get representative (that is, averaged) data, and on the other hand because it might even be useful at runtime to adjust dynamically to changing conditions.
(In the same sense, one would probably want to prioritize locally running nodes over those running on other machines)
This would be meant to hopefully optimize internal job dispatch so that specific nodes handle specific jobs.
We've done something similar to this, on our internal LAN/WAN only (WAN being for instance San Francisco to London). The problem boiled down to a combination of these factors:
The overhead in simply making a remote call over a local (internal) call
The network latency to the node (as a function of the request/result payload)
The performance of the remote node
The compute power needed to execute the function
Whether batching of calls provides any performance improvement if there was a shared "static" data set.
For 1. we assumed no overhead (it was negligible compared to the others)
For 2. we actively measured it using probe messages to measure round trip time, and we collated information from actual calls made
For 3. we measured it on the node and had the nodes broadcast that information (this changed depending on the load currently active on the node)
For 4 and 5. we worked it out empirically for the given batch
Then the caller solved to get the minimum solution for a batch of calls (in our case pricing a whole bunch of derivatives) and fired them off to the nodes in batches.
We got much better utilization of our calculation "grid" using this technique but it was quite a bit of effort. We had the added advantage that the grid was only used by this environment so we had a lot more control. Adding in an internet mix (variable latency) and other users of the grid (variable performance) would only increase the complexity with possible diminishing returns...
The problem you are talking about has been tackled in many different ways in the context of grid computing (e.g., see Condor). To discuss this more thoroughly, I think some additional information is required (homogeneity of the problems to be solved, degree of control over the nodes [i.e. is there unexpected external load, etc.?]).
Implementing an adaptive job dispatcher will usually also require adjusting the frequency with which you probe the available resources (otherwise the overhead due to probing could exceed the performance gains).
Ideally, you might be able to use benchmark tests to come up with an empirical (statistical) model that allows you to predict the computational hardness of a given problem (requires good domain knowledge and problem features that have a high impact on execution speed and are simple to extract), and another one to predict communication overhead. Using both in combination should make it possible to implement a simple dispatcher that bases its decisions on the predictive models and improves them by taking into account actual execution times as feedback/reward (e.g., via reinforcement learning).
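As an abstract sketch of such a dispatcher (written in C++ rather than Erlang for illustration; the speed factor, latency, and cost estimates are placeholders for whatever empirical models you fit), the dispatch decision might combine predicted communication overhead, already-queued work, and compute time scaled by node speed:

#include <limits>
#include <vector>

struct Node {
    double speed_factor;     // measured relative compute speed (from benchmarking)
    double latency_s;        // measured round-trip latency, e.g. via probe messages
    double queued_work_s;    // estimated work already dispatched to this node
};

struct Job {
    double compute_cost_s;   // predicted compute time on a reference node
    double payload_factor;   // how strongly transfer cost scales with latency
};

// Pick the node that minimizes the predicted completion time for this job.
// Returns -1 if there are no nodes.
int pick_node(std::vector<Node> &nodes, const Job &job) {
    int best = -1;
    double best_cost = std::numeric_limits<double>::infinity();
    for (int i = 0; i < (int)nodes.size(); ++i) {
        const Node &n = nodes[i];
        double cost = n.latency_s * job.payload_factor       // communication overhead
                    + n.queued_work_s                        // work ahead of this job
                    + job.compute_cost_s / n.speed_factor;   // scaled compute time
        if (cost < best_cost) { best_cost = cost; best = i; }
    }
    if (best >= 0)   // feed the assignment back into the model
        nodes[best].queued_work_s += job.compute_cost_s / nodes[best].speed_factor;
    return best;
}

Actual execution times reported back by the nodes would then correct speed_factor and queued_work_s over time, which is where the feedback/reward loop mentioned above comes in.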
