I've been away from parallel programming for a long time and I am trying to figure out the best method for coordinating the exchange of large amounts of data between many processes with a complicated dependency structure. For example, I might want to send data to/from the following processes:
int process_1_dependencies[] = {2,3,5,6};
int process_2_dependencies[] = {1};
int process_3_dependencies[] = {1,4,5};
int process_4_dependencies[] = {3,5,6};
int process_5_dependencies[] = {1,3,4,6};
int process_6_dependencies[] = {1,4,5,7};
int process_7_dependencies[] = {6,8};
int process_8_dependencies[] = {7};
The obvious, and stupid, way of doing this would be to do something like:
for (int i = 0; i < world_size; i++)
{
    for (int j = 0; j < dependency_length; j++)
    {
        if (i == my_rank)
        {
            MPI_Irecv(..., source=dependency[j], ...)
        }
        else if (i == dependency[j])
        {
            MPI_Isend(..., dest=dependency[j], ...)
        }
    }
    // blocking stuff?
}
I'm not actually sure this would work once you have hundreds of communications in flight, and in any case it seems super inefficient. It's at least O(N), and it only allows a single process to be receiving at a time. A better way would be to use blocking and ensure that independent pairs of processes exchange information simultaneously. But that becomes quite complicated and requires optimizing which processes are simultaneously sending and receiving.
Am I just completely overthinking this? Is it safe to do something like the following (provided that every sending process has a receiving pair)?
for (int i = 0; i < dependency_length; i++)
{
    MPI_Isend(..., dest=dependency[i], ...)
    MPI_Irecv(..., source=dependency[i], ...)
}
// blocking stuff
Sorry for the lack of focus in the question. I'm away from my computer so I can't really test this out, and even if it did work, I'm not confident that it would scale and that the buffers would keep working for arbitrary numbers of processes.
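For concreteness, here's roughly what I have in mind with the "blocking stuff" filled in; this is an untested sketch, and the buffer layout, tag, and datatype are just illustrative:

#include <mpi.h>
#include <vector>

// Each rank posts a nonblocking send and receive for every one of its
// dependencies, then waits on all of them at once.
void exchange_with_dependencies(const std::vector<int>& deps,
                                std::vector<std::vector<double>>& sendbuf,
                                std::vector<std::vector<double>>& recvbuf,
                                int count)
{
    std::vector<MPI_Request> reqs(2 * deps.size());
    for (std::size_t i = 0; i < deps.size(); i++) {
        MPI_Isend(sendbuf[i].data(), count, MPI_DOUBLE, deps[i], 0,
                  MPI_COMM_WORLD, &reqs[2 * i]);
        MPI_Irecv(recvbuf[i].data(), count, MPI_DOUBLE, deps[i], 0,
                  MPI_COMM_WORLD, &reqs[2 * i + 1]);
    }
    // Buffers must stay untouched until the waits complete.
    MPI_Waitall(static_cast<int>(reqs.size()), reqs.data(),
                MPI_STATUSES_IGNORE);
}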
To avoid queueing a large number of messages and to avoid opaque deadlock problems, you can also employ a single call to MPI_Alltoallv, where all sends and receives are done for you automatically, and you can even hope (fingers crossed) that your MPI implementation is able to optimize all the communication on its own. The prototype is
MPI_Alltoallv
(
sendbuf, // buffer containing all data needed by other ranks in comm
sendcounts, // number of elements to send to each rank in comm
sdispls, // offsets in sendbuf per rank in comm
sendtype, // MPI datatype of the sent data
recvbuf, // buffer to contain all data needed by this rank
recvcounts, // number of elements to receive per rank in comm
rdispls, // offsets in recvbuf per rank in comm
recvtype, // MPI datatype of the received data
comm // the communicator
);
where sendcounts maps directly onto your process_X_dependencies: it would contain non-zero values at the positions listed in process_X_dependencies.
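As a minimal sketch, assuming for illustration that the same number of elements is exchanged with each dependency and that sendbuf is already packed per dependency in increasing rank order:

#include <mpi.h>
#include <vector>

// Build counts/offsets from the dependency list and do the whole exchange
// in one collective call.
void exchange_alltoallv(const std::vector<int>& deps, int world_size,
                        const std::vector<double>& sendbuf,
                        std::vector<double>& recvbuf, int count)
{
    std::vector<int> sendcounts(world_size, 0), recvcounts(world_size, 0);
    std::vector<int> sdispls(world_size, 0), rdispls(world_size, 0);
    for (int d : deps) {                 // non-zero only for dependencies
        sendcounts[d] = count;
        recvcounts[d] = count;
    }
    int s = 0, r = 0;
    for (int i = 0; i < world_size; i++) {   // prefix sums -> offsets
        sdispls[i] = s; s += sendcounts[i];
        rdispls[i] = r; r += recvcounts[i];
    }
    recvbuf.resize(r);
    MPI_Alltoallv(sendbuf.data(), sendcounts.data(), sdispls.data(), MPI_DOUBLE,
                  recvbuf.data(), recvcounts.data(), rdispls.data(), MPI_DOUBLE,
                  MPI_COMM_WORLD);
}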
Related
When the MPI_Send buffer size is 100 the program works, but it gets stuck when the size is 1000 or greater. Why?
if(id == 0){
rgb_image = stbi_load(argv[1], &width, &height, &bpp, CHANNEL_NUM);
for(int i = 0; i < size -1; i++)
MPI_Send(rgb_image,1000,MPI_UINT8_T,i,0,MPI_COMM_WORLD);
}
uint8_t *part = (uint8_t*) malloc(sizeof(uint8_t)*(1000));
if(id != size-1 && size > 1)
MPI_Recv(part,1000,MPI_UINT8_T,0,0,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
This program is not valid w.r.t. the MPI Standard, since there is no matching receive (on rank 0) for
MPI_Send(..., dest=0, ...)
MPI_Send() is allowed to block until a matching receive is posted (and that generally happens when the message is "large") ... and the required matching receive never gets posted.
A typical fix would be to issue an MPI_Irecv(..., src=0, ...) on rank 0 before the MPI_Send() (and an MPI_Wait() after), or to handle the 0 -> 0 communication with MPI_Sendrecv().
That being said, it would likely be more efficient to create a communicator with all the ranks minus the last one, and MPI_Bcast() in that communicator.
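For concreteness, a sketch of the first fix reusing the variables from the question; rank 0 posts its own receive before the send loop, so the 0 -> 0 message always has a matching receive:

if (id == 0 && size > 1) {
    MPI_Request self;
    MPI_Irecv(part, 1000, MPI_UINT8_T, 0, 0, MPI_COMM_WORLD, &self);
    for (int i = 0; i < size - 1; i++)
        MPI_Send(rgb_image, 1000, MPI_UINT8_T, i, 0, MPI_COMM_WORLD);
    MPI_Wait(&self, MPI_STATUS_IGNORE);
} else if (id != 0 && id != size - 1) {
    MPI_Recv(part, 1000, MPI_UINT8_T, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}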
If a program works for small buffers but not for large, you are probably running into "eager sends". Normally, a send & receive transaction involves the sender & receiver talking back and forth, confirming that the data went across. This is overhead, so for small messages, many MPIs will just send the data, without confirmation. The data then goes into some secret buffer on the receiver.
But this means that your program will "succeed" even though it's not a correct program, as is the case here. See @Giles's answer above.
So I've got N asynchronous, timestamped data streams. Each stream has a fixed-ish rate. I want to process all of the data, but the catch is that I must process the data in order as close to the time that the data arrived as possible (it is a real-time streaming application).
So far, my implementation has been to create a fixed window of K messages which I sort by timestamp using a priority queue. I then process the entirety of this queue in order before moving on to the next window. This is okay, but it's less than ideal because it creates lag proportional to the size of the buffer, and it will sometimes lead to dropped messages if a message arrives just after the end of the buffer has been processed. It looks something like this:
// Priority queue keeping track of the data in timestamp order.
ThreadSafePriorityQueue<Data> q;
// Fixed buffer size
int K = 10;
// The last successfully processed data timestamp
time_t lastTimestamp = -1;
// Called for each of the N data streams asyncronously
void receiveAsyncData(const Data& dat) {
q.push(dat.timestamp, dat);
if (q.size() > K) {
processQueue();
}
}
// Process all the data in the queue.
void processQueue() {
while (!q.empty()) {
const auto& data = q.top();
// If the data is too old, drop it.
if (data.timestamp < lastTimestamp) {
LOG("Dropping message. Too old.");
q.pop();
continue;
}
// Otherwise, process it.
processData(data);
lastTimestamp = data.timestamp;
q.pop();
}
}
Information about the data: they're guaranteed to be sorted within their own stream. Their rates are between 5 and 30 hz. They consist of images and other bits of data.
Some examples of why this is harder than it appears. Suppose I have two streams, A and B both running at 1 Hz and I get the data in the following order:
(stream, time)
(A, 2)
(B, 1.5)
(A, 3)
(B, 2.5)
(A, 4)
(B, 3.5)
(A, 5)
See how, if I processed the data in the order I received it, B would always get dropped? That's what I wanted to avoid. With my current algorithm, B gets dropped every 10th frame, and I process the data with a lag of 10 frames into the past.
I would suggest a producer/consumer structure. Have each stream put data into the queue, and a separate thread reading the queue. That is:
// your asynchronous update:
void receiveAsyncData(const Data& dat) {
q.push(dat.timestamp, dat);
}
// separate thread that processes the queue
void processQueue()
{
while (!stopRequested)
{
data = q.pop();
if (data.timestamp >= lastTimestamp)
{
processData(data);
lastTimestamp = data.timestamp;
}
}
}
This prevents the "lag" that you see in your current implementation when you're processing a batch.
The processQueue function is running in a separate, persistent thread. stopRequested is a flag that the program sets when it wants to shut down, forcing the thread to exit. Some people would use a volatile flag for this. I prefer to use something like a manual-reset event.
To make this work, you'll need a priority queue implementation that allows concurrent updates, or you'll need to wrap your queue with a synchronization lock. In particular, you want to make sure that q.pop() waits for the next item when the queue is empty. Or that you never call q.pop() when the queue is empty. I don't know the specifics of your ThreadSafePriorityQueue, so I can't really say exactly how you'd write that.
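For illustration only, since I don't know the specifics of your queue, a minimal sketch of a pop() that waits, built on std::priority_queue and a condition variable:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <vector>

// A priority queue whose pop() blocks until an item is available.
template <typename T, typename Compare>
class BlockingPriorityQueue {
public:
    void push(const T& item) {
        {
            std::lock_guard<std::mutex> lock(m_);
            heap_.push(item);
        }
        cv_.notify_one();       // wake a waiting consumer
    }
    T pop() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !heap_.empty(); });
        T item = heap_.top();
        heap_.pop();
        return item;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::priority_queue<T, std::vector<T>, Compare> heap_;
};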
The timestamp check is still necessary because it's possible for a later item to be processed before an earlier item. For example:
Event received from data stream 1, but thread is swapped out before it can be added to the queue.
Event received from data stream 2, and is added to the queue.
Event from data stream 2 is removed from the queue by the processQueue function.
Thread from step 1 above gets another time slice and item is added to the queue.
This isn't unusual, just infrequent. And the time difference will typically be on the order of microseconds.
If you regularly get updates out of order, then you can introduce an artificial delay. For example, in your updated question you show messages coming in out of order by 500 milliseconds. Let's assume that 500 milliseconds is the maximum tolerance you want to support. That is, if a message comes in more than 500 ms late, then it will get dropped.
What you do is add 500 ms to the timestamp when you add the thing to the priority queue. That is:
q.push(AddMs(dat.timestamp, 500), dat);
And in the loop that processes things, you don't dequeue something before its timestamp. Something like:
while (true)
{
if (q.peek().timestamp <= currentTime)
{
data = q.pop();
if (data.timestamp >= lastTimestamp)
{
processData(data);
lastTimestamp = data.timestamp;
}
    }
    // else: in practice, block or sleep briefly here instead of spinning
}
This introduces a 500 ms delay in the processing of all items, but it prevents dropping "late" updates that fall within the 500 ms threshold. You have to balance your desire for "real time" updates with your desire to prevent dropping updates.
There will always be a lag, and that lag will be determined by how long you're willing to wait for your slowest "fixed-ish rate" stream.
Suggestion:
keep the buffer
keep an array of bool flags with the meaning: "if position ix is true, the buffer contains at least one sample originating from stream ix"
sort/process as soon as you have all flags true
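A rough sketch of the idea, where N, Data, and sortAndProcess() are placeholders for your own definitions (locking omitted for brevity):

#include <algorithm>
#include <vector>

std::vector<bool> seen(N, false);   // seen[ix]: stream ix has contributed
std::vector<Data> buffer;

void onSample(int streamIx, const Data& d) {
    buffer.push_back(d);
    seen[streamIx] = true;
    if (std::all_of(seen.begin(), seen.end(), [](bool b) { return b; })) {
        sortAndProcess(buffer);     // sort this batch by timestamp, then emit
        std::fill(seen.begin(), seen.end(), false);
        buffer.clear();
    }
}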
Not foolproof (each buffer will be sorted, but from one buffer to another you may have timestamp inversions), but perhaps good enough?
Playing around with the count of "satisfied" flags required to trigger the processing (at step 3) can be used to make the lag smaller, but at the risk of more inter-buffer timestamp inversions. In the extreme, accepting processing with only one satisfied flag means "push a frame as soon as you receive it, timestamp sorting be damned".
I mention this to support my feeling that the lag/timestamp-inversion balance is inherent to your problem: except for absolutely equal frame rates, there will be no perfect solution in which neither side is sacrificed.
Since a "solution" will be an act of balancing, any solution will require gathering/using extra information to help decisions (e.g. that "array of flags"). If what I suggested sounds silly for your case (may well be, the details you chose to share aren't too many), start thinking what metrics will be relevant for your targeted level of "quality of experience" and use additional data structures to help gathering/processing/using those metrics.
Let's say I want to fragment some data units into packets (the max size per packet is, say, 1024 bytes). Each data unit can be of variable size, for example:
a = 20 bytes
b = 1000 bytes
c = 10 bytes
d = 800 bytes
Can anyone please suggest an efficient algorithm to create packets from such variable-sized data units while efficiently utilizing the bandwidth? I cannot split the individual data units into bytes; each one goes whole into a packet.
EDIT: The ordering of data units is of no concern!
There are several different ways, depending on your requirements and how much time you want to spend on it. The general problem, as @amit mentioned in the comments, is NP-hard. But you can get some improvement with some simple changes.
Before we go there, are you sure you really need to do this? Most networking layers have a packet-sized (or larger) buffer. When you write to the network, it puts your data in that buffer. If you don't fill the buffer completely, the code will delay briefly before sending. If you add more data during that delay, the new data is added to the buffer. The buffer is sent once it fills, or after the delay timeout expires.
So if you have a loop that writes one byte at a time to the network, it's not like you'll be creating a large number of one-byte packets.
On the receiving side, the lowest level networking layer receives an entire packet, but there's no guarantee that your call to receive the data will get the entire packet. That is, the sender might send an 800 byte packet, but on the receiving end the first call to read might only return 50 or 273 bytes.
This depends, of course, at what level you're reading the data. If you're talking about something like Java or .NET, where your interface to the network stack is through a socket, you almost certainly can't guarantee that a call to socket.Read() will return an entire packet.
Now, if you can guarantee that every call to read returns an entire packet, then the easiest way to pack things would be to serialize everything into one big buffer and then send it out in multiple 1,024-byte packets. You'll want to create a header at the front of the first packet that says how many total bytes will be sent, so the receiver knows what to expect. The result will be a bunch of 1,024-byte packets, potentially followed by a final packet that is somewhat smaller.
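For instance, a sketch of that approach, where sendPacket() stands in for whatever actually writes one packet to the network:

#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

void sendPacket(const uint8_t* data, std::size_t n);  // assumed elsewhere

// Prefix the serialized payload with a 4-byte total length, then send it
// out in 1024-byte packets; only the final packet may be short.
void sendAll(const std::vector<uint8_t>& payload) {
    const std::size_t PACKET_SIZE = 1024;
    std::vector<uint8_t> buf(4 + payload.size());
    uint32_t total = static_cast<uint32_t>(payload.size());
    std::memcpy(buf.data(), &total, 4);
    std::memcpy(buf.data() + 4, payload.data(), payload.size());
    for (std::size_t off = 0; off < buf.size(); off += PACKET_SIZE) {
        sendPacket(buf.data() + off, std::min(PACKET_SIZE, buf.size() - off));
    }
}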
If you want to make sure that a data object is fully contained within a single packet, then you have to do something like:
add a to buffer
if remaining buffer < size of b
send buffer
clear buffer
add b to buffer
if remaining buffer < size of c
send buffer
clear buffer
add c to buffer
... etc ...
Here's some simple JavaScript pseudo-code. The packets will stay ordered and, subject to that ordering constraint, the bandwidth will be used optimally.
packets = [];
PACKET_SIZE = 1024;
currentPacket = [];
function write(data) {
var len = currentPacket.length + data.length;
if(len < PACKET_SIZE) {
currentPacket = currentPacket.concat(data);
} else if(len === PACKET_SIZE) {
packets.push(currentPacket.concat(data));
currentPacket = [];
} else { // if(len > PACKET_SIZE) {
packets.push(currentPacket);
currentPacket = data;
}
}
function flush() {
if(currentPacket.length > 0) {
packets.push(currentPacket);
currentPacket = [];
}
}
write(data20bytes);
write(data1000bytes);
write(data10bytes);
write(data800bytes);
flush();
EDIT: Since you have all of the data chunks up front and you want to package them optimally without regard to order (bin packing), you're left with either trying every permutation of the chunks for an exact answer, or compromising with a best-guess/first-fit type algorithm.
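If you go the compromise route, a first-fit decreasing sketch might look like this (sort the chunks largest-first, then place each into the first packet with room; names are illustrative):

#include <algorithm>
#include <functional>
#include <vector>

std::vector<std::vector<int>> packFirstFitDecreasing(std::vector<int> sizes,
                                                     int capacity) {
    std::sort(sizes.begin(), sizes.end(), std::greater<int>());
    std::vector<std::vector<int>> packets;   // chunk sizes per packet
    std::vector<int> freeSpace;              // remaining room per packet
    for (int s : sizes) {
        bool placed = false;
        for (std::size_t i = 0; i < packets.size(); i++) {
            if (freeSpace[i] >= s) {         // first packet that fits
                packets[i].push_back(s);
                freeSpace[i] -= s;
                placed = true;
                break;
            }
        }
        if (!placed) {                       // open a new packet
            packets.push_back({s});
            freeSpace.push_back(capacity - s);
        }
    }
    return packets;
}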
I'm trying to solve exercise 5.10 of the book "Foundations of Multithreaded, Parallel, and Distributed Programming". The exercise is:
"Assume one producer process and N consumer processes share a bounded buffer having B slots. The producer deposits messages in the buffer; consumers fetch them. Every message deposited by the producer is to be received by all N consumers. Futthermore, each consumer is to receive the messages in the order theu were deposited. However, consumers can receive messages at different times. For example, one consumer could receive up to B more messages than another if the second consumer is slow.
Develop a monitor that implements this kind of communication. Use Signal and Continue discipline."
Can someone help me, please?
Thank you very much!
EDIT:
I'm adding below what I have already made (I left it out at first because I thought the question would get too long if I included everything).
/* creating a buffer of B positions. */
global buffer[B];
Monitor {
cond ok_write;
cond ok_read;
int stamp_buffer[B] = [0, 0, .., 0]
request_write (int pos){
if (stamp_buffer[pos] > 0)
wait(ok_write);
write_message (buffer[pos]);
stamp_buffer[pos] = N;
signalAll (ok_read);
}
request_read (int pos){
if (stamp_buffer[pos] == 0)
wait (ok_read);
stamp_buffer[pos] --;
}
release_read (int pos){
if (stamp_buffer[pos]==0)
signal(ok_write);
}
}
So I think that I still have this problem: a reader can read the same message two times.
The basic idea of my algorithm is:
The writer writes to a position pos and sets stamp[pos] to N.
Then, when each reader reads position pos, it decrements stamp[pos] by 1.
So, when stamp[pos] reaches zero, the message buffer[pos] has already been read N times and the writer can write to this position again.
But if some reader reads a message two times (or more), stamp[pos] reaches zero early, so the writer can write a new message to position pos and some reader will never see the old message.
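One direction I'm considering is giving each reader its own next-read index, so a reader can never fetch the same slot twice. Here's a rough sketch of what I mean in C++, using a mutex and condition variable instead of the book's monitor notation (conditions are re-checked in a loop, as Signal and Continue requires):

#include <algorithm>
#include <condition_variable>
#include <mutex>
#include <vector>

class BroadcastBuffer {
public:
    BroadcastBuffer(int B, int N) : B_(B), buf_(B), next_read_(N, 0) {}

    void deposit(int msg) {                      // producer
        std::unique_lock<std::mutex> lock(m_);
        // wait until the slowest consumer is within B messages
        cv_.wait(lock, [this] { return written_ - slowest() < B_; });
        buf_[written_ % B_] = msg;
        ++written_;
        cv_.notify_all();
    }
    int fetch(int consumer_id) {                 // each consumer uses its own id
        std::unique_lock<std::mutex> lock(m_);
        long& pos = next_read_[consumer_id];
        cv_.wait(lock, [&] { return pos < written_; });
        int msg = buf_[pos % B_];
        ++pos;                                   // this consumer never re-reads a slot
        cv_.notify_all();
        return msg;
    }
private:
    long slowest() const {
        return *std::min_element(next_read_.begin(), next_read_.end());
    }
    int B_;
    long written_ = 0;
    std::vector<int> buf_;
    std::vector<long> next_read_;
    std::mutex m_;
    std::condition_variable cv_;
};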
This is a bit of a side project I have taken on to solve a no-fix issue for work. Our system outputs a code to represent a combination of things on another thing. Some example codes are:
9-9-0-4-4-5-4-0-2-0-0-0-2-0-0-0-0-0-2-1-2-1-2-2-2-4
9-5-0-7-4-3-5-7-4-0-5-1-4-2-1-5-5-4-6-3-7-9-72
9-15-0-9-1-6-2-1-2-0-0-1-6-0-7
The max number in one of the slots I've seen so far is about 150 but they will likely go higher.
When the system was designed there was no requirement for what this code would look like. But now the client wants to be able to type it in by hand from a sheet of paper, something the code above isn't suited for. We've said we won't do anything about it, but it seems like a fun challenge to take on.
My question is: where is a good place to start losslessly compressing this code? Obvious solutions such as storing the code under a shorter key are not an option; our database is read-only. I need to build a two-way method to make this code more human-friendly.
1) I agree that you definitely need a checksum; data entry errors are very common, unless you have really well-trained staff and independent duplicate keying with automatic cross-checking.
2) I suggest Huffman coding (http://en.wikipedia.org/wiki/Huffman_coding) to turn your list of numbers into a stream of bits. To get the probabilities required for this, you need a decent-sized sample of real data, so you can make a count, setting Ni to the number of times number i appears in the data. Then I suggest setting Pi = (Ni + 1) / (Sum_i (Ni + 1)), which smooths the probabilities a bit; there's a sketch of this counting after this list. Also, with this method, if you see e.g. numbers 0-150, you can add a bit of slack by entering numbers 151-255 and setting them to Ni = 0. Another way round rare large numbers would be to add some sort of escape sequence.
3) Finding a way for people to type the resulting sequence of bits is really an applied psychology problem but here are some suggestions of ideas to pinch.
3a) Software licences - just encode six bits per character in some 64-character alphabet, but group characters in a way that makes it easier for people to keep place e.g. BC017-06777-14871-160C4
3b) UK car license plates. Use a change of alphabet to show people how to group characters e.g. ABCD0123EFGH4567IJKL...
3c) A really large alphabet - get yourself a list of 2^n words for some decent-sized n and encode n bits as a word, e.g. GREEN ENCHANTED LOGICIAN...
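For point 2, a minimal sketch of the smoothed count (assuming the numbers fit in 0-255; building the actual Huffman tree from these weights is a separate step):

#include <vector>

std::vector<double> smoothedProbabilities(const std::vector<int>& sample) {
    std::vector<long> counts(256, 0);
    for (int v : sample) counts[v]++;                 // Ni
    long total = 0;
    for (long n : counts) total += n + 1;             // Sum_i (Ni + 1)
    std::vector<double> p(256);
    for (int i = 0; i < 256; i++)
        p[i] = double(counts[i] + 1) / double(total); // Pi
    return p;
}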
I worried about this problem a while back. It turns out that you can't do much better than base64: trying to squeeze a few more bits per character isn't really worth the effort (once you get into "strange" numbers of bits, encoding and decoding become more complex). But at the same time, you end up with something that's likely to have errors when entered (confusing a 0 with an O, etc.). One option is to choose a modified set of characters and letters (so it's still base 64, but, say, you substitute ">" for "0"). Another is to add a checksum. Again, for simplicity of implementation, I felt the checksum approach was better.
Unfortunately I never got any further (things changed direction), so I can't offer code or a particular checksum choice.
PS: I realised there's a missing step I didn't explain: I was going to compress the text into some binary form before encoding (using some standard compression algorithm). So to summarize: compress, add checksum, base64 encode; to reverse: base64 decode, check the checksum, decompress.
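For concreteness, a sketch of the encode half of that pipeline; zlib's compressBound(), compress(), and crc32() are real calls, while base64_encode() is an assumed helper:

#include <zlib.h>
#include <cstdint>
#include <string>
#include <vector>

std::string base64_encode(const std::vector<uint8_t>& bytes);  // assumed

std::string encodeForHumans(const std::vector<uint8_t>& raw) {
    uLongf destLen = compressBound(raw.size());
    std::vector<uint8_t> packed(destLen);
    compress(packed.data(), &destLen, raw.data(), raw.size());
    packed.resize(destLen);                       // shrink to actual size
    uint32_t sum = crc32(0L, packed.data(), packed.size());
    for (int i = 0; i < 4; i++)                   // append checksum bytes
        packed.push_back(static_cast<uint8_t>((sum >> (8 * i)) & 0xff));
    return base64_encode(packed);
}

Decoding reverses the steps: base64 decode, split off and verify the checksum, then uncompress().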
This is similar to what I have used in the past. There are certainly better ways of doing this, but I used this method because it was easy to mirror in Transact-SQL, which was a requirement at the time. You could certainly modify this to incorporate Huffman encoding if the distribution of your ids is non-random, but it's probably unnecessary.
You didn't specify a language, so this is in C#, but it should be very easy to port to any language. In the lookup you'll see that commonly confused characters are omitted, which should speed up entry. I also had a requirement for a fixed length, but it would be easy for you to modify this.
static public class CodeGenerator
{
static Dictionary<int, char> _lookupTable = new Dictionary<int, char>();
static CodeGenerator()
{
PrepLookupTable();
}
private static void PrepLookupTable()
{
_lookupTable.Add(0,'3');
_lookupTable.Add(1,'2');
_lookupTable.Add(2,'5');
_lookupTable.Add(3,'4');
_lookupTable.Add(4,'7');
_lookupTable.Add(5,'6');
_lookupTable.Add(6,'9');
_lookupTable.Add(7,'8');
_lookupTable.Add(8,'W');
_lookupTable.Add(9,'Q');
_lookupTable.Add(10,'E');
_lookupTable.Add(11,'T');
_lookupTable.Add(12,'R');
_lookupTable.Add(13,'Y');
_lookupTable.Add(14,'U');
_lookupTable.Add(15,'A');
_lookupTable.Add(16,'P');
_lookupTable.Add(17,'D');
_lookupTable.Add(18,'S');
_lookupTable.Add(19,'G');
_lookupTable.Add(20,'F');
_lookupTable.Add(21,'J');
_lookupTable.Add(22,'H');
_lookupTable.Add(23,'K');
_lookupTable.Add(24,'L');
_lookupTable.Add(25,'Z');
_lookupTable.Add(26,'X');
_lookupTable.Add(27,'V');
_lookupTable.Add(28,'C');
_lookupTable.Add(29,'N');
_lookupTable.Add(30,'B');
}
public static bool TryPCodeDecrypt(string iPCode, out Int64 oDecryptedInt)
{
//Prep the result so we can exit without having to fiddle with it if we hit an error.
oDecryptedInt = 0;
if (iPCode.Length > 3)
{
Char[] Bits = iPCode.ToCharArray(0,iPCode.Length-2);
int CheckInt7 = 0;
int CheckInt3 = 0;
if (!int.TryParse(iPCode[iPCode.Length-1].ToString(),out CheckInt7) ||
!int.TryParse(iPCode[iPCode.Length-2].ToString(),out CheckInt3))
{
//Unsuccessful -- the last check ints are not integers.
return false;
}
//Adjust the CheckInts to the right values.
CheckInt3 -= 2;
CheckInt7 -= 2;
int COffset = iPCode.LastIndexOf('M')+1;
Int64 tempResult = 0;
int cBPos = 0;
while ((cBPos + COffset) < Bits.Length)
{
//Calculate the current position.
int cNum = 0;
foreach (int cKey in _lookupTable.Keys)
{
if (_lookupTable[cKey] == Bits[cBPos + COffset])
{
cNum = cKey;
}
}
tempResult += cNum * (Int64)Math.Pow((double)31, (double)(Bits.Length - (cBPos + COffset + 1)));
cBPos += 1;
}
if (tempResult % 7 == CheckInt7 && tempResult % 3 == CheckInt3)
{
oDecryptedInt = tempResult;
return true;
}
return false;
}
else
{
//Unsuccessful -- too short.
return false;
}
}
public static string PCodeEncrypt(int iIntToEncrypt, int iMinLength)
{
int Check7 = (iIntToEncrypt % 7) + 2;
int Check3 = (iIntToEncrypt % 3) + 2;
StringBuilder result = new StringBuilder();
result.Insert(0, Check7);
result.Insert(0, Check3);
int workingNum = iIntToEncrypt;
while (workingNum > 0)
{
result.Insert(0, _lookupTable[workingNum % 31]);
workingNum /= 31;
}
if (result.Length < iMinLength)
{
for (int i = result.Length + 1; i <= iMinLength; i++)
{
result.Insert(0, 'M');
}
}
return result.ToString();
}
}