Which index varies fastest in a VB array? - vb6

When using a Visual Basic two-dimensional array, which index varies fastest? In other words, when filling in an array, should I write...
For i = 1 To 30
    For j = 1 To 30
        myarray(i, j) = something
    Next
Next
or
For i = 1 To 30
    For j = 1 To 30
        myarray(j, i) = something
    Next
Next
(or alternatively does it make very much difference)?

Column major. VB6 uses COM SAFEARRAYs and lays them out in column-major order, so the leftmost index addresses adjacent memory. The fastest access pattern is this (although it won't matter if you only have 30x30 elements).
For i = 1 To 30
    For j = 1 To 30
        myarray(j, i) = something
    Next
Next
If you really want to speed up your array processing, consider the tips in Advanced Visual Basic by Matt Curland, which shows you how to poke around inside the underlying SAFEARRAY structures.
For instance, accessing a 2D SAFEARRAY is considerably slower than accessing a 1D SAFEARRAY, so to set all array entries to the same value it is quicker to bypass VB6's SAFEARRAY descriptor and temporarily substitute one of your own (page 33).
You should also consider turning on "Remove array bounds checks" in the project properties compile options.
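As an aside, you can see what column-major means with a quick sketch in Python/numpy (an analogue only, not VB6, but the layout rule is the same: in a Fortran/column-major array, stepping the first index moves to the adjacent memory location):
import numpy as np

# Column-major layout, like a COM SAFEARRAY (numpy analogue, not VB6 itself).
a = np.zeros((30, 30), order='F')
print(a.strides)   # (8, 240): +1 on the first index moves 8 bytes, +1 on the second jumps a whole column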

I don't know if (or where) this is specified. It might be left as 'implementation defined'.
But I would expect the first index to be the 'lower' dimension, i.e. the big chunks, and the following index positions to be ever more fine-grained.
Edit: Seems I was wrong. VB6 uses a column-first (column-major) approach.
Does it make much difference?
You would have to measure, but given VB6's column-major layout, keeping the last index in the outer loop and the first index in the inner loop would allow the compiler to generate faster code and could make better use of the processor cache (locality). But with a size of 30 I wouldn't expect much difference.

Related

A memory efficient way for a randomized single pass over a set of indices

I have a big file (about 1 GB) which I am using as a basis for some data integrity testing. I'm using Python 2.7 for this because I don't care so much about how fast the writes happen; my window for data corruption should be big enough (and it's easier to submit a Python script to the machine I'm using for testing).
To do this I'm writing a sequence of 32-bit integers to the file as a background process while other code is running, like the following:
from struct import pack

with open('./FILE', 'rb+', buffering=0) as f:
    f.seek(0)
    counter = 1
    while counter < SIZE + 1:
        f.write(pack('>i', counter))
        counter += 1
Then after I do some other stuff it's very easy to see if we missed a write, since there will be a gap instead of the sequentially increasing sequence. This works well enough. My problem is that some data corruption cases might only be caught with random I/O (not sequential like this), based on how we track changes to files.
So what I need is a method for performing a single random-order pass over my 1 GB file, but I can't really store the order in memory, since 1 GB ~= 250 million 4-byte integers. I considered chunking the file into smaller pieces and indexing those, maybe 500 KB each, but if there is a way to write a generator that can do the same job, that would be awesome. Like this:
from struct import pack

def rand_index_generator():
    generator = RAND_INDEX(1, MAX + 1, NO_REPLACEMENT)  # pseudocode: indices 1..MAX, no replacement
    counter = 0
    while counter < MAX:
        counter += 1
        yield generator.next_index()

with open('./FILE', 'rb+', buffering=0) as f:
    counter = 1
    for index in rand_index_generator():
        f.seek(4 * index)
        f.write(pack('>i', counter))
        counter += 1
I need it:
Not to run out of memory (so no pouring the random sequence into a list)
To be reproducible so I can verify these values in the same order later
Is there a way to do this in Python 2.7?
Just to provide an answer for anyone who has the same problem, the approach that I settled on was this, which worked well enough if you don't need something all that random:
def rand_index_generator(a, b):
    ctr = 0
    while True:
        yield ctr % b
        ctr += a
Then initialize it with your index size b and a value a which is coprime to b. This is easy to choose if b is a power of two, since a then just needs to be odd. The two values being coprime is a hard requirement (otherwise the cycle won't visit every index), so you might have to do more work if your index size b is not as easily factored as a power of 2.
index_gen = rand_index_generator(1934919251, 2**28)
Then each time you want a new index you call index_gen.next() (Python 2.7), and it will iterate over the numbers in [0, 2^28 - 1] in a semi-random-looking order that depends on your choice of a.
There's really no point in picking an a larger than your index size, since only a mod b matters anyway. This isn't a very good approach in terms of randomness, but it's very efficient in terms of memory and speed, which is what I care about for simulating this write workload.
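As a quick sanity check of the coprime requirement, here is a minimal sketch (mine, not part of the original answer) that verifies one full cycle of the generator visits every index exactly once for a small b:
from itertools import islice

def rand_index_generator(a, b):
    ctr = 0
    while True:
        yield ctr % b
        ctr += a

b = 2 ** 10                  # a small index space just for the check
a = 1934919251               # odd, hence coprime to any power of two
seen = set(islice(rand_index_generator(a, b), b))
assert len(seen) == b        # full coverage: no index repeated within one cycle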

Why are my nested for loops taking so long to compute?

I have a code that generates all of the possible combinations of 4 integers between 0 and 36.
This will be 37^4 numbers = 1874161.
My code is written in MATLAB:
i = 0;
for a = 0:36
    for b = 0:36
        for c = 0:36
            for d = 0:36
                i = i + 1;
                combination(i,:) = [a, b, c, d];
            end
        end
    end
end
I've tested this using the number 3 instead of 36 and it worked fine.
If there are 1874161 combinations, and with an overly cautious guess of 100 clock cycles to do the additions and write the values, then on a 2.3 GHz PC this is:
1874161 * (1/2300000000) * 100 = 0.08148526086 seconds
A fraction of a second. But it has been running for about half an hour so far.
I did receive a warning that combination changes size on every loop iteration and that I should consider preallocating it for speed, but that can't affect it that much, can it?
As @horchler suggested, you need to preallocate the target array.
This is because without preallocation your program is no longer O(N^4). Each time you add a new row, the array has to be resized: a new, bigger array is created (since MATLAB does not know how big the array will end up, it may grow it by only one row at a time), the old array is copied into it, and the old array is deleted. So when you have 10 rows and add the 11th, a copy of 10 rows is added to that iteration. If I am not mistaken, the total copying work is therefore roughly
1 + 2 + 3 + ... + N^4 ≈ (N^4)^2 / 2
elements, i.e. about O(N^8) instead of O(N^4), which is massively larger.
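For intuition, here is a small analogue in Python/numpy (an illustration of the same effect, not MATLAB): growing the array row by row copies everything over and over, while preallocating fills it in a single linear pass.
import numpy as np
from itertools import product

K = 37 ** 4

# Quadratic: every vstack reallocates and copies all rows accumulated so far.
# combination = np.empty((0, 4), dtype=int)
# for row in product(range(37), repeat=4):
#     combination = np.vstack([combination, row])

# Linear: preallocate once, then fill in place.
combination = np.zeros((K, 4), dtype=int)
for i, row in enumerate(product(range(37), repeat=4)):
    combination[i, :] = row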
Also, the reallocations keep growing in size, so as i crosses each cache-size boundary the copying slows down even further.
The only way around this without preallocation is to store the result in a linked list.
I'm not sure MATLAB offers that, and it would need one or two pointers (32/64-bit values) per item, which makes the structure 2+ times bigger.
If you need even more speed, there are further options (probably not for MATLAB):
use multi-threading, since filling the array is fully parallelisable
use memory block copies (rep movsd) or DMA, since the data repeats periodically
You could also consider computing the values from i on the fly instead of storing the whole array; depending on the usage this can be faster in some cases...
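If that last route interests you, here is a tiny sketch (mine, in Python rather than MATLAB) of recovering a combination directly from a zero-based index i by base-37 decomposition, so no table is stored at all:
def combination_from_index(i, base=37):
    # zero-based index; returns (a, b, c, d) with d varying fastest, as in the loops above
    d = i % base; i //= base
    c = i % base; i //= base
    b = i % base; i //= base
    a = i % base
    return (a, b, c, d)

assert combination_from_index(0) == (0, 0, 0, 0)
assert combination_from_index(37**4 - 1) == (36, 36, 36, 36)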

How to run a method in parallel using Julia?

I was reading the Parallel Computing docs of Julia, and having never done any parallel coding, I was left wanting a gentler intro. So, I thought of a (probably) simple problem that I couldn't figure out how to code in the parallel Julia paradigm.
Let's say I have a matrix/dataframe df from some experiment. Its N rows are variables, and its M columns are samples. I have a method pwCorr(..) that calculates the pairwise correlation of rows. If I wanted an NxN matrix of all the pairwise correlations, I'd probably run a for-loop that'd iterate over N*N/2 entries (the upper or lower triangle of the matrix) and fill in the values; however, this seems like a perfect thing to parallelize, since each of the pwCorr() calls is independent of the others. (Am I correct in thinking this way about what can be parallelized, and what cannot?)
To do this, I feel like I'd have to create a DArray that gets filled by a @parallel for loop. And if so, I'm not sure how this can be achieved in Julia. If that's not the right approach, I guess I don't even know where to begin.
This should work. First you need to propagate the top-level variable (data) to all the workers:
for pid in workers()
    remotecall(pid, x->(global data; data=x; nothing), data)
end
then perform the computation in chunks using the DArray constructor with some fancy indexing:
corrs = DArray((20,20)) do I
    out = zeros(length(I[1]), length(I[2]))
    for i = I[1], j = I[2]
        if i < j
            out[i-minimum(I[1])+1, j-minimum(I[2])+1] = 0.0
        else
            out[i-minimum(I[1])+1, j-minimum(I[2])+1] = cor(vec(data[i,:]), vec(data[j,:]))
        end
    end
    out
end
In more detail, the DArray constructor takes a function which takes a tuple of index ranges and returns a chunk of the resulting matrix which corresponds to those index ranges. In the code above, I is the tuple of ranges with I[1] being the first range. You can see this more clearly with:
julia> DArray((10,10)) do I
           println(I)
           return zeros(length(I[1]), length(I[2]))
       end
    From worker 2:  (1:10,1:5)
    From worker 3:  (1:10,6:10)
where you can see it split the array into two chunks on the second axis.
The trickiest part of the example was converting from these 'global' index ranges to local index ranges by subtracting off the minimum element and then adding back 1 for Julia's 1-based indexing.
Hope that helps!

On Two Plus Two poker hand evaluator, how do you get the best 5 cards combination out of the 7 that you passed to it?

Is it possible to extract that info from the equivalence value?
I understand that the higher the equivalence value, the better. Category and rank can also be extracted from the equivalence value. But is there a way to find out what the best 5-card combination is from the 7 that you passed to it?
Twoplustwo is the fastest poker hand evaluator around (14-15 million hands evaluated per second). You give your 7 cards to it and it spits out a hand equivalence value. The higher the value, the better the hand is.
Here's a great summary on twoplustwo: http://www.codingthewheel.com/archives/poker-hand-evaluator-roundup#2p2
Cached version of the link above:
http://web.archive.org/web/20130116102452/http://www.codingthewheel.com/archives/poker-hand-evaluator-roundup
(disclaimer: I'm working on poker software that does, amongst other things, hand evaluations)
You give your 7 cards to it and it spits out a hand equivalence value.
There are several evaluators doing that, and if I'm not mistaken some of them compute more than a hundred million hands per second (!). These evaluators basically come down to 7 array lookups in a gigantic table, and it only takes a few cycles (despite the cache misses) to evaluate a hand. I don't think 14-15 million/second is anywhere near the fastest; CactusKev's evaluator is 10x faster, if I'm not mistaken.
Now to answer your question:
how do you get the best 5 cards combination out of the 7 that you passed to it?
Well, it doesn't tell you, but once you have the strength of the hand it becomes very easy: you don't need to re-invent the wheel.
You can use the strength to simplify your "best five out of seven" computation.
You could also use other libraries that give you the five best cards directly (instead of just their strength), or you can use the strength to find the five best cards yourself.
I'll just give a few examples...
You know you have a full house (a.k.a. a "boat"): then you know you're looking for three cards that share the same rank and then the best pair (there may be two pairs to choose from, but you're sure to find at least one, because the evaluator told you you have a boat).
You know you have a straight: find five cards that follow each other, starting from the highest one (beware of the special wheel case).
You could also get a bit fancier for the straight: you could take the strength of every possible straight and compare the strength that the evaluator gives you with these. If it matches, say, a ten-high straight, then simply look for any T, 9, 8, 7 and 6 card (no matter the suit).
You know you have "no pair": simply take the five highest card you find
etc.
There are only a few different ranks... They could be, for example:
NO_PAIR
ONE_PAIR
TWO_PAIRS
SET
STRAIGHT
FLUSH
FULL_HOUSE
FOUR_OF_A_KIND
STRAIGHT_FLUSH
(you could of course create intermediate "wheel straight" and "wheel straight flush" and "royal flush" cases if you want, etc.)
Once you know which type of hand your hand is (thanks to the fast evaluator you're using), simply switch to a piece of code that finds the five best out of seven for that particular hand type.
I think it's a good way to proceed because you leverage the ultra-fast evaluator and it then greatly simplifies your logic.
At startup, you'd need to compute the threshold strengths once, for example by computing:
HIGHEST_NO_PAIR_HAND = ultraFastEvaluator( "As Kd Qh Jc 9d 5s 2c" );
HIGHEST_FULL_HOUSE = ultraFastEvaluator( "As Ac Ad Kh Ks 8s 2h" );
I'm of course not advocating to use strings here. It's just an example...
You could then, for each hand you want to find the actual five best:
compute the strength using the fast evaluator
is it <= HIGHEST_NO_PAIR_HAND?
    yes: take the five highest cards
    no: is it <= HIGHEST_ONE_PAIR_HAND?
        yes: take the highest pair + the highest three remaining cards
        no: is it <= HIGHEST_TWO_PAIRS_HAND?
            etc.
So in my opinion you could either reuse an API that directly finds the five best out of seven or write your own, but it's going to be faster if you use the fast evaluator's result to simplify your logic.
EDIT: note that there isn't necessarily a unique "five best out of seven". For example with As Ac on a Kc Kd Qh Qs 2c board, both "As Ac Kc Kd Qh" and "As Ac Kc Kd Qs" are "five best" (the last queen's suit doesn't matter).
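To make the per-category idea concrete, here is a small self-contained sketch (my own illustration in Python, not the evaluator's API; the (rank, suit) card representation is an assumption) of two of the simple finders such a dispatch would call:
from collections import Counter

def five_highest(cards):
    # "no pair" case: just keep the five highest ranks
    return sorted(cards, key=lambda c: c[0], reverse=True)[:5]

def pair_plus_kickers(cards):
    # "one pair" case: the highest paired rank, then the three highest kickers
    counts = Counter(rank for rank, suit in cards)
    pair_rank = max(r for r, n in counts.items() if n >= 2)
    pair = [c for c in cards if c[0] == pair_rank][:2]
    kickers = sorted((c for c in cards if c[0] != pair_rank),
                     key=lambda c: c[0], reverse=True)[:3]
    return pair + kickers

# Example: As Ad in the hole on a Kc 9h 7d 4s 2c board (rank 14 = ace)
seven = [(14, 's'), (14, 'd'), (13, 'c'), (9, 'h'), (7, 'd'), (4, 's'), (2, 'c')]
print(pair_plus_kickers(seven))   # the two aces plus the K, 9 and 7 kickers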
No, it's not possible to extract that information. The lookup table contains only the equivalence values, which are broken into hand type and rank; no other information is preserved.
If you need to evaluate millions of hands per second and get the winning hand for each, instead of just the rank, you'll need to use a different evaluator. If you only need to extract the winning hand rarely, you could use this evaluator, and resort to a slower method to find the best 5 cards when necessary.
Old post but I'll give it a shot. If you're using a table lookup (e.g., the 7-card arrays mentioned above, aka the Ray Wotton method), construct a second table with your target information in the same slot positions. Example: I ended up in slot 167,452 to find my eval, now I'll look at my other array in slot 167,452 to find my 5-card hand.
A single card can be represented by 6 bits: 2 for the suit and 4 for the rank. 30 bits would give you the entire 5-card hand. Maybe not quite that simple, but that's the general idea. I've used this exact technique for some stuff I did a while back.
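A minimal sketch of that packing (my own encoding, in Python; not Ray Wotton's actual tables): six bits per card, four for the rank and two for the suit, so five cards fit in 30 bits.
def pack_hand(cards):
    # cards: five (rank, suit) pairs with rank in 0..12 and suit in 0..3
    value = 0
    for rank, suit in cards:
        value = (value << 6) | (rank << 2) | suit
    return value                      # fits in 30 bits

def unpack_hand(value):
    cards = []
    for _ in range(5):                # the lowest 6 bits hold the last card packed
        cards.append(((value >> 2) & 0xF, value & 0x3))
        value >>= 6
    return list(reversed(cards))

hand = [(12, 0), (11, 1), (10, 2), (9, 3), (8, 0)]   # e.g. A K Q J T
assert unpack_hand(pack_hand(hand)) == hand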
Alternatively, you could pass in all the 7-choose-5-card combinations (21 of 'em I believe) and find out which matches the original eval.
The twoplustwo hand evaluator can also evaluate five-card hands. Here's the code for that in C#:
int LookupFiveCardHand(int[] cards) {
    // assert cards.Length == 5
    int i = 0;
    int p = HR[53 + cards[i++]];
    p = HR[p + cards[i++]];
    p = HR[p + cards[i++]];
    p = HR[p + cards[i++]];
    p = HR[p + cards[i++]];
    return HR[p];
}
Notice there's 6 array look-ups despite there being 5 cards.
Anyways, since the evaluator is so fast, you can just compare every possible 5-card combination. A 7-card hand has twenty-one 5-card combinations. Code in C#:
List<int> GetBestFiveCards(List<int> sevenCardHand) {
    List<List<int>> fiveCardHandCombos = new List<List<int>>();
    // adds all combinations of five cards to fiveCardHandCombos
    for (int i = 0; i < sevenCardHand.Count; i++) {
        for (int j = i + 1; j < sevenCardHand.Count; j++) {
            List<int> fiveCardCombo = new List<int>(sevenCardHand);
            fiveCardCombo.RemoveAt(j); // j > i, so remove j first
            fiveCardCombo.RemoveAt(i);
            fiveCardHandCombos.Add(fiveCardCombo);
        }
    }
    Dictionary<List<int>, int> comboToValue = new Dictionary<List<int>, int>();
    for (int i = 0; i < fiveCardHandCombos.Count; i++) {
        comboToValue.Add(fiveCardHandCombos[i], LookupFiveCardHand(fiveCardHandCombos[i]));
    }
    int maxValue = comboToValue.Values.Max();
    return comboToValue.Where(x => x.Value == maxValue).Select(x => x.Key).First(); // grab only the first combo in the event of a tie
}

Adaptive IO Optimization Problem

Here is an interesting optimization problem that I have been thinking about for some days now:
In a system I read data from a slow IO device. I don't know beforehand how much data I need. The exact length is only known once I have read an entire package (think of it as having some kind of end symbol). Reading more data than required is not a problem, except that it wastes time in IO.
Two constraints also come into play: reads are very slow, and each byte I read costs. Also, each read request has a constant setup cost regardless of the number of bytes I read. This makes reading byte by byte costly. As a rule of thumb: the setup cost is roughly as expensive as a read of 5 bytes.
The packages I read are usually between 9 and 64 bytes, but there are rare occurrences of larger or smaller packages. The entire range is between 1 and 120 bytes.
Of course I know a little bit of my data: Packages come in sequences of identical sizes. I can classify three patterns here:
Sequences of reads with identical sizes:
A A A A A ...
Alternating sequences:
A B A B A B A B ...
And sequences of triples:
A B C A B C A B C ...
The special case of degenerate triples exists as well:
A A B A A B A A B ...
(A, B and C denote some package size between 1 and 120 here).
Question:
Based on the sizes of the previous packages, how do I predict the size of the next read request? I need something that adapts fast, uses little storage (let's say below 500 bytes) and is fast from a computational point of view as well.
Oh, and pre-generating some tables won't work, because the statistics of the read sizes can vary a lot between the different devices I read from.
Any ideas?
You need to read at least 3 packages and at most 4 packages to identify the pattern.
Read 3 packages. If they are all the same size, the pattern is AAAAAA...
If they are not all the same size, read the 4th package. If 1=3 and 2=4, the pattern is ABAB... Otherwise, the pattern is ABCABC...
With that outline, it is probably a good idea to do a speculative read of 3 package sizes (something like 3*64 bytes in a single go).
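A minimal sketch of that classification (my own, in Python, under the same at-most-4-reads assumption), where sizes is the list of package sizes observed so far:
def classify(sizes):
    a, b, c = sizes[0], sizes[1], sizes[2]
    if a == b == c:
        return "AAA..."              # constant size, decided after 3 reads
    d = sizes[3]                     # need the 4th package to disambiguate
    if a == c and b == d:
        return "ABAB..."
    return "ABCABC..."               # also covers the degenerate AAB case

print(classify([9, 9, 9]))           # AAA...
print(classify([9, 64, 9, 64]))      # ABAB...
print(classify([9, 9, 64, 9]))       # ABCABC... (degenerate triple A A B)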
I don't see a problem here. But first, several questions:
1) Can you read the input asynchronously (e.g. separate thread, interrupt routine, etc.)?
2) Do you have some free memory for a buffer?
3) If you've commanded a longer read, are you able to obtain the first byte(s) before the whole packet is read?
If so (and I think in most cases this can be implemented), then you can just have a separate thread that reads at the highest possible speed and stores the bytes in a buffer, stalling when the buffer gets full, so that your normal process can use a synchronous getc() on that buffer.
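A minimal sketch of that reader-thread-plus-bounded-buffer idea (mine; device.read_byte() is a stand-in for however your slow device exposes a blocking single-byte read):
import threading
try:
    from Queue import Queue          # Python 2
except ImportError:
    from queue import Queue          # Python 3

buf = Queue(maxsize=4096)            # bounded, so the reader stalls when it is full

def reader(device):
    while True:
        buf.put(device.read_byte())  # put() blocks whenever the buffer is full

def getc():
    return buf.get()                 # the normal process reads synchronously from here

# t = threading.Thread(target=reader, args=(device,))
# t.daemon = True
# t.start()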
EDIT: I see, it's because of CRC or encryption? Well, then you could use some ideas from data compression:
Consider a simple adaptive algorithm of order N for M possible symbols:
int freqs[M][M][M]; // [a][b][c]: occurrences of outcome "c" when the previous values were "a" and "b"
int prev[2];        // some history

int predict() {
    int i, prediction = 0;
    for (i = 1; i < M; i++)
        if (freqs[prev[0]][prev[1]][i] > freqs[prev[0]][prev[1]][prediction])
            prediction = i;
    return prediction;
}

void add_outcome(int val) {
    int i;
    if (freqs[prev[0]][prev[1]][val]++ > DECAY_LIMIT) {
        for (i = 0; i < M; i++)
            freqs[prev[0]][prev[1]][i] >>= 1;
    }
    prev[0] = prev[1];
    prev[1] = val;
}
freqs has to be an array of order N+1, and you have to remember the N previous values. N and DECAY_LIMIT have to be adjusted according to the statistics of the input. However, even they can be made adaptive (for example, if it produces too many misses, the decay limit can be shortened).
The last problem would be the alphabet. Depending on the context, if there are only a few distinct sizes you can create a one-to-one mapping to your symbols; if there are more, you can use quantization to limit the number of symbols. The whole algorithm can be written with pointer arithmetic, so that N and M don't have to be hardcoded.
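A small sketch of that size-to-symbol mapping (mine, in Python, not from the answer above): exact one-to-one mapping while the alphabet has room, crude nearest-size quantization afterwards, so freqs stays an M x M x M table with a modest M.
class SizeAlphabet(object):
    def __init__(self, max_symbols=16):
        self.max_symbols = max_symbols
        self.size_to_symbol = {}
        self.symbol_sizes = []

    def symbol(self, size):
        if size in self.size_to_symbol:                  # already known: one-to-one
            return self.size_to_symbol[size]
        if len(self.symbol_sizes) < self.max_symbols:    # room left: allocate a new symbol
            self.size_to_symbol[size] = len(self.symbol_sizes)
            self.symbol_sizes.append(size)
            return self.size_to_symbol[size]
        nearest = min(self.symbol_sizes, key=lambda s: abs(s - size))
        return self.size_to_symbol[nearest]              # full: quantize to the nearest known size

alphabet = SizeAlphabet()
print([alphabet.symbol(s) for s in (9, 64, 9, 120)])     # [0, 1, 0, 2]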
Since reading is so slow, I suppose you can throw some CPU power at it so you can try to make an educated guess of how much to read.
That would basically be a predictor with a model based on probabilities: it would generate a set of predictions for the upcoming message size, together with the cost of each, and then pick the read size with the best expected cost.
Then, when you find out the actual message size, use Bayes' rule to update the model probabilities, and do it again.
Maybe this sounds complicated, but if the probabilities are stored as fixed-point fractions you won't have to deal with floating point, so it may not be much code. I would use something like a Metropolis-Hastings algorithm as my basic simulator and Bayesian update framework. (This is just an initial stab at thinking about it.)
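Here is a hedged sketch (my own simplification, in Python) of the expected-cost idea: given the predictor's distribution over the next package size and the cost model from the question (a setup cost worth about 5 bytes plus one unit per byte), pick the first-read length with the lowest expected cost. It assumes that if the first read falls short, the true size becomes known and exactly one follow-up read finishes the package.
SETUP_COST = 5.0   # a read request costs roughly as much as reading 5 bytes

def expected_cost(read_len, size_probs):
    # size_probs: dict mapping possible package size -> probability
    cost = SETUP_COST + read_len                    # the first request
    for size, p in size_probs.items():
        if size > read_len:                         # one follow-up request for the rest
            cost += p * (SETUP_COST + (size - read_len))
    return cost

def best_read_length(size_probs):
    candidates = range(1, max(size_probs) + 1)
    return min(candidates, key=lambda n: expected_cost(n, size_probs))

# Example: the predictor thinks the next package is 9 bytes (70%) or 64 bytes (30%)
print(best_read_length({9: 0.7, 64: 0.3}))          # 9: read 9 first, top up 30% of the time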
