Generate random binaries with an average Hamming distance of 50%?

I want to generate binary vectors where the average Hamming distance between items is around 50%.
The second condition is that the distance should not fall below ~40% or go above ~60%.
Another complication is that the items are not generated all at once but one at a time, and I don't want to loop over all existing items to check and regenerate, because that becomes slow as the collection grows.
Is there a mechanism or algorithm to achieve this?
Currently I use the following code:
def rand(size):
    op = np.random.uniform()
    return np.random.choice([0, 1], size=size, p=[op, 1 - op])
but it breaks even when I generate just 10 items, for example (Hamming distances):
[ [ 0 2510 8209 4305 3896 1619 7231 6356 8103 3265]
[2510 0 8131 4347 3940 1697 7219 6334 8037 3305]
[8209 8131 0 5858 6449 9312 2100 3317 1030 7196]
[4305 4347 5858 0 4661 4088 5598 5311 5764 4590]
[3896 3940 6449 4661 0 3485 6093 5650 6385 4251]
[1619 1697 9312 4088 3485 0 8034 6739 9152 2716]
[7231 7219 2100 5598 6093 8034 0 3831 2238 6510]
[6356 6334 3317 5311 5650 6739 3831 0 3405 5933]
[8103 8037 1030 5764 6385 9152 2238 3405 0 7112]
[3265 3305 7196 4590 4251 2716 6510 5933 7112 0]]
min: 1030
Avg distance: 0.470624 (i.e. ~47%)
By the way, the binaries are 10,000 bits each.
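(A likely reason for the large spread: each call to rand draws its own bias op, so two vectors generated with biases p1 and p2 are expected to differ in a fraction p1*(1-p2) + p2*(1-p1) of their bits, which equals 0.5 only when one of the biases is 0.5. Two heavily skewed vectors, e.g. p1 = p2 = 0.9, are expected to differ in only 2*0.9*0.1 = 18% of bits, which is how distances as small as the 1030 above can occur.)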

So far the following solution seems to behave according to my expectations.
def rand(size):
    return [np.random.randint(0, 2) for _ in xrange(size)]
I will update after more extensive tests, if it holds up.
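As a quick sanity check (a minimal sketch; the helper names are just illustrative), note that with a fair coin per bit each pairwise distance is Binomial(10000, 0.5), so its standard deviation is sqrt(10000 * 0.25) = 50 bits and essentially every pair lands well inside the 40-60% band:
import numpy as np
from itertools import combinations

def rand_bits(size):
    # one fair coin flip per bit
    return np.random.randint(0, 2, size=size)

items = [rand_bits(10000) for _ in range(10)]
dists = [np.count_nonzero(a != b) for a, b in combinations(items, 2)]
print(min(dists), max(dists), np.mean(dists) / 10000.0)  # min/max hover near 5000, mean ~0.5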

Related

Description of the ENGINE LOG in IBM ILOG CPLEX

I want to understand the engine log of IBM ILOG CPLEX Studio for an ILP model. I have checked their documentation but could not get a clear idea.
Example of an engine log:
Version identifier: 22.1.0.0 | 2022-03-09 | 1a383f8ce
Legacy callback pi
Tried aggregator 2 times.
MIP Presolve eliminated 139 rows and 37 columns.
MIP Presolve modified 156 coefficients.
Aggregator did 11 substitutions.
Reduced MIP has 286 rows, 533 columns, and 3479 nonzeros.
Reduced MIP has 403 binaries, 0 generals, 0 SOSs, and 129 indicators.
Presolve time = 0.05 sec. (6.16 ticks)
Found incumbent of value 233.000000 after 0.07 sec. (9.40 ticks)
Probing time = 0.00 sec. (1.47 ticks)
Tried aggregator 2 times.
Detecting symmetries...
Aggregator did 2 substitutions.
Reduced MIP has 284 rows, 531 columns, and 3473 nonzeros.
Reduced MIP has 402 binaries, 129 generals, 0 SOSs, and 129 indicators.
Presolve time = 0.01 sec. (2.87 ticks)
Probing time = 0.00 sec. (1.45 ticks)
Clique table members: 69.
MIP emphasis: balance optimality and feasibility.
MIP search method: dynamic search.
Parallel mode: deterministic, using up to 8 threads.
Root relaxation solution time = 0.00 sec. (0.50 ticks)
Nodes Cuts/
Node Left Objective IInf Best Integer Best Bound ItCnt Gap
* 0+ 0 233.0000 18.0000 92.27%
* 0+ 0 178.0000 18.0000 89.89%
* 0+ 0 39.0000 18.0000 53.85%
0 0 22.3333 117 39.0000 22.3333 4 42.74%
0 0 28.6956 222 39.0000 Cuts: 171 153 26.42%
0 0 31.1543 218 39.0000 Cuts: 123 251 20.12%
0 0 32.1544 226 39.0000 Cuts: 104 360 17.55%
0 0 32.6832 212 39.0000 Cuts: 102 456 16.20%
0 0 33.1524 190 39.0000 Cuts: 65 521 14.99%
Detecting symmetries...
0 0 33.3350 188 39.0000 Cuts: 66 566 14.53%
0 0 33.4914 200 39.0000 Cuts: 55 614 14.12%
0 0 33.6315 197 39.0000 Cuts: 47 673 13.77%
0 0 33.6500 207 39.0000 Cuts: 61 787 13.72%
0 0 33.7989 206 39.0000 Cuts: 91 882 13.34%
* 0+ 0 38.0000 33.7989 11.06%
0 0 33.9781 209 38.0000 Cuts: 74 989 10.58%
0 0 34.0074 209 38.0000 Cuts: 65 1043 10.51%
0 0 34.2041 220 38.0000 Cuts: 63 1124 9.99%
0 0 34.2594 211 38.0000 Cuts: 96 1210 9.84%
0 0 34.3032 216 38.0000 Cuts: 86 1274 9.73%
0 0 34.3411 211 38.0000 Cuts: 114 1353 9.63%
0 0 34.3420 220 38.0000 Cuts: 82 1402 9.63%
0 0 34.3709 218 38.0000 Cuts: 80 1462 9.55%
0 0 34.4494 228 38.0000 Cuts: 87 1530 9.34%
0 0 34.4882 229 38.0000 Cuts: 97 1616 9.24%
0 0 34.5173 217 38.0000 Cuts: 72 1663 9.16%
0 0 34.5545 194 38.0000 Cuts: 67 1731 9.07%
0 0 34.5918 194 38.0000 Cuts: 76 1786 8.97%
0 0 34.6094 199 38.0000 Cuts: 73 1840 8.92%
0 0 34.6226 206 38.0000 Cuts: 77 1883 8.89%
0 0 34.6421 206 38.0000 Cuts: 53 1928 8.84%
0 0 34.6427 213 38.0000 Cuts: 84 1982 8.83%
Detecting symmetries...
0 2 34.6427 213 38.0000 34.6478 1982 8.82%
Elapsed time = 0.44 sec. (235.86 ticks, tree = 0.02 MB, solutions = 4)
GUB cover cuts applied: 32
Cover cuts applied: 328
Implied bound cuts applied: 205
Flow cuts applied: 11
Mixed integer rounding cuts applied: 17
Zero-half cuts applied: 35
Gomory fractional cuts applied: 1
Root node processing (before b&c):
Real time = 0.43 sec. (235.61 ticks)
Parallel b&c, 8 threads:
Real time = 0.27 sec. (234.23 ticks)
Sync time (average) = 0.11 sec.
Wait time (average) = 0.00 sec.
------------
Total (root+branch&cut) = 0.71 sec. (469.84 ticks)
Mainly I want to understand what Nodes, Left, Gap, root node processing, and parallel b&c are.
I hope someone will share a resource or explain it clearly, so that it can be helpful to anyone starting out with IBM ILOG CPLEX Studio in the future.
Thanks a lot in advance.
I am expecting someone to fill the knowledge gaps regarding the engine log of IBM's ILOG CPLEX Studio.
I recommend
Progress reports: interpreting the node log
https://www.ibm.com/docs/en/icos/12.8.0.0?topic=mip-progress-reports-interpreting-node-log
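As a worked example of the Gap column (using the usual definition of the relative MIP gap; CPLEX also adds a tiny constant in the denominator to guard against division by zero):
Gap = |Best Integer - Best Bound| / |Best Integer|
In the last root-node row above, Best Integer = 38.0000 and Best Bound = 34.6478, so Gap = (38 - 34.6478) / 38 = 3.3522 / 38 ≈ 0.0882, i.e. the 8.82% printed in the log. The Node and Left columns count, respectively, the branch-and-bound node being processed and the nodes still open, and the timing summary at the bottom splits the total time between root node processing (solving the root relaxation and adding cuts) and the parallel branch-and-cut tree search.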

Obtain a different result when evaluating Stanford NLP sentiment

I downloaded Stanford NLP 3.5.2 and ran sentiment analysis with the default configuration (i.e. I did not change anything, just unzipped and ran).
java -cp "*" edu.stanford.nlp.sentiment.Evaluate -model edu/stanford/nlp/models/sentiment/sentiment.ser.gz -treebank test.txt
EVALUATION SUMMARY
Tested 82600 labels
66258 correct
16342 incorrect
0.802155 accuracy
Tested 2210 roots
976 correct
1234 incorrect
0.441629 accuracy
Label confusion matrix
Guess/Gold 0 1 2 3 4 Marg. (Guess)
0 323 161 27 3 3 517
1 1294 5498 2245 652 148 9837
2 292 2993 51972 2868 282 58407
3 99 602 2283 7247 2140 12371
4 0 1 21 228 1218 1468
Marg. (Gold) 2008 9255 56548 10998 3791
0 prec=0.62476, recall=0.16086, spec=0.99759, f1=0.25584
1 prec=0.55891, recall=0.59406, spec=0.94084, f1=0.57595
2 prec=0.88982, recall=0.91908, spec=0.75299, f1=0.90421
3 prec=0.58581, recall=0.65894, spec=0.92844, f1=0.62022
4 prec=0.8297, recall=0.32129, spec=0.99683, f1=0.46321
Root label confusion matrix
Guess/Gold 0 1 2 3 4 Marg. (Guess)
0 44 39 9 0 0 92
1 193 451 190 131 36 1001
2 23 62 82 30 8 205
3 19 81 101 299 255 755
4 0 0 7 50 100 157
Marg. (Gold) 279 633 389 510 399
0 prec=0.47826, recall=0.15771, spec=0.97514, f1=0.2372
1 prec=0.45055, recall=0.71248, spec=0.65124, f1=0.55202
2 prec=0.4, recall=0.2108, spec=0.93245, f1=0.27609
3 prec=0.39603, recall=0.58627, spec=0.73176, f1=0.47273
4 prec=0.63694, recall=0.25063, spec=0.96853, f1=0.35971
Approximate Negative label accuracy: 0.646009
Approximate Positive label accuracy: 0.732504
Combined approximate label accuracy: 0.695110
Approximate Negative root label accuracy: 0.797149
Approximate Positive root label accuracy: 0.774477
Combined approximate root label accuracy: 0.785832
The test.txt file is downloaded from http://nlp.stanford.edu/sentiment/trainDevTestTrees_PTB.zip (which contains train.txt, dev.txt and test.txt). The download link comes from http://nlp.stanford.edu/sentiment/code.html
However, in the paper "Socher, R., Perelygin, A., Wu, J.Y., Chuang, J., Manning, C.D., Ng, A.Y. and Potts, C., 2013, October. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP) (Vol. 1631, p. 1642)", on which the sentiment analysis tool is based, the authors report an accuracy of 0.807 for 5-class classification.
Are the results I obtained normal?
I get the same results when I run it out of the box. It would not surprise me if the version of their system they made for Stanford CoreNLP differs slightly from the version in the paper.

How to calculate Total average response time

Below are the results:
sampler_label count average median 90%_line min max
Transaction1 2 61774 61627 61921 61627 61921
Transaction2 4 82 61 190 15 190
Transaction3 4 1862 1317 3612 1141 3612
Transaction4 4 1242 915 1602 911 1602
Transaction5 4 692 608 906 423 906
Transaction6 4 2764 2122 4748 1182 4748
Transaction7 4 9369 9029 11337 7198 11337
Transaction8 4 1245 890 2168 834 2168
Transaction9 4 3475 2678 4586 2520 4586
TOTAL 34 6073 1381 9913 15 61921
My question here is: how is the total average response time (6073) being calculated?
For my results, I want to exclude Transaction1's response time and then calculate the total average response time.
How can I do that?
Total Avg Response time = ((s1*t1) + (s2*t2)...)/s
s1 = No of times transaction 1 was executed
t1 = Avg response time for transaction 1
s2 = No of times transaction 2 was executed
t2 = Avg response time for transaction 2
s = Total no of samples (s1+s2..)
In your case, all transactions except Transaction1 were executed 4 times each, so a simple average of their averages (82, 1862, 1242, ...) gives the result you want.
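To make the arithmetic concrete, here is a small Python sketch (numbers copied from the results table above) that reproduces both the overall figure and the figure with Transaction1 excluded:
# (label, sample count, average response time in ms), taken from the results table
rows = [
    ("Transaction1", 2, 61774),
    ("Transaction2", 4, 82),
    ("Transaction3", 4, 1862),
    ("Transaction4", 4, 1242),
    ("Transaction5", 4, 692),
    ("Transaction6", 4, 2764),
    ("Transaction7", 4, 9369),
    ("Transaction8", 4, 1245),
    ("Transaction9", 4, 3475),
]

def weighted_avg(rows):
    total_samples = sum(s for _, s, _ in rows)
    return sum(s * t for _, s, t in rows) / float(total_samples)

print(weighted_avg(rows))                                         # ~6073, matches TOTAL
print(weighted_avg([r for r in rows if r[0] != "Transaction1"]))  # ~2591, without Transaction1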

Replacing selective numbers with NaNs

I have eight columns of data. Columns 1, 3, 5 and 7 contain 3-digit numbers. Columns 2, 4, 6 and 8 contain 1s and 0s and correspond to columns 1, 3, 5 and 7 respectively. Where there is a zero in an even column I want to change the corresponding number to NaN. More simply, if it were
155 1 345 0
328 1 288 1
884 0 145 0
326 1 332 1
159 0 186 1
then 884 would be replaced with NaN, as would 159, 345 and 145 with the other numbers remaining the same. I need to use NaN to maintain the data in matrix form.
I know I could use
data(3,1)=NaN; data(5,1)=NaN
etc., but this is very time consuming. Any suggestions would be very welcome.
Approach 1
a1 = [
155 1 345 0
328 1 288 1
884 0 145 0
326 1 332 1
159 0 186 1]
t1 = a1(:,[2:2:end])
data1 = a1(:,[1:2:end])
t1(t1==0)=NaN
t1(t1==1)=data1(t1==1)
a1(:,[1:2:end]) = t1
Output -
a1 =
155 1 NaN 0
328 1 288 1
NaN 0 NaN 0
326 1 332 1
NaN 0 186 1
Approach 2
[x1,y1] = find(~a1(:,[2:2:end]))
a1(sub2ind(size(a1),x1,2*y1-1)) = NaN
I would split the problem into two matrices, with one being a logical mask, the other holding your data.
data = your_mat(:,1:2:end);
valid = your_mat(:,2:2:end);
Then you can simply do:
data(~valid)=NaN;
You could then rebuild your data by doing:
your_mat(:,1:2:end) = data;
Here is an interesting solution, I would expect it to perform quite well, but be aware that it is a bit tricky!
data(~data(:,2:end))=NaN
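For comparison, the same mask-and-assign idea in Python/NumPy (just an illustrative sketch, not part of the original MATLAB answers); it relies on a basic slice being a view, so writing NaN through the view updates the full matrix in place:
import numpy as np

A = np.array([[155, 1, 345, 0],
              [328, 1, 288, 1],
              [884, 0, 145, 0],
              [326, 1, 332, 1],
              [159, 0, 186, 1]], dtype=float)  # float so it can hold NaN

data = A[:, 0::2]         # odd (data) columns, a view into A
valid = A[:, 1::2] != 0   # even (flag) columns as a boolean mask
data[~valid] = np.nan     # writes through the view, so A itself is updated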
Using logical indexing:
even = a1(:,2:2:end); % even columns
odd = a1(:,1:2:end); % odd columns
odd(even == 0) = NaN; % set odd columns to NaN if corresponding col is 0
a1(:,1:2:end) = odd; % assign back to a1
a1 =
155 1 NaN 0
328 1 288 1
NaN 0 NaN 0
326 1 332 1
NaN 0 186 1
Here is an alternative solution. You can use circshift, in the following manner.
First create a mask of the even columns of the same size of your input matrix A:
AM = false(size(A)); AM(:,2:2:end) = true;
Then circshift the mask (A==0)&AM one element to the left, to shift this mask onto the odd columns.
A(circshift((A==0)&AM,[0 -1])) = nan;
NOTE: I've searched for a one-liner ... I don't think it's a good one, but here is one you can use, based on my solution:
A(circshift(bsxfun(@and, A==0, mod(0:size(A,2)-1,2)),[0 -1])) = nan;
The tricky part with bsxfun is creating the mask AM on the fly. I use the parity test on a vector of column indices, and bsxfun extends it over the whole matrix A. You can create this mask any other way, of course.

How to substitute a for-loop with vectorization acting several thousand times per data.frame row?

Being still quite wet behind the ears concerning R and, more importantly, vectorization, I cannot get my head around how to speed up the code below.
The for-loop calculates the number of seeds falling onto a road for several road segments with different densities of seed-generating plants, by applying a random probability to every seed.
As my real data frame has ~200k rows and seed numbers are up to 300k per segment, using the approach below would take several hours on my current machine.
#Example data.frame
df <- data.frame(Density=c(0,0,0,3,0,120,300,120,0,0))
#Example SeedRain vector
SeedRainDists <- c(7.72,-43.11,16.80,-9.04,1.22,0.70,16.48,75.06,42.64,-5.50)
#Calculating the number of seeds from plant densities
df$Seeds <- df$Density * 500
#Applying a probability of reaching the road for every seed
df$SeedsOnRoad <- apply(as.matrix(df$Seeds), 1, function(x){
  SeedsOut <- 0
  if(x > 0){
    #Summing up the number of seeds reaching a certain distance
    for(i in 1:x){
      SeedsOut <- SeedsOut +
        ifelse(sample(SeedRainDists, 1, replace=T) > 40, 1, 0)
    }
  }
  return(SeedsOut)
})
If someone could give me a hint as to how the loop could be replaced by vectorization, or maybe how the data could be organized better in the first place to improve performance, I would be very grateful!
Edit: Roland's answer showed that I may have oversimplified the question. In the for-loop I draw a random value from a distribution of distances recorded by another author (that's why I can't supply the data here). I've added an example vector with likely values for the SeedRain distances.
This should do about the same simulation:
df$SeedsOnRoad2 <- sapply(df$Seeds, function(x){
  rbinom(1, x, 0.6)
})
# Density Seeds SeedsOnRoad SeedsOnRoad2
#1 0 0 0 0
#2 0 0 0 0
#3 0 0 0 0
#4 3 1500 892 877
#5 0 0 0 0
#6 120 60000 36048 36158
#7 300 150000 90031 89875
#8 120 60000 35985 35773
#9 0 0 0 0
#10 0 0 0 0
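The shortcut works because each seed independently exceeds the distance threshold of 40 with some fixed probability p, so the per-row total is a single Binomial(Seeds, p) draw; with the example vector above, p would be the fraction of SeedRainDists that exceed 40. A rough Python/NumPy sketch of the same idea, purely for illustration (the names and data are assumptions, not part of the R answer):
import numpy as np

rng = np.random.default_rng()
seeds = np.array([0, 0, 0, 1500, 0, 60000, 150000, 60000, 0, 0])
seed_rain_dists = np.array([7.72, -43.11, 16.80, -9.04, 1.22,
                            0.70, 16.48, 75.06, 42.64, -5.50])

p = np.mean(seed_rain_dists > 40)       # chance a single seed travels far enough
seeds_on_road = rng.binomial(seeds, p)  # one binomial draw per road segment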
One option is to generate the sample() for all Seeds per row of df in a single go.
Using set.seed(1) before your loop-based code I get:
> df
Density Seeds SeedsOnRoad
1 0 0 0
2 0 0 0
3 0 0 0
4 3 1500 289
5 0 0 0
6 120 60000 12044
7 300 150000 29984
8 120 60000 12079
9 0 0 0
10 0 0 0
I get the same answer in a fraction of the time if I do:
set.seed(1)
tmp <- sapply(df$Seeds,
              function(x) sum(sample(SeedRainDists, x, replace = TRUE) > 40))
> tmp
[1] 0 0 0 289 0 12044 29984 12079 0 0
For comparison:
df <- transform(df, GavSeedsOnRoad = tmp)
df
> df
Density Seeds SeedsOnRoad GavSeedsOnRoad
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0
4 3 1500 289 289
5 0 0 0 0
6 120 60000 12044 12044
7 300 150000 29984 29984
8 120 60000 12079 12079
9 0 0 0 0
10 0 0 0 0
The points to note here are:
try to avoid calling a function repeatedly in a loop if the function is vectorised or can generate the entire end result in a single call. Here you were calling sample() Seeds times for each row of df, each call returning a single sample from SeedRainDists. I instead do a single sample() call asking for a sample of size Seeds for each row of df - hence I call sample() 10 times, while your code called it 271500 times.
even if you have to repeatedly call a function in a loop, remove from the loop anything that is vectorised that could be done on the entire result after the loop is done. An example here is your accumulating of SeedsOut, which is calling +() a large number of times.
Better would have been to collect each SeedsOut in a vector, and then sum() that vector outside the loop. E.g.
SeedsOut <- numeric(length = x)
for(i in seq_len(x)) {
  SeedsOut[i] <- ifelse(sample(SeedRainDists, 1, replace=TRUE) > 40, 1, 0)
}
sum(SeedsOut)
Note that R treats a logical as if it were numeric 0s or 1s where used in any mathematical function. Hence
sum(ifelse(sample(SeedRainDists, 100, replace=TRUE)>40,1,0))
and
sum(sample(SeedRainDists, 100, replace=TRUE)>40)
would give the same result if run with the same set.seed().
There may be a fancier way of doing the sampling requiring fewer calls to sample() (and there is: sample(SeedRainDists, sum(Seeds), replace = TRUE) > 40, but then you need to take care of selecting the right elements of that vector for each row of df - not hard, just a little cumbersome; see the sketch below), but what I show may be efficient enough?
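For what it's worth, here is how that single-big-sample idea could look in Python/NumPy (an illustrative sketch only; the original answer is in R): draw once for every seed of every row, then count the hits per row.
import numpy as np

rng = np.random.default_rng()
seeds = np.array([0, 0, 0, 1500, 0, 60000, 150000, 60000, 0, 0])
seed_rain_dists = np.array([7.72, -43.11, 16.80, -9.04, 1.22,
                            0.70, 16.48, 75.06, 42.64, -5.50])

# one draw per seed across all rows, flagged if it exceeds 40
hits = rng.choice(seed_rain_dists, size=seeds.sum(), replace=True) > 40
# label each draw with its row, then count hits per row (zero-seed rows stay 0)
rows = np.repeat(np.arange(seeds.size), seeds)
seeds_on_road = np.bincount(rows, weights=hits, minlength=seeds.size).astype(int)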
