I have the following Python code:
from z3 import *
import time

s = Solver()
p = Array("p", BitVecSort(11), BitVecSort(8))
for m in range(200):
    start_time = time.time()
    for i in range(20):
        # Constrain each of the 20 cells in this block to one of seven residues mod 72.
        s.add(Or([p[m*20 + i] % 72 == BitVecVal(x, 8) for x in [k * 6 for k in [0, 2, 4, 5, 7, 9, 11]]]))
    s.check()
    model = s.model()
    pre_push = time.time()
    s.push()
    push_time = time.time() - pre_push
    if m % 5 == 0:
        print((m, time.time() - start_time - push_time))
As can be seen from the code, during every cycle m I put constraints on entries m*20 through m*20+19 of my array. Therefore, no added constraint ever involves a variable from a previous value of m. However, even disregarding the time it takes to do s.push(), z3 still slows down enormously as the rounds progress:
Output:
(0, 0.23836421966552734)
(5, 1.3699274063110352)
(10, 4.132023096084595)
(15, 3.884359836578369)
(20, 4.81259298324585)
(25, 7.442332029342651)
(30, 12.25448989868164)
(35, 15.96577787399292)
(40, 16.90854024887085)
(45, 22.725850105285645)
(50, 29.525628328323364)
(55, 23.494187355041504)
(60, 31.887953996658325)
My intuition would be that push saves the state of the previous model, and each round adds the same number of constraints on previously unconstrained variables, so aside from the time it takes to push, the time for each cycle should be roughly constant. Why am I getting so much slowdown?
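For what it's worth, one way to test that intuition empirically (a diagnostic sketch of mine, not a claim about z3 internals) is to exploit the independence of the blocks and solve each one in a fresh Solver, so that no assertion stack accumulates, then compare the per-round times:

from z3 import *
import time

p = Array("p", BitVecSort(11), BitVecSort(8))
allowed = [BitVecVal(k * 6, 8) for k in [0, 2, 4, 5, 7, 9, 11]]
for m in range(200):
    start_time = time.time()
    s = Solver()  # fresh solver: nothing carried over from earlier blocks
    for i in range(20):
        s.add(Or([p[m * 20 + i] % 72 == v for v in allowed]))
    s.check()
    model = s.model()
    if m % 5 == 0:
        print((m, time.time() - start_time))

If the per-round times stay flat under this variant, the slowdown in the original comes from the growing assertion stack rather than from the constraints of the current block alone.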
I have a loop which generates random samples from the chi-square distribution with different degrees of freedom (df1, df2, df3, df4) and saves them to a cell array:
for k = 1:N
    x{k} = chi2rnd([df1 df2 df3 df4]);
end
Is there any way to do this without any iterations? I tried to use cellfun, but it didn't work.
Here is a vectorized way (method 1):
x = num2cell(chi2rnd(repmat([df1 df2 df3 df4], N, 1), N, 4), 2);
You may also try this method (method 2):
df = [df1 df2 df3 df4];
y = zeros(N, numel(df));
for k = 1:numel(df)
    y(:,k) = chi2rnd(df(k), N, 1);
end
x = num2cell(y, 2);
Timing results for N = 10000 in Octave (you will need to measure the times in MATLAB yourself):
Original solution             : 3.91095 seconds
Vectorized solution (method 1): 0.0691321 seconds
Loop solution (method 2)      : 0.0124869 seconds
I found this problem in the programming forum Ohjelmointiputka:
https://www.ohjelmointiputka.net/postit/tehtava.php?tunnus=ahdruu and
https://www.ohjelmointiputka.net/postit/tehtava.php?tunnus=ahdruu2
Somebody said that there is a solution found by a computer, but I was unable to find a proof.
Prove that there is a matrix with 117 entries, each a digit, such that one can read from it the squares of the numbers 1, 2, ..., 100.
Here "read" means that you fix a starting position and a direction (8 possibilities) and then move in that direction, concatenating digits. For example, if you find the digits 1, 0, 0, 0, 0, 4 consecutively, you have found the integer 100004, which contains the squares of 1, 2, 10, 100 and 20, since you can read off 1, 4, 100, 10000, and 400 (reversed) from that sequence.
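To make the reading rule concrete, here is a small check of my own (an illustration, not part of the task) that lists which of the numbers 1..100 have squares readable, forwards or backwards, from a single line of digits:

def readable_squares(digits, limit=100):
    # A square is readable if its digit string occurs as a contiguous
    # substring of the line, in either direction.
    rev = digits[::-1]
    return sorted(n for n in range(1, limit + 1)
                  if str(n * n) in digits or str(n * n) in rev)

print(readable_squares("100004"))  # -> [1, 2, 10, 20, 100]

In the matrix version, the same test would be applied to every row, column, and diagonal.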
But there are many numbers to be found (100 square numbers, to be precise, or 81 if you remove those contained in another square number, with 312 digits in total) and so few cells in the matrix that the squares must be packed very densely. Finding such a matrix is difficult, at least for me.
I found that if there is such an m×n matrix, we may assume without loss of generality that m <= n. Therefore, the matrix must be of size 1x117, 3x39, or 9x13. But what kind of algorithm will find the matrix?
I have managed to write a program that checks whether a number can be placed on the board. But how can I implement the search algorithm?
# -*- coding: utf-8 -*-

# Step offsets for the 8 reading directions (row step, column step).
DX = [-1, -1, -1, 0, 0, 1, 1, 1]
DY = [-1, 0, 1, -1, 1, -1, 0, 1]

# Returns -1 if the number cannot be placed, otherwise a score for the
# placement: the count of cells where the number reuses a digit already
# on the grid. A bigger value of x is better.
def can_put_on_grid(grid, number, start_x, start_y, direction):
    # Check that the new number lies inside the grid. Note that
    # grid[x][y] indexes row x and column y, so x is bounded by
    # len(grid) and y by len(grid[0]).
    x = 0
    if start_x < 0 or start_x > len(grid) - 1 or start_y < 0 or start_y > len(grid[0]) - 1:
        return -1
    end = end_coordinates(number, start_x, start_y, direction)
    if end[0] < 0 or end[0] > len(grid) - 1 or end[1] < 0 or end[1] > len(grid[0]) - 1:
        return -1
    # Test that the new number does not clash with any previous number.
    for i in range(len(number)):
        cell = grid[start_x + DX[direction] * i][start_y + DY[direction] * i]
        if cell not in ("X", number[i]):
            return -1
        if cell == number[i]:
            x += 1
    return x

# End cell of a number placed at (start_x, start_y) in the given
# direction; this uses the same step tables as can_put_on_grid so the
# two functions cannot disagree about what a direction means.
def end_coordinates(number, start_x, start_y, direction):
    l = len(number)
    return (start_x + DX[direction] * (l - 1), start_y + DY[direction] * (l - 1))

if __name__ == "__main__":
    A = [['X' for x in range(13)] for y in range(9)]
    numbers = [str(i * i) for i in range(1, 101)]
    directions = [0, 1, 2, 3, 4, 5, 6, 7]
    for i in directions:
        C = can_put_on_grid(A, "10000", 3, 5, i)
        if C > -1:
            print("One can put the number on the grid!")
            exit(0)
I also think that brute-force search or best-first search is too slow. There might be a solution using simulated annealing, a genetic algorithm, or a bin-packing heuristic. I also wondered whether one could somehow apply Markov chains to find the grid. Unfortunately, those seem too hard for me to implement at my current skill level.
There is a program for that at https://github.com/minkkilaukku/square-packing/blob/master/sqPackMB.py. Just change M=9, N=13 on lines 20 and 21.
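To illustrate the simulated-annealing idea mentioned in the question, here is a minimal sketch; the state encoding, energy function, and cooling schedule are my own guesses, not the method of the linked program. The state assigns each square a start cell and a direction, the energy counts how many squares fail to fit when written greedily onto a fresh grid, and one move re-places a single random square:

import math
import random

ROWS, COLS = 9, 13
# Step offsets for the 8 reading directions (row step, column step).
DX = [-1, -1, -1, 0, 0, 1, 1, 1]
DY = [-1, 0, 1, -1, 1, -1, 0, 1]

def write_all(numbers, placements):
    # Write each number onto a fresh grid in order; a number whose cells
    # clash with already-written digits, or fall outside the grid, is
    # skipped and counted as a conflict. The energy is the conflict count.
    grid = [['X'] * COLS for _ in range(ROWS)]
    conflicts = 0
    for num, (r, c, d) in zip(numbers, placements):
        cells = [(r + DX[d] * i, c + DY[d] * i) for i in range(len(num))]
        ok = all(0 <= rr < ROWS and 0 <= cc < COLS and grid[rr][cc] in ('X', num[i])
                 for i, (rr, cc) in enumerate(cells))
        if ok:
            for i, (rr, cc) in enumerate(cells):
                grid[rr][cc] = num[i]
        else:
            conflicts += 1
    return grid, conflicts

def anneal(numbers, steps=200000, t0=2.0, cooling=0.99997):
    placements = [(random.randrange(ROWS), random.randrange(COLS), random.randrange(8))
                  for _ in numbers]
    _, energy = write_all(numbers, placements)
    t = t0
    for _ in range(steps):
        # Move: re-place one random number at a random cell and direction.
        k = random.randrange(len(numbers))
        old = placements[k]
        placements[k] = (random.randrange(ROWS), random.randrange(COLS), random.randrange(8))
        _, new_energy = write_all(numbers, placements)
        # Metropolis rule: always accept improvements, sometimes accept worse moves.
        if new_energy <= energy or random.random() < math.exp((energy - new_energy) / t):
            energy = new_energy
        else:
            placements[k] = old
        t *= cooling
        if energy == 0:
            break
    return placements, energy

# Longer squares first, so the hard ones are placed while the grid is empty.
numbers = sorted((str(i * i) for i in range(1, 101)), key=len, reverse=True)
placements, energy = anneal(numbers)
print("squares that failed to fit:", energy)

This recomputes the whole grid after every move, so it is slow; caching the affected cells would be the first optimisation.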
I don't fully understand how the triangle inequality is used to optimise distance calculations in KNN classification.
I have written a Python script following the steps below:
1. Calculate the distance between each training pixel and every other training pixel.
2. For each test sample:
   a. Calculate the distance from the first training sample as dn. This is the current minimum distance.
   b. Calculate the distance from the second training sample (p) as dp.
   c. If dp < dn, assign dn = dp.
   d. For each remaining training sample (c):
      - If the distance dcp between sample c and sample p satisfies dp - dn < dcp < dp + dn, calculate the distance from the test sample to sample c as dp; if dp < dn, assign dn = dp.
      - Otherwise, skip this training sample.
   e. Stop when there are no more training samples.
3. The class to which the nearest sample n belongs is the estimate.
Python Script:
def get_distance(p1=(0, 0), p2=(0, 0)):
    # Manhattan distance on the first two coordinates.
    return abs(p1[0] - p2[0]) + abs(p1[1] - p2[1])

def algorithm(train_set, new_point):
    d_n = get_distance(new_point, train_set[0])
    d_p = get_distance(new_point, train_set[1])
    min_index = 0
    if d_p < d_n:
        d_n = d_p
        min_index = 1
    for c in range(2, len(train_set)):
        dcp = get_distance(train_set[min_index], train_set[c])
        if d_p - d_n < dcp < d_p + d_n:
            d_p = get_distance(new_point, train_set[c])
            if d_p < d_n:
                d_n = d_p
                min_index = c
    print(train_set[min_index], d_n)
train_set = [
    (0, 1, 'A'),
    (1, 1, 'A'),
    (2, 5, 'B'),
    (1, 8, 'A'),
    (5, 3, 'C'),
    (4, 2, 'C'),
    (3, 2, 'A'),
    (1, 7, 'B'),
    (4, 8, 'B'),
    (4, 0, 'A'),
]
for new_point in train_set:
    # Check the distances from points within the training set itself:
    # the minimum distance should be 0; used for validation.
    result_point = min(train_set, key=lambda x: get_distance(x, new_point))
    print(result_point, get_distance(result_point, new_point))
    algorithm(train_set, new_point)
    print('----------')
But it doesn't give the required result for one of the points.
Is my understanding of the optimization wrong?
Thank you in advance for any help.
I have a system of inequalities and constraints:
Let A=[F1,F2,F3,F4,F5,F6] where F1 through F6 are given.
Let B=[a,b,c,d,e,f] where a<=b<=c<=d<=e<=f.
Let C=[u,v,w,x,y,z] where u<=v<=w<=x<=y<=z.
Equation 1: if(a>F1, 1, 0) + if(a>F2, 1, 0) + ... + if(f>F6, 1, 0) > 18
Equation 2: if(u>a, 1, 0) + if(u>b, 1, 0) + ... + if (z>f, 1, 0) > 18
Equation 3: if(F1>u, 1, 0) + if(F1>v, 1, 0) + ... + if(F6>z, 1, 0) > 18
Other constraints: All variables must be integers between 1 and N (N is given).
I wish merely to count the number of integer solutions for my variables (I do not wish to actually enumerate them). I know how to use solvers to handle systems of equations in matrix form, but that usually assumes the relations use = as opposed to >=, >, <, or <=.
Here's a stab at it.
This is horribly inefficient, as I compute the Cartesian product of the two vectors, then compare each tuple combination. This also won't scale past 2 dimensions.
Also, I'm worried this isn't exactly what you are looking for, because I'm solving each equation independently. If you're looking for all the integer values that satisfy a 3-dimensional space bound by the system of inequalities, well, that's a bit of a brain bender for me, albeit very interesting.
Python anyone?
import itertools

# sample data
A = [12, 2, 15, 104, 54, 20]
B = [10, 20, 30, 40, 50, 60]
C = [100, 200, 300, 400, 500, 600]

def eq1():
    # Construct the Cartesian product of the two lists, which yields
    # tuples [(10, 12), (10, 2), (10, 15), ..., (60, 20)].
    # A list comprehension then keeps only the pairs that satisfy the
    # inequality; the count is the length of that list.
    product = itertools.product(B, A)
    return len([Bval for Bval, Aval in product if Bval > Aval])

def eq2():
    product = itertools.product(C, B)
    return len([Cval for Cval, Bval in product if Cval > Bval])

def eq3():
    product = itertools.product(A, C)
    return len([Aval for Aval, Cval in product if Aval > Cval])

print(eq1())
print(eq2())
print(eq3())
This sample data returns:
eq1 : 21
eq2 : 36
eq3 : 1
But I don't know how to combine these answers into a single integer count across all three equations; there's some kind of union that has to happen between the lists.
My sanity test is equation 3, which returns 1, because only Aval = 104 satisfies Aval > Cval, and only for Cval = 100.
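For very small N, the combined count can be brute-forced directly, which at least makes the counting task concrete. A sketch, with made-up F values; the ordering constraints on B and C let us enumerate non-decreasing tuples via combinations_with_replacement instead of all N**6 assignments:

import itertools

N = 6                   # made up; the real N is given
F = [1, 3, 2, 5, 4, 6]  # made-up stand-ins for F1..F6

def count_solutions():
    total = 0
    domain = range(1, N + 1)
    for B in itertools.combinations_with_replacement(domain, 6):
        if sum(b > f for b in B for f in F) <= 18:
            continue  # equation 1 depends only on B
        for C in itertools.combinations_with_replacement(domain, 6):
            if (sum(c > b for c in C for b in B) > 18 and
                    sum(f > c for f in F for c in C) > 18):
                total += 1
    return total

print(count_solutions())

For realistic N this blows up, of course; the point is only to pin down exactly which tuples are being counted.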
I've read a bunch of tutorials about the proper way to generate a logarithmic distribution of tag-cloud weights. Most of them group the tags into steps. This seems somewhat silly to me, so I developed my own algorithm based on what I've read, which dynamically distributes the tag counts along a logarithmic curve between the threshold and the maximum. Here's the essence of it in Python:
from math import log

count = [1, 3, 5, 4, 7, 5, 10, 6]

def logdist(count, threshold=0, maxsize=1.75, minsize=.75):
    countdist = []
    # mincount is either the threshold or the minimum if it's over the threshold
    mincount = threshold < min(count) and min(count) or threshold
    maxcount = max(count)
    spread = maxcount - mincount
    # the slope of the line (rise over run) between (mincount, minsize) and (maxcount, maxsize)
    delta = (maxsize - minsize) / float(spread)
    for c in count:
        logcount = log(c - (mincount - 1)) * (spread + 1) / log(spread + 1)
        size = delta * logcount - (delta - minsize)
        countdist.append({'count': c, 'size': round(size, 3)})
    return countdist
Basically, without the logarithmic calculation of the individual count, it would generate a straight line between the points, (mincount, minsize) and (maxcount, maxsize).
The algorithm approximates the curve between the two points well, but suffers from one drawback: the mincount is a special case, since its shifted value has a logarithm of zero. This means the size of the mincount comes out less than minsize. I've tried cooking up numbers to handle this special case, but can't seem to get it right. Currently I just treat the mincount as a special case and add "or 1" to the logcount line.
Is there a more correct algorithm to draw a curve between the two points?
Update Mar 3: If I'm not mistaken, I am taking the log of the count and then plugging it into a linear equation. To put the description of the special case in other words: in y = ln(x), at x = 1, y = 0. This is what happens at the mincount. But the mincount can't be zero; the tag has not been used 0 times.
Try the code and plug in your own numbers to test. Treating the mincount as a special case is fine by me; I have a feeling it would be easier than whatever the actual solution to this problem is. I just feel like there must be a solution, and that someone has probably already come up with it.
UPDATE Apr 6: A simple Google search turns up many of the tutorials I've read, but this is probably the most complete example of stepped tag clouds.
UPDATE Apr 28: In response to antti.huima's solution: when graphed, the curve that your algorithm creates lies below the line between the two points. I've been trying to juggle the numbers around but still can't seem to come up with a way to flip that curve to the other side of the line. I'm guessing that if the function were changed to some form of logarithm instead of an exponent, it would do exactly what I need. Is that correct? If so, can anyone explain how to achieve this?
Thanks to antti.huima's help, I rethought what I was trying to do.
Taking his method of solving the problem, I want an equation where the logarithm of the mincount is equal to the linear equation between the two points.
weight(MIN) = ln(MIN-(MIN-1)) + min_weight
min_weight = ln(1) + min_weight
While this gives me a good starting point, I need to make it pass through the point (MAX, max_weight). It's going to need a constant:
weight(x) = ln(x-(MIN-1))/K + min_weight
Solving for K we get:
K = ln(MAX-(MIN-1))/(max_weight - min_weight)
So, to put this all back into some python code:
from math import log

count = [1, 3, 5, 4, 7, 5, 10, 6]

def logdist(count, threshold=0, maxsize=1.75, minsize=.75):
    countdist = []
    # mincount is either the threshold or the minimum if it's over the threshold
    mincount = threshold < min(count) and min(count) or threshold
    maxcount = max(count)
    constant = log(maxcount - (mincount - 1)) / (maxsize - minsize)
    for c in count:
        size = log(c - (mincount - 1)) / constant + minsize
        countdist.append({'count': c, 'size': round(size, 3)})
    return countdist
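As a sanity check on the sample data: mincount is 1 there, so constant = ln(10)/1.0 and the formula reduces to log10(c) + 0.75, putting the endpoints exactly at (1, 0.75) and (10, 1.75):

print(logdist(count))
# [{'count': 1, 'size': 0.75}, {'count': 3, 'size': 1.227},
#  {'count': 5, 'size': 1.449}, {'count': 4, 'size': 1.352},
#  {'count': 7, 'size': 1.595}, {'count': 5, 'size': 1.449},
#  {'count': 10, 'size': 1.75}, {'count': 6, 'size': 1.528}]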
Let's begin with your mapping from the logged count to the size. That's the linear mapping you mentioned:
   size
      |
 max  |_____
      |    /
      |   /|
      |  / |
      | /  |
 min  |/   |
      |    |
     /|    |
  0 /_|____|____
      0    a
where min and max are the min and max sizes, and a = log(maxcount) - b. The line is y = mx + c, where x = log(count) - b.
From the graph, we can see that the gradient, m, is (maxsize - minsize)/a.
We need x = 0 at y = minsize, so log(mincount) - b = 0, which gives b = log(mincount).
This leaves us with the following Python:
mincount = min(count)
maxcount = max(count)
xoffset = log(mincount)
gradient = (maxsize - minsize) / (log(maxcount) - log(mincount))
for c in count:
    x = log(c) - xoffset
    size = gradient * x + minsize
If you want to make sure that the minimum count is always at least 1, replace the first line with:
mincount = min(count+[1])
which appends 1 to the count list before taking the min. The same goes for making sure the maxcount is always at least 1. Thus your final code, per the above, is:
from math import log

count = [1, 3, 5, 4, 7, 5, 10, 6]

def logdist(count, maxsize=1.75, minsize=.75):
    countdist = []
    mincount = min(count + [1])
    maxcount = max(count + [1])
    xoffset = log(mincount)
    gradient = (maxsize - minsize) / (log(maxcount) - log(mincount))
    for c in count:
        x = log(c) - xoffset
        size = gradient * x + minsize
        countdist.append({'count': c, 'size': round(size, 3)})
    return countdist
What you have is tags whose counts range from MIN to MAX. The threshold issue can be ignored here, because it amounts to clamping every count below the threshold to the threshold value and taking the minimum and maximum only afterwards.
You want to map the tag counts to "weights" but in a "logarithmic fashion", which basically means (as I understand it) the following. First, the tags with count MAX get max_weight weight (in your example, 1.75):
weight(MAX) = max_weight
Secondly, the tags with the count MIN get min_weight weight (in your example, 0.75):
weight(MIN) = min_weight
Finally, it holds that when your count decreases by 1, the weight is multiplied by a constant K < 1, which indicates the steepness of the curve:
weight(x) = weight(x + 1) * K
Solving this, we get:
weight(x) = weight_max * (K ^ (MAX - x))
Note that with x = MAX, the exponent is zero and the multiplicand on the right becomes 1.
Now we have the extra requirement that weight(MIN) = min_weight, and we can solve:
weight_min = weight_max * (K ^ (MAX - MIN))
from which we get
K ^ (MAX - MIN) = weight_min / weight_max
and taking logarithm on both sides
(MAX - MIN) ln K = ln weight_min - ln weight_max
i.e.
ln K = (ln weight_min - ln weight_max) / (MAX - MIN)
The right hand side is negative as desired, because K < 1. Then
K = exp((ln weight_min - ln weight_max) / (MAX - MIN))
So now you have the formula to calculate K. After this you just apply the formula for any count x between MIN and MAX:
weight(x) = max_weight * (K ^ (MAX - x))
And you are done.
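For concreteness, here is a minimal sketch of this scheme in Python (the function name and structure are mine); with the question's sample counts it reproduces weight(1) = 0.75 and weight(10) = 1.75 exactly:

from math import exp, log

def expdist(count, max_weight=1.75, min_weight=0.75):
    MIN, MAX = min(count), max(count)
    # K = exp((ln min_weight - ln max_weight) / (MAX - MIN)), so 0 < K < 1.
    K = exp((log(min_weight) - log(max_weight)) / (MAX - MIN)) if MAX > MIN else 1.0
    return [{'count': c, 'size': round(max_weight * K ** (MAX - c), 3)}
            for c in count]

print(expdist([1, 3, 5, 4, 7, 5, 10, 6]))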
On a log scale, you just plot the log of the numbers linearly (in other words, pretend you're plotting linearly, but take the log of the numbers to be plotted first).
The zero problem can't be solved analytically; you have to pick a minimum order of magnitude for your scale, and no matter what, you can't ever reach zero. If you want to plot something at zero, your choices are to arbitrarily assign it the minimum order of magnitude of the scale, or to omit it.
I don't have the exact answer, but I think you want to look up Linearizing Exponential Data. Start by calculating the equation of the line passing through the points, then take the log of both sides of that equation.