I'm trying to implement something in sage and I keep getting the following error:
Error in lines 38-53
Traceback (most recent call last):
  File "/projects/42e45a19-7a43-4495-8dcd-353625dfce66/.sagemathcloud/sage_server.py", line 879, in execute
    exec compile(block+'\n', '', 'single') in namespace, locals
  File "", line 13, in <module>
  File "sage/modules/vector_integer_dense.pyx", line 185, in sage.modules.vector_integer_dense.Vector_integer_dense.__setitem__ (build/cythonized/sage/modules/vector_integer_dense.c:3700)
    raise ValueError("vector is immutable; please change a copy instead (use copy())")
ValueError: vector is immutable; please change a copy instead (use copy())
I have pinpointed the exact location (the line between print 'marker 1' and print 'marker 2' in the while-loop at the end; see the code below), and it seems that I'm not allowed to change the entries of the matrix weights (which I defined before the loop) from inside the loop. The error message says to use the copy() function, but I don't see how that would solve my problem: I would only be making a local copy, and the next iteration of the loop wouldn't get the changed values, right? So does anyone know how to define this matrix so that I can change it from inside the loop? If it's not possible, can someone explain why?
Thanks for your help.
Code:
m = 3 # Dimension of inputs to nodes
n = 1 # Dimension of output
v = 4 # Number of training vectors
r = 0.1 # Learning Rate
T = 10 # Number of iterations
# Input static biases, i.e. the sum must be smaller than this vector. For dynamic biases, set this vector to 0, increase m by one and set xi[0]=-1 for all inputs i (and start the actual input at xi[1])
bias = list(var('s_%d' % i) for i in range(n))
bias[0] = 0.5
# Input the training vectors and targets
x0 = list(var('s_%d' % i) for i in range(m))
x0[0]=1
x0[1]=0
x0[2]=0
target00=1
x1 = list(var('s_%d' % i) for i in range(m))
x1[0]=1
x1[1]=0
x1[2]=1
target10=1
x2 = list(var('s_%d' % i) for i in range(m))
x2[0]=1
x2[1]=1
x2[2]=0
target20=1
x3 = list(var('s_%d' % i) for i in range(m))
x3[0]=1
x3[1]=1
x3[2]=1
target30=0
targets = matrix(v,n,[[target00],[target10],[target20],[target30]])
g=matrix([x0,x1,x2,x3])
inputs=copy(g)
# Initialize weights, or leave at 0 (i.e.,change nothing)
weights=matrix(m,n)
print weights.transpose()
z = 0
a = list(var('s_%d' % j) for j in range(n))
while(z<T):
    Q = inputs*weights
    S = copy(Q)
    for i in range(v):
        y = copy(a)
        for j in range(n):
            if S[i][j] > bias[j]:
                y[j] = 1
            else:
                y[j] = 0
            for k in range(m):
                print 'marker 1'
                weights[k][j] = weights[k][j] + r*(targets[i][j]-y[j])*inputs[i][k]
                print 'marker 2'
    print weights.transpose()
    z += 1
This is a basic property of Sage matrices: a row extracted from a matrix, such as M[0], is a vector, and such vectors are immutable by default.
sage: M = matrix([[2,3],[3,2]])
sage: M[0][1] = 5
---------------------------------------------------------------------------
<snip>
ValueError: vector is immutable; please change a copy instead (use copy())
Notice that the error is that the vector is immutable. That is because you have taken the 0 row, which is a vector (immutable, hashable I guess, etc.).
But if you use the following syntax, you should be golden.
sage: M[0,1] = 5
sage: M
[2 5]
[3 2]
Here you are modifying the element directly. Hope this helps, enjoy Sage!
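Applied to your loop, that means replacing the chained indexing in the update line with comma indexing (and likewise for targets and inputs, which you currently also index with chained brackets):
weights[k,j] = weights[k,j] + r*(targets[i,j] - y[j])*inputs[i,k]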
Related
I'd really appreciate some help on parallelizing the following pseudo code in Julia (and I do apologize in advance for the long post):
P, Q # both K by N matrices, K = num features and N = num samples
X, Y # K*4 by N and K*2 by N matrices
tempX, tempY # column vectors of size K*4 and K*2
ndata # a dict from parsing a .m file, to be used by a solver with JuMP and Ipopt
# serial version
for i = 1:N
    ndata[P] = P[:, i] # technically requires a loop from 1 to K since the dict has to be indexed element-wise
    ndata[Q] = Q[:, i]
    ndata_A = run_solver_A(ndata) # with a third-party package and JuMP, Ipopt
    ndata_B = run_solver_B(ndata)
    kX = 1; kY = 1
    for j = 1:K
        tempX[kX:kX+3] = [ndata_A[j][a], ndata_A[j][b], P[j, i], Q[j, i]]
        tempY[kY:kY+1] = [ndata_B[j][a], ndata_B[j][b]]
        kX += 4
        kY += 2
    end
    X[:, i] = deepcopy(tempX)
    Y[:, i] = deepcopy(tempY)
end
So obviously, the iterations of this for loop can be executed independently, as long as no column of P and Q is accessed twice and matching columns i of P and Q are accessed together. The only thing I need to be careful about is that columns i of X and Y are correct pairs of tempX and tempY; I don't care as much whether the i = 1, ..., N order is maintained (hopefully that makes sense!).
I read both the official documentation and some online tutorials, and wrote the following with @spawn and fetch that works for the insertion part by replacing the ndata[j][a] etc. with placeholder numbers 1.0 and 180:
using Distributed
addprocs(2)
num_proc = nprocs()
@everywhere function insertPQ(P, Q)
    println(myid())
    data = zeros(4*length(P))
    k = 1
    for i = 1:length(P)
        data[k:k+3] = [1.0, 180., P[i], Q[i]]
        k += 4
    end
    return data
end
P = [0.99, 0.99, 0.99, 0.99]
Q = [-0.01, -0.01, -0.01, -0.01]
for i = 1:5 # grow P and Q to 4 x 32
    global P = hcat(P, (P .- 0.01))
    global Q = hcat(Q, (Q .- 0.01))
end
datas = zeros(16, 32) # serial result
datap = zeros(16, 0)  # parallel result (columns appended with hcat)
@time for i = 1:32
    s = fetch(@spawn insertPQ(P[:, i], Q[:, i]))
    global datap = hcat(datap, s)
end
@time for i = 1:32
    k = 1
    for j = 1:4
        datas[k:k+3, i] = [1.0, 180., P[j, i], Q[j, i]]
        k += 4
    end
end
println(datap == datas)
The above code is fine, but I did notice the output was consistently worker 2->3->4->5->2... and it was much slower than the serial case (I'm testing this on my laptop with only 4 cores, but eventually I'll run it on a cluster). When I added run_solver_A/B into insertPQ(), it took so long to run that I had to stop it.
As for pmap(), I couldn't figure out how to pass an entire vector to the function. I probably misunderstood the documentation, but "Transform collection c by applying f to each element using available workers and tasks" sounds like I can only do this element-wise? That can't be it. I went to a Julia intro session last week and asked the lecturer about this; he said I should use pmap, and I've been trying to make it work since.
So, how can I parallelize my original pseudo code? Any help or suggestion is greatly appreciated!
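For reference, the shape I'm after is just "map a per-column worker over the column indices, then reassemble the results in order". In Python I would write it like the hypothetical sketch below (insert_pq and the sizes are placeholders); what I need is the Julia/pmap equivalent:
import numpy as np
from multiprocessing import Pool

def insert_pq(args): # args bundles (column index, P column, Q column)
    i, p, q = args
    out = np.empty(4 * len(p))
    out[0::4] = 1.0 # placeholder values, as in insertPQ above
    out[1::4] = 180.0
    out[2::4] = p
    out[3::4] = q
    return i, out

if __name__ == "__main__":
    P = np.random.rand(4, 32); Q = np.random.rand(4, 32)
    with Pool(2) as pool:
        results = pool.map(insert_pq, [(i, P[:, i], Q[:, i]) for i in range(32)])
    X = np.empty((16, 32))
    for i, col in results: # the returned index restores column order
        X[:, i] = col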
I'm trying to convert a base-10 integer k into a base-q integer, but not in the standard way. Firstly, I'd like my result to be a vector (or a string 'a,b,c,...' that can be converted to a vector, but not 'abc...'). Most importantly, I'd like each 'digit' to be in base-10. As an example, suppose I have the number 23 (in base-10) and I want to convert it to base-12. This would be 1B in the standard 1,...,9,A,B notation; however, I want it to come out as [1, 11]. I'm only interested in numbers k with 0 \le k \le q^n - 1, where n is fixed in advance.
Put another way, I wish to find coefficients a(r) such that
k = \sum_{r=0}^{n-1} a(r) q^r
where each a(r) is in base-10. (Note that 0 \le a(r) \le q-1.)
I know I could do this with a for-loop -- I'm struggling to get the exact formula at the moment! -- but I want to do it vectorised, or with a fast internal function, because I want to be able to take n large. (Of course, I could change such a loop to a parfor-loop or do it on the GPU; these aren't practical for my current situation, so I'd prefer a more direct version.)
I've looked at stuff like dec2base, num2str, str2num, base2dec and so on, but with no luck. Any suggestion would be most appreciated.
Regarding speed and space, any preallocation for integers in the range [0, q-1] or similar would also be good.
To be clear, I am looking for an algorithm that works for any q and n, converting any number in the range [0,q^n - 1].
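Put in code terms, the coefficients above are a(r) = floor(k / q^r) mod q. To illustrate the output shape I want (sketched here in Python/NumPy purely for illustration; I'm working in MATLAB):
import numpy as np

def to_base(k, q, n):
    powers = q ** np.arange(n - 1, -1, -1) # most significant digit first
    return (k // powers) % q # base-10 digits as a vector

print(to_base(23, 12, 2)) # -> [ 1 11]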
You can use dec2base and replace the characters by numbers:
x = 23;
b = 12;
[~, result] = ismember(dec2base(x,b), ['0':'9' 'A':'Z']);
result = result -1;
gives
>> result
result =
1 11
This works for base up to 36 only, due to dec2base limitations.
For any base (possibly above 36) you need to do the conversion manually. I once wrote a base2base function to do that (it's essentially long division). The number should be input as a vector of digits in the origin base, so you need dec2base(...,10) first. For example:
x = 125;
b = 6;
result = base2base(dec2base(x,10), '0':'9', b); % origin number, origin base, target base
gives
result =
3 2 5
Or if you need to specify the number of digits:
x = 125;
b = 6;
d = 5;
result = base2base(dec2base(x,10), '0':'9', b, d)
result =
0 0 3 2 5
EDIT (August 15, 2017): Corrected two bugs: handling of input consisting of all "zeros" (thanks to @Sanchises for noticing), and properly left-padding the output with "zeros" if needed.
function Z = base2base(varargin)
% Three inputs: origin array, origin base, target base
% If a base is specified by a number, say b, the digits are [0,1,...,b-1].
% The base can also be directly an array with the digits
% Fourth input, optional: how many digits the output should have as a
% minimum (padding with leading zeros, i.e. with the first digit)
% Non-valid digits in origin array are discarded.
% It works with cell arrays. In this case it gives a matrix in which each
% row is padded with leading zeros if needed
% If the base is specified as a number, digits are numbers, not
% characters as in `dec2base` and `base2dec`
if ~iscell(varargin{1}), varargin{1} = varargin(1); end
if numel(varargin{2})>1, ax = varargin{2}; bx = numel(ax); else bx = varargin{2}; ax = 0:bx-1; end
if numel(varargin{3})>1, az = varargin{3}; bz = numel(az); else bz = varargin{3}; az = 0:bz-1; end
Z = cell(size(varargin{1}));
for c = 1:numel(varargin{1})
    x = varargin{1}{c}; [valid, x] = ismember(x, ax); x = x(valid)-1;
    if ~isempty(x) && ~any(x) % Non-empty input, all zeros
        z = 0;
    elseif ~isempty(x) % Non-empty input, at least a nonzero
        z = NaN(1, ceil(numel(x)*log2(bx)/log2(bz))); done_outer = false;
        n = 0;
        while ~done_outer
            n = n + 1;
            x = [0 x(find(x,1):end)];
            y = NaN(size(x)); done_inner = false;
            m = 0;
            while ~done_inner
                m = m + 1;
                t = x(1)*bx + x(2);
                r = mod(t, bz); q = (t-r)/bz;
                y(m) = q; x = [r x(3:end)];
                done_inner = numel(x) < 2;
            end
            y = y(1:m);
            z(n) = r; x = y; done_outer = ~any(x);
        end
        z = z(n:-1:1);
    else % Empty input
        z = []; % output will be empty (unless user has required left-padding) with the
                % appropriate class
    end
    if numel(varargin)>=4 && numel(z)<varargin{4}, z = [zeros(1, varargin{4}-numel(z)) z]; end
    % left-pad if required by user
    Z{c} = z;
end
L = max(cellfun(@numel, Z));
Z = cellfun(@(x) [zeros(1, L-numel(x)) x], Z, 'uniformoutput', false); % left-pad so that
                                                                      % result will be a matrix
Z = vertcat(Z{:});
Z = az(Z+1);
Matlab's internal dec2base command contains essentially what you are asking for.
It actually creates an array of base-10 digits before they are converted to a character array of '0'-'9' and 'A'-'Z' which is the reason for its limitation to bases <= 36.
So removing the final character-conversion step from dec2base, and modifying the error checking accordingly, gives the function dec2basevect you were asking for.
The result will be a base-10 vector and you are no longer limited to bases <= 36. The most significant digit will be in index one of this vector. If you need it the other way round, i.e. least significant digit in index one, just do a fliplr to the result.
Due to copyright by MathWorks, you have to make the necessary modifications to dec2base on your own.
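The modification amounts to stopping after the repeated-division step. As a sketch of that step only (written in Python for illustration, since the MATLAB source cannot be reproduced here):
def dec2basevect(k, q):
    # peel off the least significant base-q digit with divmod each round
    digits = []
    while True:
        k, r = divmod(k, q)
        digits.append(r)
        if k == 0:
            break
    return digits[::-1] # most significant digit first, as in dec2base

print(dec2basevect(23, 12)) # -> [1, 11]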
I have a variable, between 0 and 1, which should dictate the likelihood that a second variable, a random number between 0 and 1, is greater than 0.5. In other words, if I were to generate the second variable 1000 times, the average should be approximately equal to the first variable's value. How do I write this code?
Oh, and the second variable should always be capable of producing either 0 or 1 in any condition, just more or less likely depending on the value of the first variable. Here is a link to a graph which models approximately how I would like the program to behave. Each equation represents a separate value for the first variable.
You have a variable p and you are looking for a mapping function f(x) that maps uniform random rolls x in [0, 1] to the same interval [0, 1] such that the expected value, i.e. the average over all rolls, is p.
You have chosen the function prototype
f(x) = pow(x, c)
where c must be chosen appropriately. If x is uniformly distributed in [0, 1], the average value is:
int(f(x) dx, [0, 1]) == p
With the integral:
int(pow(x, c) dx) == pow(x, c + 1) / (c + 1) + K
evaluated over [0, 1], the condition becomes 1 / (c + 1) == p, which gives:
c = 1/p - 1
A different approach is to make p the median value of the distribution, such that half of the rolls fall below p, the other half above p. This yields a different distribution. (I am aware that you didn't ask for that.) Now, we have to satisfy the condition:
f(0.5) == pow(0.5, c) == p
which yields:
c = log(p) / log(0.5)
With the current function prototype, you cannot satisfy both requirements. Your function is also asymmetric (f(x, p) != f(1-x, 1-p)).
Python functions below:
import math
import random

def medianrand(p):
    """Random number between 0 and 1 whose median is p"""
    c = math.log(p) / math.log(0.5)
    return math.pow(random.random(), c)

def averagerand(p):
    """Random number between 0 and 1 whose expected value is p"""
    c = 1/p - 1
    return math.pow(random.random(), c)
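A quick sanity check of the expected value (the exact numbers will vary from run to run):
samples = [averagerand(0.3) for _ in range(100000)]
print(sum(samples) / len(samples)) # roughly 0.3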
You can do this by using a dummy variable. First set the first variable to a value between 0 and 1. Then generate a random dummy between 0 and 1. If this dummy is bigger than the first variable, you generate a random number between 0 and 0.5; otherwise you generate a number between 0.5 and 1.
In pseudocode:
real a = 0.7
real total = 0.0
for i between 0 and 1000 begin
    real dummy = rand(0,1)
    real b
    if dummy > a then
        b = rand(0,0.5)
    else
        b = rand(0.5,1)
    end if
    total = total + b
end for
real avg = total / 1000
Please note that this algorithm will generate average values between 0.25 and 0.75. For a = 1 it will only generate random values between 0.5 and 1, which should average to 0.75. For a=0 it will generate only random numbers between 0 and 0.5, which should average to 0.25.
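A direct Python translation of the pseudocode, for convenience (random.uniform is the only assumption beyond the pseudocode):
import random

a = 0.7
total = 0.0
for _ in range(1000):
    dummy = random.random()
    if dummy > a:
        b = random.uniform(0.0, 0.5)
    else:
        b = random.uniform(0.5, 1.0)
    total += b
avg = total / 1000 # approaches 0.25 + a/2, i.e. 0.6 for a = 0.7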
I've made a sort of pseudo-solution to this problem, which I think is acceptable.
Here is the algorithm I made:
a = 0.2 # variable one
b = 0 # variable two
b = random.random()
b = b^(1/(2^(4*a-1)))
It doesn't actually produce the average results that I wanted, but it's close enough for my purposes.
Edit: Here's a graph I made that consists of a large number of datapoints I generated with a Python script using this algorithm:
import random
mod = 6
div = 100
for z in xrange(div):
    s = 0
    for i in xrange(100000):
        a = (z+1)/float(div) # variable one
        b = random.random() # variable two
        c = b**(1/(2**((mod*a*2)-mod)))
        s += c
    print str((z+1)/float(div)) + "\t" + str(round(s/100000.0, 3))
Each point in the table is the result of 100000 randomly generated points from the algorithm; their x positions being the a value given, and their y positions being their average. Ideally they would fit to a straight line of y = x, but as you can see they fit closer to an arctan equation. I'm trying to mess around with the algorithm so that the averages fit the line, but I haven't had much luck as of yet.
I have an m x n array a, where m > 1e6 and n <= 5.
I have functions F and G, which are composed like this: F(u, G(u, t)). u is a 1 x n array, t is a scalar, and F and G return 1 x n arrays.
I need to evaluate each row of a with F, using the previously evaluated row as the u argument for the next evaluation. I need to make m such evaluations.
This has to be really fast. I was previously impressed by scitools.std StringFunction evaluation for a whole array, but this problem requires using the previously calculated array as an argument in calculating the next. I don't know if StringFunction can do this.
For example:
from numpy import zeros, asarray, cos

a = zeros((1000000, 4))
a[0] = asarray([1., 69., 3., 4.1])
# A is a float defined elsewhere; h is a function which accepts a float as its
# argument and returns an arbitrary float. h is defined elsewhere.
def G(u, t):
    return asarray([u[0], u[1]*A, cos(u[2]), t*h(u[3])])

def F(u, t):
    return u + G(u, t)

dt = 1E-6
for i in range(1, 1000000):
    a[i] = F(a[i-1], i*dt)
The problem with the above code is that it is slow as hell; I need these calculations done in milliseconds.
How can I do what I want?
Thank you for your time.
Kind regards,
Marius
This sort of thing is very difficult to do in numpy. If we look at the problem column by column, we see a few simpler solutions.
a[:,0] is very easy:
import numpy as np

col0 = np.ones(1000)*2
col0[0] = 1 # or whatever start value; the question uses a[0,0] = 1
np.cumprod(col0, out=col0)
np.allclose(col0, a[:1000,0])
True
As mentioned earlier this will overflow very quickly. a[:,1] can be done much along the same lines.
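For a[:,1], each step multiplies the previous value by (1 + A), so the same cumprod trick applies (a sketch; A and the 69. start value come from the question):
col1 = np.ones(1000)*(1 + A)
col1[0] = 69.
np.cumprod(col1, out=col1)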
I do not believe there is a way to do the next two columns inside numpy alone quickly. We can turn to numba for this:
from numba import autojit

def python_loop(start, count):
    out = np.zeros((count), dtype=np.double)
    out[0] = start
    for x in xrange(count-1):
        out[x+1] = out[x] + np.cos(out[x])
    return out

numba_loop = autojit(python_loop)
np.allclose(numba_loop(3,1000), a[:1000,2])
True
%timeit python_loop(3,1000000)
1 loops, best of 3: 4.14 s per loop
%timeit numba_loop(3,1000000)
1 loops, best of 3: 42.5 ms per loop
Although it's worth pointing out that this converges to pi/2 very, very quickly, so there is little point in calculating this recursion past ~20 values for any start value. The following returns the exact same answer to double precision; I didn't bother finding the exact cutoff, but it is much less than 50:
%timeit tmp = np.empty((1000000));
tmp[:50] = numba_loop(3,50);
tmp[50:] = np.pi/2
100 loops, best of 3: 2.25 ms per loop
You can do something similar with the fourth column. Of course you can autojit all of the functions, but this gives you several different options to try out depending on numba usage:
Use cumprod for the first two columns
Use an approximation for column 3 (and possible 4) where only the first few iterations are calculated
Implement columns 3 and 4 in numba using autojit
Wrap everything inside of an autojit loop (the best option)
The way you have presented this, all rows past ~200 will be either np.inf or np.pi/2. Exploit this.
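For the last option, wrapping the whole recursion in one compiled loop might look like the sketch below (using numba's current njit decorator rather than the older autojit; the last term is a placeholder, since h is only defined elsewhere in the question):
import numpy as np
from numba import njit

@njit
def evolve(a, A, dt):
    # fills rows 1..m-1 in place: a[i] = a[i-1] + G(a[i-1], i*dt), per column
    for i in range(1, a.shape[0]):
        t = i * dt
        a[i, 0] = a[i-1, 0] + a[i-1, 0]          # u[0] + u[0]
        a[i, 1] = a[i-1, 1] + a[i-1, 1] * A      # u[1] + u[1]*A
        a[i, 2] = a[i-1, 2] + np.cos(a[i-1, 2])  # u[2] + cos(u[2])
        a[i, 3] = a[i-1, 3] + t * a[i-1, 3]**2   # placeholder for t*h(u[3])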
Slightly faster: your first column is basically 2^n, and calculating 2^n for n up to 1000000 is going to overflow; the second column is even worse.
def calc(arr, t0=1E-6):
    u = arr[0]
    dt = 1E-6
    h = lambda x: np.random.random(1)*50.0 # placeholder for the question's h
    def firstColGen(uStart):
        u = uStart
        while True:
            u += u
            yield u
    def secondColGen(uStart, A):
        u = uStart
        while True:
            u += u*A
            yield u
    def thirdColGen(uStart):
        u = uStart
        while True:
            u += np.cos(u)
            yield u
    def fourthColGen(uStart, h, t0, dt):
        u = uStart
        t = t0
        while True:
            u += h(u) * dt
            t += dt
            yield u
    first = firstColGen(u[0])
    second = secondColGen(u[1], A) # A as defined in the question
    third = thirdColGen(u[2])
    fourth = fourthColGen(u[3], h, t0, dt)
    for i in xrange(1, len(arr)):
        arr[i] = [first.next(), second.next(), third.next(), fourth.next()]
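Hypothetical usage, assuming numpy is imported as np and A is the constant from the question:
import numpy as np
A = 0.5 # placeholder; use the A from your problem
a = np.zeros((1000000, 4))
a[0] = [1., 69., 3., 4.1]
calc(a)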
I have been doing linear programming problems in my class by graphing them but I would like to know how to write a program for a particular problem to solve it for me. If there are too many variables or constraints I could never do this by graphing.
Example problem, maximize 5x + 3y with constraints:
5x - 2y >= 0
x + y <= 7
x <= 5
x >= 0
y >= 0
I graphed this and got a feasible region with 3 corners. x=5, y=2 is the optimal point.
How do I turn this into code? I know of the simplex method. And very importantly, will all LP problems be coded in the same structure? Would brute force work?
There are quite a number of Simplex Implementations that you will find if you search.
In addition to the one mentioned in the comment (Numerical Recipes in C),
you can also find:
Google's own Simplex-Solver
Then there's COIN-OR
GNU has its own GLPK
If you want a C++ implementation, this one in Google Code is actually accessible.
There are many implementations in R, including the boot package. (In R, you can see the implementation of a function by typing its name without the parentheses.)
To address your other two questions:
Will all LPs be coded the same way? Yes, a generic LP solver can be written to load and solve any LP. (There are industry-standard formats for specifying LPs, such as MPS and .lp.)
Would brute force work? Keep in mind that many companies and big organizations have spent a long time fine-tuning their solvers. There are LPs with interesting properties that many solvers try to exploit. Also, certain computations can be done in parallel. Brute-force enumeration is exponential in the problem size, so at some large number of variables/constraints it won't work.
Hope that helps.
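To make the first question concrete, here is a sketch of your example solved with SciPy's linprog (this assumes SciPy is available; linprog minimizes, so the objective is negated, and the ">=" row is negated into a "<=" row):
from scipy.optimize import linprog

c = [-5, -3]          # maximize 5x + 3y  ->  minimize -5x - 3y
A_ub = [[-5, 2],      # 5x - 2y >= 0  ->  -5x + 2y <= 0
        [ 1, 1],      # x + y <= 7
        [ 1, 0]]      # x <= 5
b_ub = [0, 7, 5]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun) # expected: [5. 2.] and 31.0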
I wrote this in MATLAB yesterday; it could easily be transcribed to C++ if you use the Eigen library or write your own matrix class using a std::vector of std::vectors:
function [x, fval] = mySimplex(fun, A, B, lb, ub)
% Example parameters to show that the function actually works
% sample set 1 (works for this data set)
% fun = [8 10 7];
% A = [1 3 2; 1 5 1];
% B = [10; 8];
% lb = [0; 0; 0];
% ub = [inf; inf; inf];
% sample set 2 (works for this data set)
fun = [7 8 10];
A = [2 3 2; 1 1 2];
B = [1000; 800];
lb = [0; 0; 0];
ub = [inf; inf; inf];

% generate a new slack variable for every row of A
numSlackVars = size(A,1); % need a new slack variable for every row of A

% Set up tableau to store algorithm data
tableau = [A; -fun];
tableau = [tableau, eye(numSlackVars + 1)];
lastCol = [B; 0];
tableau = [tableau, lastCol];

% for convenience's sake, assign the following:
numRows = size(tableau,1);
numCols = size(tableau,2);

% do simplex algorithm
% step 0: find num of negative entries in bottom row of tableau
numNeg = 0; % the number of negative entries in bottom row
for i = 1:numCols
    if(tableau(numRows,i) < 0)
        numNeg = numNeg + 1;
    end
end

% Remark: the number of negatives is exactly the number of iterations needed in the
% simplex algorithm
for iterations = 1:numNeg
    % step 1: find minimum value in last row
    minVal = 10000; % some big number
    minCol = 1; % start by assuming min value is the first element
    for i = 1:numCols
        if(tableau(numRows, i) < minVal)
            minVal = tableau(numRows, i);
            minCol = i; % update the index corresponding to the min element
        end
    end

    % step 2: find corresponding ratio vector in pivot column
    vectorRatio = zeros(numRows - 1, 1);
    for i = 1:(numRows - 1) % the ratio vector has numRows - 1 entries
        vectorRatio(i, 1) = tableau(i, numCols) ./ tableau(i, minCol);
    end

    % step 3: determine pivot element by finding minimum element in ratio vector
    minVal = 10000; % some big number
    minRatio = 1; % holds the row index with the minimum ratio
    for i = 1:numRows-1
        if(vectorRatio(i,1) < minVal)
            minVal = vectorRatio(i,1);
            minRatio = i;
        end
    end

    % step 4: assign pivot element
    pivotElement = tableau(minRatio, minCol);

    % step 5: normalize the pivot row
    tableau(minRatio, :) = tableau(minRatio, :) * (1/pivotElement);

    % step 6: perform pivot operation on the other rows
    for i = 1:size(vectorRatio,1)+1 % do last row last
        if(i ~= minRatio) % skip over the pivot row here
            tableau(i, :) = -tableau(i,minCol)*tableau(minRatio, :) + tableau(i,:);
        end
    end
end

% Now we can interpret the algo tableau
numVars = size(A,2); % the number of cols of A is the number of variables
x = zeros(numVars, 1); % preallocate the solution vector

% Check for basicity
for col = 1:numVars
    count_zero = 0;
    count_one = 0;
    for row = 1:size(tableau,1)
        if(abs(tableau(row,col)) < 1e-2)
            count_zero = count_zero + 1;
        elseif(abs(tableau(row,col) - 1) < 1e-2)
            count_one = count_one + 1;
            stored_row = row; % remember this row for later use
        end
    end
    if(count_zero == (size(tableau,1) - 1) && count_one == 1) % this is the case where it is basic
        x(col,1) = tableau(stored_row, numCols);
    else
        x(col,1) = 0; % this is the case where it is not basic
    end
end

% find the objective value at the optimal solution
fval = x(1,1) * fun(1,1); % initialize with the first term
for i = 2:numVars
    fval = fval + x(i,1) * fun(1,i);
end
end