I'm trying to speed up steps 1-4 in the following code (the rest is setup that will be predetermined in my actual problem).
% Given sizes:
m = 200;
n = 1e8;
% Given vectors:
value_vector = rand(m, 1);
index_vector = randi([0 200], n, 1);
% Objective: determine the values of the output vector based on the indices provided by index_vector,
% which are indices into value_vector (a 0 index means no value is assigned)
% 0. Preallocate
values = zeros(n, 1);
% 1. Remove "0" indices since these won't have values assigned
nonzero_inds = (index_vector ~= 0);
% 2. Examine only nonzero indices
value_inds = index_vector(nonzero_inds);
% 3. Get the values for these indices
nonzero_values = value_vector(value_inds);
% 4. Assign values to output (0 for those with 0 index)
values(nonzero_inds) = nonzero_values;
Here's my analysis of these portions of the code:
1. Necessary since index_vector will contain zeros, which need to be ferreted out. O(n), since it's just a matter of going through the vector one element at a time and checking whether each entry is nonzero.
2. Should be O(n) to go through index_vector and retain the entries flagged as nonzero in the previous step.
3. Should be O(n) since we have to check each nonzero index_vector element, and for each element the access into value_vector is O(1).
4. Should be O(n) to go through each element of nonzero_inds, access the corresponding values index, access the corresponding nonzero_values element, and assign it to the values vector.
The code above takes about 5 seconds to run through steps 1-4 on 4 cores at 3.8 GHz. Do you all have any ideas on how this could be sped up? Thanks.
Wow, I found something really interesting. I saw this link in the "related" section about indexing vectors being inefficient in Matlab sometimes, so I decided to try a for loop. This code ended up being an order of magnitude faster!
for i = 1:n
if index_vector(i) > 0
values(i) = value_vector(index_vector(i));
end
end
EDIT: Another interesting thing, though unfortunately detrimental to my problem: the speed of this solution depends on the proportion of zeros in index_vector. With index_vector = randi([0 200], n, 1); only a small proportion of the values are zero, but with index_vector = randi([0 1], n, 1); approximately half of the values are zero, and then the above for loop is actually an order of magnitude slower. However, using ~= 0 instead of > 0 in the condition speeds the loop back up to a similar order of magnitude. Very interesting and odd behavior.
If you stick to MATLAB and the flow of the algorithm you want, rather than doing this in Fortran or C, here's a small start:
Change randi to rand, round by casting to uint8, and use the > logical operation, which for some reason is faster on my end.
To sum up:
value_vector = rand(m, 1 );
index_vector = uint8(-0.5+201*rand(n,1) );
values = zeros(n, 1);
values(index_vector>0) = value_vector(index_vector(index_vector>0));
This improved things on my end by a factor of 1.6.
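For comparison, the same steps 1-4 collapse to a single masked assignment in numpy (Python shown purely for illustration, 0-based indexing, smaller n and hypothetical variable names; timings will of course differ from MATLAB):
import numpy as np

m, n = 200, 10**6                                      # smaller n than the question, just for illustration
value_vector = np.random.rand(m)
index_vector = np.random.randint(0, m + 1, size=n)     # 0 means "no value assigned"

values = np.zeros(n)
mask = index_vector > 0
values[mask] = value_vector[index_vector[mask] - 1]    # -1 shifts the 1-based indices to 0-based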
I just got the following interview question:
Given a list of float numbers, insert “+”, “-”, “*” or “/” between each consecutive pair of numbers to find the maximum value you can get. For simplicity, assume that all operators are of equal precedence order and evaluation happens from left to right.
Example:
(1, 12, 3) -> 1 + 12 * 3 = 39
If we build a recursive solution, we get an O(4^N) algorithm. I tried to find overlapping subproblems (to increase the efficiency of this algorithm) and wasn't able to find any. The interviewer then told me that there weren't any overlapping subproblems.
How can we detect when there are overlapping subproblems and when there aren't? I spent a lot of time trying to "force" subproblems to appear, and eventually the interviewer told me that there weren't any.
My current solution looks as follows:
def maximumNumber(array, current_value=None):
    if current_value is None:
        current_value = array[0]
        array = array[1:]
    if len(array) == 0:
        return current_value
    return max(
        maximumNumber(array[1:], current_value * array[0]),
        maximumNumber(array[1:], current_value - array[0]),
        maximumNumber(array[1:], current_value / array[0]),
        maximumNumber(array[1:], current_value + array[0])
    )
Looking for "overlapping subproblems" sounds like you're trying to do bottom up dynamic programming. Don't bother with that in an interview. Write the obvious recursive solution. Then memoize. That's the top down approach. It is a lot easier to get working.
You may get challenged on that. Here was my response the last time that I was asked about that.
There are two approaches to dynamic programming, top down and bottom up. The bottom up approach usually uses less memory but is harder to write. Therefore I do the top down recursive/memoize and only go for the bottom up approach if I need the last ounce of performance.
It is a perfectly true answer, and I got hired.
Now you may notice that tutorials about dynamic programming spend more time on bottom up. They often even skip the top down approach. They do that because bottom up is harder. You have to think differently. It does provide more efficient algorithms because you can throw away parts of the data structure that you know you won't use again.
Coming up with a working solution in an interview is hard enough already. Don't make it harder on yourself than you need to.
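Concretely, "write the obvious recursive solution, then memoize" could look something like this — a minimal Python 3 sketch with hypothetical names, not the exact code from the interview; note the cache only pays off when the same (position, running value) state actually recurs:
from functools import lru_cache

def maximum_number(numbers):
    numbers = tuple(numbers)                # freeze the input; the cache keys on (i, current_value)

    @lru_cache(maxsize=None)
    def best(i, current_value):
        # Best reachable value given the running total so far and position i
        if i == len(numbers):
            return current_value
        nxt = numbers[i]
        candidates = [
            best(i + 1, current_value + nxt),
            best(i + 1, current_value - nxt),
            best(i + 1, current_value * nxt),
        ]
        if nxt != 0:                        # skip division by zero
            candidates.append(best(i + 1, current_value / nxt))
        return max(candidates)

    return best(1, numbers[0])

print(maximum_number([1, 12, 3]))           # 39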
EDIT: Here is the DP solution that the interviewer thought didn't exist.
def find_best (floats):
    # current_answers maps each reachable value to the operator path that produced it
    current_answers = {floats[0]: ()}
    floats = floats[1:]
    for f in floats:
        next_answers = {}
        for v, path in current_answers.iteritems():
            next_answers[v + f] = (path, '+')
            next_answers[v * f] = (path, '*')
            next_answers[v - f] = (path, '-')
            if 0 != f:
                next_answers[v / f] = (path, '/')
        current_answers = next_answers
    best_val = max(current_answers.keys())
    return (best_val, current_answers[best_val])
Generally, the overlapping-subproblem approach is one where the problem is broken down into smaller subproblems whose solutions, when combined, solve the big problem. When these subproblems exhibit optimal substructure, DP is a good way to solve it.
The decision about what you do with a new number that you encounter has little to do with the numbers you have already processed, other than accounting for signs, of course.
So I would say this is an overlapping-subproblem solution but not a dynamic programming problem. You could use divide and conquer or even more straightforward recursive methods.
Initially let's forget about negative floats.
Process each new float according to the following rules:
If the new float is less than 1, insert a / before it.
If the new float is more than 1, insert a * before it.
If it is exactly 1, insert a +.
If you see a zero, just don't divide or multiply.
This would solve it for all positive floats.
Now let's handle the case of negative numbers thrown into the mix.
Scan the input once to figure out how many negative numbers you have.
Isolate all the negative numbers in a list, and convert every number whose absolute value is less than 1 to its multiplicative inverse. Then sort them by magnitude. If you have an even number of elements, we are all good. If you have an odd number of elements, store the head of this list in a special var, say k, associate a processed flag with it, and set the flag to False.
Proceed as before with some updated rules:
If you see a negative number less than 0 but greater than -1, insert a / before it.
If you see a negative number less than -1, insert a * before it
If you see the special var and the processed flag is False, insert a - before it. Set processed to True.
There is one more optimization you can perform, which is removing pairs of negative ones as candidates for blanket subtraction from our initial negative-numbers list, but this is just an edge case and I'm pretty sure your interviewer won't care.
Now the operator choice is only a function of the number you are adding and not of the running value you are adding to :)
Computing max/min results for each operation from previous step. Not sure about overall correctness.
Time complexity O(n), space complexity O(n)
const max_value = (nums) => {
  const ops = [(a, b) => a + b, (a, b) => a - b, (a, b) => a * b, (a, b) => a / b]
  // dp[i][j] = [max, min] value reachable using nums[0..i] when ops[j] was the last
  // operator applied (dp[0] is simply seeded with nums[0] for every operator slot)
  const dp = Array.from({length: nums.length}, _ => [])
  dp[0] = Array.from({length: ops.length}, _ => [nums[0], nums[0]])
  for (let i = 1; i < nums.length; i++) {
    for (let j = 0; j < ops.length; j++) {
      if (nums[i] === 0 && j === 3) {
        // Never divide by the current number when it is zero; mark this slot as skipped
        dp[i].push(null)
        continue
      }
      let mx = -Infinity
      let mn = Infinity
      for (let k = 0; k < ops.length; k++) {
        if (dp[i - 1][k] === null) continue  // that operator was skipped at the previous step
        const opMax = ops[j](dp[i - 1][k][0], nums[i])
        const opMin = ops[j](dp[i - 1][k][1], nums[i])
        mx = Math.max(opMax, opMin, mx)
        mn = Math.min(opMax, opMin, mn)
      }
      dp[i].push([mx, mn])
    }
  }
  return Math.max(...dp[nums.length - 1].filter(v => v !== null).map(v => v[0]))
}
// Tests
console.log(max_value([1, 12, 3]))
console.log(max_value([1, 0, 3]))
console.log(max_value([17,-34,2,-1,3,-4,5,6,7,1,2,3,-5,-7]))
console.log(max_value([59, 60, -0.000001]))
console.log(max_value([0, 1, -0.0001, -1.00000001]))
I am working on some matrix-related problems in C++. I want to compute Y = aX + Y, where X and Y are matrices and a is a constant. I thought about using the daxpy BLAS routine; however, according to the documentation DAXPY is a vector routine, and I am not getting the same results as when I solve the same problem in MATLAB.
I am currently running this:
F77NAME(daxpy)(N, a, X, 1, Y, 1);
When you need to perform the operation Y = a*X + Y, it does not matter whether X and Y are 1D vectors or 2D matrices, since the operation is done element-wise.
So, if you allocated the matrices with single pointers, e.g. double* A = new double[M*N];, then you can use daxpy by setting the length of the vector to M*N:
int MN = M*N;
int one = 1;
F77NAME(daxpy)(&MN, &a, X, &one, Y, &one);
The same goes for a stack-allocated two-dimensional matrix such as double A[3][2];, since that memory is also laid out contiguously.
Otherwise, you need to use a for loop and add each row separately.
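As a quick sanity check of the "treat the matrix as one long vector" idea, here is a small numpy sketch (Python, for illustration only; it mimics what daxpy does rather than calling BLAS directly):
import numpy as np

M, N, a = 3, 2, 2.5
X = np.random.rand(M, N)
Y = np.random.rand(M, N)

# Matrix-level update Y = a*X + Y
Y_matrix = a * X + Y

# The same update on the flattened length-M*N arrays, which is what daxpy sees
y_flat = Y.ravel().copy()
y_flat += a * X.ravel()

print(np.allclose(Y_matrix.ravel(), y_flat))  # True: the two updates coincide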
For some reason I can't get over this in Octave:
for i=1:n
y(2:(i+1))=y(2:(i+1))-x(i)*y(1:i)
end;
If I break it down in steps (suppose n=3), wouldn't the loop look like this:
i=1
y(2)=y(2)-x(1)*y(1)
i=2
y(2)=y(2)-x(2)*y(1)
y(3)=y(3)-x(2)*y(2)
i=3
y(2)=y(2)-x(3)*y(1)
y(3)=y(3)-x(3)*y(2)
y(4)=y(4)-x(3)*y(3)
Well, I must be wrong because the results are not good when doing the loop step by step, but for the life of me I can't figure out where. Can someone please help me?
First of all, forgive my styling; I have never used matrix/vector representations on Stack Overflow before. Anyway, I hope this gives you an idea of how it works internally:
x = [1,2,3]
y = [1,0,0,0]
Step 1:
the first loop will execute:
y(2)=y(2)-x(1)*y(1)
these are just scalar values, y(2) = 0, x(1) = 1, y(1) = 1.
So y(2) = 0-1*1 = -1, which means that the 2nd position in vector y will become -1.
resulting in y = [1,-1,0,0]
Step 2:
The next iteration will execute:
y(2:3) = y(2:3) - x(2)*y(1:2)
Here y(2:3) and y(1:2) are vectors of size 2, whose values are the current entries of y at those positions. Note that the whole right-hand side is evaluated from the old y before anything is assigned. After calculating the new vector [-3,2],
this is assigned to the 2nd and 3rd positions of vector y, resulting in the vector [1,-3,2,0].
Step 3:
Repeat step 2, but this time with vectors of size 3: y(2:4) = y(2:4) - x(3)*y(1:3), and the outcome replaces positions 2, 3, 4 of y. This results in the final vector y = [1,-6,11,-6].
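To make the difference from the question's step-by-step expansion concrete, here is a small numpy sketch (Python, purely for illustration): the vectorized statement computes its whole right-hand side from the old y, while the scalar expansion feeds already-updated entries back in.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 0.0, 0.0, 0.0])

# Vectorized update, as in the Octave loop (0-based numpy slices):
# the whole right-hand side is computed from the old y before anything is written back.
for i in range(1, len(x) + 1):
    y[1:i + 1] = y[1:i + 1] - x[i - 1] * y[0:i]
print(y)   # 1, -6, 11, -6  -- matches the walkthrough above

# Element-by-element update, as in the question's step-by-step expansion:
# each scalar assignment already sees the entries updated earlier in the same pass.
y2 = np.array([1.0, 0.0, 0.0, 0.0])
for i in range(1, len(x) + 1):
    for j in range(1, i + 1):
        y2[j] = y2[j] - x[i - 1] * y2[j - 1]
print(y2)  # 1, -6, 24, -72  -- a different result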
For finding the position of a fraction in the Farey sequence, I tried to implement the algorithm given here http://www.math.harvard.edu/~corina/publications/farey.pdf under "initial algorithm", but I can't understand where I'm going wrong; I am not getting the correct answers. Could someone please point out my mistake?
E.g. for order n = 7 and the fractions 1/7 and 1/6 I get the same answer.
Here's what I've tried for a given order n and a fraction a/b:
sum=0;
int A[100000];
A[1]=a;
for(i=2;i<=n;i++)
A[i]=i*a-a;
for(i=2;i<=n;i++)
{
for(j=i+i;j<=n;j+=i)
A[j]-=A[i];
}
for(i=1;i<=n;i++)
sum+=A[i];
ans = sum/b;
Thanks.
Your algorithm doesn't use any particular properties of a and b. In the first part, every relevant entry of the array A is a multiple of a, but the factor is independent of a, b and n. Setting up the array ignoring the factor a, i.e. starting with A[1] = 1, A[i] = i-1 for 2 <= i <= n, after the nested loops, the array contains the totients, i.e. A[i] = phi(i), no matter what a, b, n are. The sum of the totients from 1 to n is the number of elements of the Farey sequence of order n (plus or minus 1, depending on which of 0/1 and 1/1 are included in the definition you use). So your answer is always the approximation (a*number of terms)/b, which is close but not exact.
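A quick sanity check of that claim, in a short Python sketch that mirrors the question's array indexing:
from math import gcd

n = 10
A = [0, 1] + [i - 1 for i in range(2, n + 1)]   # A[1] = 1, A[i] = i - 1; A[0] is a dummy
for i in range(2, n + 1):                        # same nested loops as in the question
    for j in range(2 * i, n + 1, i):
        A[j] -= A[i]

phi = [sum(1 for k in range(1, i + 1) if gcd(k, i) == 1) for i in range(1, n + 1)]
print(A[1:] == phi)   # True: the sieve leaves A[i] = phi(i)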
I've not yet looked at how yours relates to the algorithm in the paper, check back for updates later.
Addendum: Finally had time to look at the paper. Your initialisation is not what they give. In their algorithm, A[q] is initialised to floor(x*q); for a rational x = a/b, the correct initialisation is
for(i = 1; i <= n; ++i){
A[i] = (a*i)/b;
}
In the remainder of your code, only ans = sum/b; has to be changed to ans = sum;.
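Putting the correction together, here is a minimal sketch of the whole computation (Python for illustration, hypothetical function name; whether you also count 0/1 shifts the result by one):
def farey_rank(a, b, n):
    """Position of a/b in the Farey sequence of order n, counting 0/1 as position 0."""
    # A[q] starts as floor(q * a/b): the number of fractions p/q <= a/b with p >= 1,
    # not yet reduced to lowest terms.
    A = [0] * (n + 1)
    for q in range(1, n + 1):
        A[q] = (a * q) // b
    # Sieve away the non-reduced fractions: after processing q, A[q] counts only
    # the fractions with denominator exactly q in lowest terms.
    for q in range(1, n + 1):
        for j in range(2 * q, n + 1, q):
            A[j] -= A[q]
    return sum(A)

print(farey_rank(1, 7, 7))  # 1: 1/7 is the first fraction after 0/1 in F_7
print(farey_rank(1, 6, 7))  # 2: 1/6 comes next, so the two no longer collide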
A non-algorithmic way of finding the position t of a fraction in the Farey sequence of order n>1 is shown in Remark 7.10(ii)(a) of the paper, under m:=n-1, where mu-bar stands for the number-theoretic Möbius function on positive integers taking values from the set {-1,0,1}.
Here's my Java solution that works. Add head (0/1) and tail (1/1) nodes to a singly linked list.
Then start by passing headNode, tailNode and setting the required orderLevel.
public void generateSequence(Node leftNode, Node rightNode){
    Fraction left = (Fraction) leftNode.getData();
    Fraction right = (Fraction) rightNode.getData();
    FractionNode midNode = null;
    // Mediant of the two neighbouring fractions
    int midNum = left.getNum() + right.getNum();
    int midDenom = left.getDenom() + right.getDenom();
    if (midDenom <= getMaxLevel()) {
        Fraction middle = new Fraction(midNum, midDenom);
        midNode = new FractionNode(middle);
    }
    if (midNode != null) {
        // Insert the mediant between the two nodes and keep refining to its left
        leftNode.setNext(midNode);
        midNode.setNext(rightNode);
        generateSequence(leftNode, midNode);
        count++;
    } else if (rightNode.next() != null) {
        // Nothing more fits in this gap; move on to the next gap in the list
        generateSequence(rightNode, rightNode.next());
    }
}
}
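The same mediant idea in a compact sketch (Python, for illustration only; this is not the poster's Node/Fraction classes, just the recursion):
def farey_sequence(n):
    """Generate the Farey sequence of order n via repeated mediants, left to right."""
    def between(left, right):
        a, b = left
        c, d = right
        if b + d > n:                 # mediant denominator too large: nothing to insert here
            return [left]
        mid = (a + c, b + d)          # mediant of the two neighbouring fractions
        return between(left, mid) + between(mid, right)

    return between((0, 1), (1, 1)) + [(1, 1)]

print(farey_sequence(3))   # [(0, 1), (1, 3), (1, 2), (2, 3), (1, 1)]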