Thread::tdlen: Objects of unequal length error in Wolfram Mathematica

I get the following error:
Thread::tdlen: Objects of unequal length in {0.*10^-16-1.000000000000000 I} {{0,0,0},{0,1,0},{0,0,0}} cannot be combined. >>
{0.*10^-16-1.000000000000000 I} {{0,0,0},{0,1,0},{0,0,0}}
I can't work out how to resolve it.

It depends on what you are trying to do. This will multiply the first item by every element in the second matrix:
{0.*10^-16-1.000000000000000 I}[[1]] {{0,0,0},{0,1,0},{0,0,0}}
{{0. + 0. I, 0. + 0. I, 0. + 0. I}, {0. + 0. I, 0. - 1. I, 0. + 0. I}, {0. + 0. I, 0. + 0. I, 0. + 0. I}}
Chop[%] // MatrixForm
That said, this sort of error is often simply due to forgetting the semicolon that terminates a statement, causing results on separate lines to be multiplied together when you had no intention of multiplying them.

Related

Why is the base case D[0] = 1, where D[n] is the number of ways n can be expressed as a sum of 1, 3 and 4?

I am working on a dynamic programming problem which requires finding the number of ways to express n as a sum of 1, 3 and 4.
I saw a solution on GeeksforGeeks.
The base cases of the problem were d[0] = d[1] = d[2] = 1.
Why is d[0] = 1, where d[n] is the number of ways n can be expressed as a sum of 1, 3 and 4?
d[0] should be zero, as there is no way to express 0 as a sum of 1, 3 and 4.
This is the link where the solution is given:
https://www.geeksforgeeks.org/count-ofdifferent-ways-express-n-sum-1-3-4/
There is no way to express 0 as a sum of 1, 3 and 4
Yes, there is. The empty array is assumed to have sum 0, so choosing zero 1s, zero 3s and zero 4s is one way of obtaining 0 as a sum of 1, 3 and 4.
d[0] should be zero as there is no way to express 0 as a sum of 1, 3 and 4.
This is a matter of how you define it. The most convenient definition of "sum of 1, 3 and 4" is "a value of the form a + 3b + 4c, where a, b, and c are nonnegative integers", and of a "way to express [value] as a sum of 1, 3 and 4" as the choice of a, b, and c.
You are apparently picturing a slightly stricter definition, that also requires a + b + c ≥ 1; that's not wrong, exactly, but it leaves you with more special cases to handle in your recursive case. It simplifies the calculations if you leave out that requirement.
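For concreteness, here is a minimal Java sketch of the recurrence with d[0] = 1 (my own illustration of the approach discussed above, not code from the linked page; the method name is arbitrary):
// Number of ordered ways to write n as a sum of 1s, 3s and 4s.
// d[0] = 1: the empty sum is the one way to form 0.
static long countWays(int n) {
    long[] d = new long[Math.max(n + 1, 4)];
    d[0] = 1; d[1] = 1; d[2] = 1; d[3] = 2; // base cases (3 = 1+1+1 or 3)
    for (int i = 4; i <= n; i++) {
        d[i] = d[i - 1] + d[i - 3] + d[i - 4];
    }
    return d[n];
}
With d[0] = 1, countWays(4) returns 4 (1+1+1+1, 1+3, 3+1, 4); with d[0] = 0 the single-term sum "4" would not be counted.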

how to find whole squares between 2 numbers

I want to count the perfect squares between two numbers A and B (the numbers can be positive or negative). I also want to achieve a time complexity of O(sqrt(abs(B))).
I wrote the following code for this:
count = (int)(Math.floor(Math.sqrt(Math.abs(B))) - Math.ceil(Math.sqrt(Math.abs(A))) + 1);
This normally works well but fails when the range spans negative and positive numbers.
For example, if the range is A = -1, B = 1, then I think it should return 2 (for 0 and 1), but it returns 1.
I could not find a solution in other answers on SO, so any help would be appreciated.
Let us assume A, B ≥ 0.
Then A ≤ n² ≤ B is equivalent to √A ≤ n ≤ √B and to ceil(√A) ≤ n ≤ floor(√B).
Thus, the number of solutions is floor(√B) - ceil(√A) + 1.
If A < 0, replace A by 0. Then if B < A, there is no solution.
Update by @Bathsheba:
Finally, if you don't want 0 to be considered a perfect square, then replace "If A < 0, replace A by 0" with "If A < 1, replace A by 1".
There will be no perfect squares below 0 (unless we consider imaginary numbers involving i), so you could/should throw an IllegalArgumentException for a negative start number, or just set the start to 0.
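Putting the formula above together, here is a minimal Java sketch (the method name and the use of long are my own choices; it clamps A to 0 as described):
// Counts the perfect squares n*n with a <= n*n <= b.
static long countPerfectSquares(long a, long b) {
    if (a < 0) a = 0;                  // no real square is negative; clamp to 1 instead if 0 should not count
    if (b < a) return 0;               // empty range after clamping
    long hi = (long) Math.floor(Math.sqrt((double) b));
    long lo = (long) Math.ceil(Math.sqrt((double) a));
    return hi - lo + 1;                // number of n with ceil(sqrt(a)) <= n <= floor(sqrt(b))
}
For A = -1, B = 1 this returns 2 (the squares 0 and 1), which is the answer expected in the question.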

Counting the strictly increasing sequences

There are N candles aligned from left to right. The i-th candle from the left has height Hi and color Ci, an integer ranging from 1 to a given K, the number of colors.
Problem: how many strictly increasing (in height) colorful subsequences are there? A subsequence is considered colorful if each of the K colors appears at least once in it.
For example: N = 4, K = 3
H C
1 1
3 2
2 2
4 3
The only two valid subsequences are (1, 2, 4) and (1, 3, 4).
I think this is a Fenwick tree problem. Please suggest an approach for how to proceed with this type of problem.
For a moment, let's forget about the colors. So the problem is simpler: count the number of increasing subsequences. This problem has a standard solution:
1. Map each value to [0...n - 1] range.
2. Let's assume that f[value] is the number of increasing subsequences that have value as their last element.
3. Initially, f is filled with 0.
4. After that, you iterate over all array elements from left to right and perform the following operation: f[value] += 1 + get_sum(0, value - 1) (this means that you add the current element to all possible subsequences so that they remain strictly increasing), where value is the current element of the array and get_sum(a, b) returns the sum f[a] + f[a + 1] + ... + f[b].
5. The answer is f[0] + f[1] + ... + f[n - 1].
Using a binary indexed tree (aka Fenwick tree), it is possible to perform the get_sum operation in O(log n) and get O(n log n) total time complexity.
Now let's come back to the original problem. To take the colors into account, we can compute f[value, mask] instead of f[value] (that is, the number of increasing subsequences that have value as their last element and mask, a bitmask showing which colors are present, as their color set). Then the update for each element looks like this:
for mask in [0 ... 2^K - 1]:
    add = get_sum(0, value - 1, mask)
    if mask == 0: add = add + 1   (the subsequence consisting of this element alone)
    f[value, mask or 2^(color[i] - 1)] += add
The answer is f[0, 2^K - 1] + f[1, 2^K - 1] + ... + f[n - 1, 2^K - 1].
You can maintain 2^K binary indexed trees to achieve O(n * log n * 2^K) time complexity using the same idea as in the simpler problem.
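Here is a hedged Java sketch of this approach (class and method names are mine, heights are assumed to be distinct, and the singleton contribution is added only for the empty mask, as in the pseudo code above):
import java.util.Arrays;

public class ColorfulSubsequences {

    // One Fenwick tree per color mask; tree[mask] stores partial sums of f[value, mask].
    static long[][] tree;
    static int n;

    static void update(int mask, int pos, long delta) {
        for (int i = pos + 1; i <= n; i += i & -i) tree[mask][i] += delta;
    }

    static long query(int mask, int pos) { // sum of f[0..pos, mask]
        long s = 0;
        for (int i = pos + 1; i > 0; i -= i & -i) s += tree[mask][i];
        return s;
    }

    static long countColorful(int[] h, int[] c, int k) {
        n = h.length;
        int[] sorted = h.clone(); // map heights to ranks 0..n-1
        Arrays.sort(sorted);
        int full = (1 << k) - 1;
        tree = new long[1 << k][n + 1];

        for (int i = 0; i < n; i++) {
            int value = Arrays.binarySearch(sorted, h[i]);
            int bit = 1 << (c[i] - 1);
            long[] add = new long[1 << k];
            for (int mask = 0; mask <= full; mask++) {
                long extend = (value > 0) ? query(mask, value - 1) : 0;
                if (mask == 0) extend += 1; // the subsequence consisting of this element alone
                add[mask | bit] += extend;
            }
            for (int mask = 0; mask <= full; mask++) {
                if (add[mask] != 0) update(mask, value, add[mask]);
            }
        }
        return query(full, n - 1); // sum of f[value, full mask] over all values
    }

    public static void main(String[] args) {
        int[] h = {1, 3, 2, 4};
        int[] c = {1, 2, 2, 3};
        System.out.println(countColorful(h, c, 3)); // prints 2 for the example above
    }
}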

How can I find a faster algorithm for this special case of Longest Common Sub-sequence (LCS)?

I know the LCS problem needs time ~O(mn), where m and n are the lengths of the two sequences X and Y respectively. But my problem is a little easier, so I expect a faster algorithm than ~O(mn).
Here is my problem:
Input:
a positive integer Q, and two sequences X = x1, x2, x3, ..., xn and Y = y1, y2, y3, ..., yn, both of length n.
Output:
True, if the length of the LCS of X and Y is at least n - Q;
False, otherwise.
The well-known algorithm costs O(n^2) here, but actually we can do better than that, because whenever we have eliminated more than Q elements in either sequence without finding a common element, the result is False. Someone said there should be an algorithm as good as O(Q*n), but I cannot figure it out.
UPDATE:
I already found an answer!
I was told I can just calculate the diagonal band of the table c[i, j], because if |i - j| > Q, there are already more than Q unmatched elements between the two sequences. So we only need to calculate c[i, j] when |i - j| <= Q.
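As a hedged illustration of that banded computation (the method name and the negative sentinel are my own choices), cells with |i - j| > Q are simply treated as unusable:
import java.util.Arrays;

// Returns true iff LCS(x, y) >= n - q, filling only the cells with |i - j| <= q.
static boolean lcsAtLeastBanded(int[] x, int[] y, int q) {
    int n = x.length;
    final int NEG = Integer.MIN_VALUE / 2; // "minus infinity" for cells outside the band
    int[][] c = new int[n + 1][n + 1];
    for (int[] row : c) Arrays.fill(row, NEG);
    c[0][0] = 0;
    for (int i = 0; i <= n; i++) {
        for (int j = Math.max(0, i - q); j <= Math.min(n, i + q); j++) {
            if (i == 0 && j == 0) continue;
            int best = NEG;
            if (i > 0 && j > 0 && x[i - 1] == y[j - 1]) best = Math.max(best, c[i - 1][j - 1] + 1);
            if (i > 0) best = Math.max(best, c[i - 1][j]);
            if (j > 0) best = Math.max(best, c[i][j - 1]);
            c[i][j] = best;
        }
    }
    return c[n][n] >= n - q;
}
Only O(n * Q) cells are filled in, which gives the desired running time; the full table is allocated here only to keep the sketch short.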
Here is one possible way to do it:
1. Let's assume that f(prefix_len, deleted_cnt) is the leftmost position in Y such that prefix_len elements of X were already processed and exactly deleted_cnt of them were deleted. Obviously, there are only O(N * Q) states because deleted_cnt cannot exceed Q.
2. The base case is f(0, 0) = 0 (nothing was processed, thus nothing was deleted).
3. Transitions:
a) Remove the current element: f(i + 1, j + 1) = min(f(i + 1, j + 1), f(i, j)).
b) Match the current element with the leftmost possible element from Y that is equal to it and located after f(i, j) (let's assume that it has index pos): f(i + 1, j) = min(f(i + 1, j), pos).
4. So the only question remaining is how to get the leftmost matching element located to the right of a given position. Let's precompute the following pairs: (position in Y, element of X) -> the leftmost occurrence in Y, to the right of this position, of an element equal to this element of X, and put them into a hash table. It looks like O(n^2), but it is not: for a fixed position in Y, we never need to go further to the right than Q + 1 positions. Why? If we go further, we skip more than Q elements! So we can use this fact to examine only O(N * Q) pairs and get the desired time complexity. When we have this hash table, finding pos during step 3 is just one hash table lookup. Here is pseudo code for this step:
map = EmptyHashMap()
for i = 0 ... n - 1:
    for j = i + 1 ... min(n - 1, i + q + 1):
        map[(i, Y[j])] = min(map[(i, Y[j])], j)
Unfortunately, this solution uses hash tables, so it has O(N * Q) time complexity on average rather than in the worst case, but it should be feasible.
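To make the transitions above concrete, here is a hedged Java sketch of this DP (the INF sentinel, the packing of (position, value) pairs into a long key, and all names are my own choices):
import java.util.Arrays;
import java.util.HashMap;

static final int INF = Integer.MAX_VALUE / 2;

// f[i][j] = smallest number of Y elements consumed after processing i elements of X
// with exactly j of them deleted; INF if that state is unreachable.
static boolean lcsAtLeast(int[] x, int[] y, int q) {
    int n = x.length;
    // next: (position p in Y, value v) -> leftmost index >= p with y[index] == v,
    // looking at most q positions ahead. This is the O(N * Q) precomputation described above.
    HashMap<Long, Integer> next = new HashMap<>();
    for (int p = 0; p < n; p++) {
        for (int t = p; t <= Math.min(n - 1, p + q); t++) {
            next.merge(((long) p << 32) | (y[t] & 0xffffffffL), t, Math::min);
        }
    }
    int[][] f = new int[n + 1][q + 1];
    for (int[] row : f) Arrays.fill(row, INF);
    f[0][0] = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j <= q; j++) {
            if (f[i][j] >= INF) continue;
            if (j + 1 <= q) // a) delete x[i]
                f[i + 1][j + 1] = Math.min(f[i + 1][j + 1], f[i][j]);
            // b) match x[i] with the leftmost equal element of Y at index >= f[i][j]
            Integer pos = next.get(((long) f[i][j] << 32) | (x[i] & 0xffffffffL));
            if (pos != null) f[i + 1][j] = Math.min(f[i + 1][j], pos + 1);
        }
    }
    for (int j = 0; j <= q; j++) if (f[n][j] < INF) return true; // matched at least n - q elements
    return false;
}
Limiting each lookup to q positions ahead is safe here because a matching that certifies LCS >= n - Q can skip at most Q elements of Y in total.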
You can also say that the cost of making the strings equal must not be greater than Q; if it is greater than Q, then the answer must be false (the EDIT DISTANCE problem).
Suppose the size of string x is m, and the size of string y is n. Then we create a two-dimensional array d[0..m][0..n], where d[i][j] denotes the edit distance between the i-length prefix of x and the j-length prefix of y.
The computation of array d is done using dynamic programming, which uses the following recurrence:
d[i][0] = i, for i <= m
d[0][j] = j, for j <= n
d[i][j] = d[i - 1][j - 1], if x[i] == y[j],
d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + 1), otherwise.
The answer for the LCS, if m > n, is m - dp[m][m - n].
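For reference, a minimal Java sketch of the edit-distance table defined by the recurrence above (standard dynamic programming; the names are mine):
// d[i][j] = edit distance between the first i elements of x and the first j elements of y.
static int editDistance(int[] x, int[] y) {
    int m = x.length, n = y.length;
    int[][] d = new int[m + 1][n + 1];
    for (int i = 0; i <= m; i++) d[i][0] = i; // delete the whole prefix of x
    for (int j = 0; j <= n; j++) d[0][j] = j; // insert the whole prefix of y
    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            if (x[i - 1] == y[j - 1]) {
                d[i][j] = d[i - 1][j - 1];
            } else {
                d[i][j] = 1 + Math.min(d[i - 1][j - 1], Math.min(d[i - 1][j], d[i][j - 1]));
            }
        }
    }
    return d[m][n];
}
Note that this recurrence allows substitutions; if only insertions and deletions are allowed, the distance relates directly to the LCS via m + n - 2 * LCS.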

given array A, form array M such that sum of products (a1*m1+...+an*mn) is maximum

I was recently in an interview where I was asked the following algorithmic question. I am not able to come up with an O(n) solution, nor was I able to find the problem by googling.
Given an array A[a_0 ... a_(n-1)] of integers (positive and negative), form an array M[m_0 ... m_(n-1)] where m_0 = 2 and m_i is in [2, ..., m_(i-1)+1], such that the sum of products is maximum, i.e. we have to maximize a_0*m_0 + a_1*m_1 + ... + a_(n-1)*m_(n-1).
Examples:
Input: {1, 2, 3, -50, 4}
Output: {2, 3, 4, 2, 3}
Input: {1, -1, 8, 12}
Output: {2, 3, 4, 5}
My O(n^2) solution was to start with m_0 = 2 and keep incrementing by 1 as long as a_i is positive. If a_i < 0, we have to consider all m_i from 2 to m_(i-1) + 1 and see which one produces the maximum sum of products.
Please suggest a linear time algorithm.
Suppose you have the following array:
1, 1, 2, -50, -3, -4, 6, 7, 8.
At each entry, we can either continue with our incrementing progression or reset the value to a lower value.
Here there can be only two good options: either we choose the maximum possible value for the current entry, or the minimum possible (2). (Proof towards the end.)
Now it is clear that the first three entries of our output shall be 2, 3 and 4 (because all the numbers so far are positive and there is no reason to reset to a low value such as 2).
When a negative entry is encountered, compute the sum:
-(50 + 3 + 4) = -57.
Next, compute the similar sum for the succeeding contiguous positive numbers:
(6 + 7 + 8) = 21.
Since 57 is greater than 21, it makes sense to reset the 4th entry to 2.
Again compute the sum for negative entries:
-(3 + 4) = -7.
Now 7 is less than 21, hence it makes sense not to reset any further, because the maximum sum shall be obtained if the positive values get the high multipliers.
The output array thus shall be:
2, 3, 4, 2, 3, 4, 5, 6, 7
To make this algorithm work in linear time, you can pre-compute the array of sums that shall be required in computations.
Proof:
When a negative number is encountered, we can either reset the output value to a low value (say j) or continue with our increment (say i).
Say there are k negative values and m succeeding positive values.
If we reset the value to j, then the contribution of these k negative values and m positive values shall be:
- ( (j-2+2)*a1 + (j-2+3)*a2 + ... + (j-2+k+1)*ak ) + ( (j-2+k+2)*b1 + (j-2+k+3)*b2 + ... + (j-2+k+m+1)*bm )
If we do not reset the value to 2, then the contribution of these k negative values and m positive values shall be:
- ( (i+2)*a1 + (i+3)*a2 + (i+4)*a3 + ... + (i+k+1)*ak ) + ( (i+k+2)*b1 + (i+k+3)*b2 + ... + (i+k+m+1)*bm )
Hence the difference between the above two expressions is:
(i - j + 2) * (sum of positive values - sum of negative values)
This number can be either positive or negative; hence we shall make j either as high as possible (M[i-1] + 1) or as low as possible (2).
Pre-computing the array of sums in O(N) time
Edited: as pointed out by Evgeny Kluev.
Traverse the array backwards.
If a negative element is encountered, ignore it.
If a positive number is encountered, make suffix sum equal to that value.
Keep adding the values of elements to the sum as long as it remains positive.
Once the sum becomes < 0, note this point. This is the point that separates our decision to reset to 2 from continuing with the increment.
Ignore all negative values again till you reach a positive value.
Keep repeating till end of array is reached.
Note: While computing the suffix sum, if we encounter a zero value, then there can be multiple such solutions.
Thanks to Abhishek Bansal and Evgeny Kluev for the pseudo-code.
Here is the code in Java.
public static void problem(int[] a, int[] m) {
    // sum[i] holds the suffix sum starting at i, kept only while it stays positive
    int[] sum = new int[a.length];
    if (a[a.length - 1] > 0)
        sum[a.length - 1] = a[a.length - 1];
    for (int i = a.length - 2; i >= 0; i--) {
        if (sum[i + 1] == 0 && a[i] <= 0) continue;            // still inside a "reset" region
        if (sum[i + 1] + a[i] > 0) sum[i] = sum[i + 1] + a[i]; // the suffix is worth continuing
    }
    //System.out.println(Arrays.toString(sum));
    m[0] = 2;
    for (int i = 1; i < a.length; i++) {
        if (sum[i] > 0) {
            m[i] = m[i - 1] + 1; // continue the incrementing progression
        } else {
            m[i] = 2;            // reset to the minimum value
        }
    }
}
