Matrix element replacement in Octave

I have a matrix
A = [0.125680 0.543107 0.40088]
If a number in that matrix is greater than or equal to 0.5, I want to replace it with 1.
If a number is less than 0.5, I want to replace it with 0.
So my final matrix will be B = [0 1 0]. How can I do that without a for loop?

Got it. It's simple.
B = A >= .5;

Related

Create a Program Displaying Number of Numbers Greater Than 0.8 in Matrix

New programmer here,
I got a task to make a program that finds and displays how many numbers greater than 0.8 there are in a random 5x6 matrix, using 2 for loops, and I would appreciate help if someone knows how to do it.
Thanks in advance.
I'm writing in Octave; this is how the code starts:
clear
clc
rand(5,6);
for row =….
Hopefully, this MATLAB code carries over to Octave. The first method uses two for-loops to scan/traverse through the matrix. If the value retrieved while scanning, Matrix(Row,Column), is greater than 0.8, then the variable Number is incremented. This process repeats until the whole matrix is checked. Numbers_Greater_Than is used to store all the numbers that are greater than 0.8.
Using For-Loops:
clear;
clc;
Matrix = rand(5,6);
Numbers_Greater_Than = [];
[Number_Of_Rows,Number_Of_Columns] = size(Matrix);
Number = 0;
for Row = 1:Number_Of_Rows
    for Column = 1:Number_Of_Columns
        if (Matrix(Row,Column) > 0.8)
            Number = Number + 1;
            Numbers_Greater_Than = [Numbers_Greater_Than Matrix(Row,Column)];
        end
    end
end
fprintf("There are %d numbers greater than 0.8 in the matrix\n",Number);
Numbers_Greater_Than
Extension:
Alternatively, scan through the elements using linear (single-number) indexing:
clear;
clc;
Matrix = rand(5,6);
Number = 0;
for Element = 1:numel(Matrix)
    if (Matrix(Element) > 0.8)
        Number = Number + 1;
    end
end
fprintf("There are %d numbers greater than 0.8 in the matrix\n",Number);
Using a Logical Array:
This method creates a logical array based on the condition > 0.8. The Logical_Array is set to "1" when the condition is true and set to "0" when the condition is false. By taking the sum afterwards the number of times that the condition is true in the matrix can be counted.
clear;
clc;
Matrix = rand(5,6);
Logical_Array = Matrix > 0.8;
Number = sum(Logical_Array,'all'); % sum(Logical_Array(:)) is an equivalent that also works in older releases
fprintf("There are %d numbers greater than 0.8 in the matrix\n",Number);
Ran using MATLAB R2019b

Change the size of a matrix?

I have an image that I'm converting to a binary matrix with size (n,m).
I need a MATLAB function to reshape this matrix to size (n,n).
Otherwise, would it be possible to make the size of the image (n,n) instead of the initial (n,m)?
It's actually quite easy. Suppose your matrix A is n x m. I'm assuming you'll want to zero-pad the matrix, meaning that the extra elements would be set to 0. You would simply do this:
[n,m] = size(A);
A(:,m+1:n) = 0;
The first line of code finds the number of rows n and columns m of the matrix A. Next, we set columns m+1 through n (for all rows) to 0, which effectively makes A an n x n matrix.
Example Run
Here's an example with a 4 x 2 matrix A, and the process requires that we change the size so that A is 4 x 4.
>> A = rand(4,2)
A =
0.9575 0.9572
0.9649 0.4854
0.1576 0.8003
0.9706 0.1419
>> [n,m] = size(A);
>> A(:,m+1:n) = 0
A =
0.9575 0.9572 0 0
0.9649 0.4854 0 0
0.1576 0.8003 0 0
0.9706 0.1419 0 0
Minor Note
This assumes that the number of rows is greater than the number of columns... I'm assuming that this is a requirement on your end. If this is not the case, then the above code won't work. You can make the algorithm dimension-agnostic by zero-padding along whichever dimension has fewer entries, but I'll leave that to you as an exercise.

For each element in X find index of largest without going over in Y

I'm looking for a way to improve the performance of the following algorithm. Given two arrays X and Y,
for each element of X, find the index of the largest value in Y that does not exceed the value of that element. It is safe to assume X and Y are monotonically increasing (sorted) and that Y(1) is less than every value in X.
Also, X is generally much larger than Y.
As an example given the following.
X = [0.2, 1.5, 2.2, 2.5, 3.5, 4.5, 5.5, 5.8, 6.5];
Y = [0.0, 1.0, 3.0, 4.0, 6.0];
I would expect the output to be
idx = [1, 2, 2, 2, 3, 4, 4, 4, 5]
The fastest way I've come up with is the function below, which fails to take advantage of the fact that the lists are sorted and uses a for loop to step through one of the arrays. It gives a valid solution, but in the experiments I'm using this function for, nearly 27 of the 30 minutes the analysis takes to run are spent here.
function idx = matchintervals(X,Y)
    idx = zeros(size(X));
    for i = 1:length(Y)-1
        idx(X >= Y(i) & X < Y(i+1)) = i;
    end
    idx(X >= Y(end)) = length(Y);
end
Any help is greatly appreciated.
If you're looking for the fastest solution, it might end up being a simple while loop like so (which takes advantage of the fact that the arrays are sorted):
X = [0.2, 1.5, 2.2, 2.5, 3.5, 4.5, 5.5, 5.8, 6.5];
Y = [0.0, 1.0, 3.0, 4.0, 6.0];
xIndex = 1;
nX = numel(X);
yIndex = 1;
nY = numel(Y);
index = zeros(size(X))+nY; % Prefill index with the largest index in Y
while (yIndex < nY) && (xIndex <= nX)
    if X(xIndex) < Y(yIndex+1)
        index(xIndex) = yIndex;
        xIndex = xIndex+1;
    else
        yIndex = yIndex+1;
    end
end
>> index
index =
1 2 2 2 3 4 4 4 5
This loop will iterate a maximum of numel(X)+numel(Y)-1 times, potentially fewer if there are many values in X that are greater than the largest value in Y.
TIMINGS: I ran some timings with the sample data from a comment. Here are the results sorted from fastest to slowest:
X = 1:3:(4e5);
Y = 0:20:(4e5-1);
% My solution from above:
tElapsed =
0.003005977477718 seconds
% knedlsepp's solution:
tElapsed =
0.006939387719075 seconds
% Divakar's solution:
tElapsed =
0.011801273498343 seconds
% H.Muster's solution:
tElapsed =
4.081793325423575 seconds
A one-liner, but probably slower than the solution of gnovice:
idx = sum(bsxfun(@ge, X, Y'));
I had a similar idea to Divakar's. This basically finds the insertion points of the values in X after the values of Y, using the stable sort.
Both X and Y need to be sorted for this to work correctly!
%// Calculate the entry points
[~,I] = sort([Y,X]);
whereAreXs = I>numel(Y);
idx = find(whereAreXs)-(1:numel(X));
You can view the values of X and the corresponding values of Y that don't exceed the X-values via:
%%// Output:
disp([X;Y(idx)]);
Using sort and a few masks:
%// Concatenate X and Y and find the sorted indices
[sXY,sorted_id] = sort([X Y]);
%// Take care of sorted_id for identical values between X and Y
dup_id = find(diff(sXY)==0);
tmp = sorted_id(dup_id);
sorted_id(dup_id) = sorted_id(dup_id+1);
sorted_id(dup_id+1) = tmp;
%// Mask of Y elements in XY array
maskY = sorted_id>numel(X);
%// Find island lengths of Y elements in concatenated XY array
diff_maskY = diff([false maskY false]);
island_lens = find(diff_maskY ==-1) - find(diff_maskY ==1);
%// Create a mask of double datatype with 1s where Y intervals change
mask_Ys = [ false maskY(1:end-1)];
mask_Ysd = double(mask_Ys(~maskY));
%// Incorporate island lengths to change the 1s by offsetted island lengths
valid = mask_Ysd==1;
mask_Ysd(valid) = mask_Ysd(valid) + island_lens(1:sum(valid)) - 1;
%// Finally perform cumsum to get the output indices
idx = cumsum(mask_Ysd);

Pseudo-Random Variable

I have a variable, between 0 and 1, which should dictate the likelihood that a second variable, a random number between 0 and 1, is greater than 0.5. In other words, if I were to generate the second variable 1000 times, the average should be approximately equal to the first variable's value. How do I write this code?
Oh, and the second variable should always be capable of producing either 0 or 1 in any condition, just more or less likely depending on the value of the first variable. Here is a link to a graph which models approximately how I would like the program to behave. Each equation represents a separate value for the first variable.
You have a variable p and you are looking for a mapping function f(x) that maps random rolls between x in [0, 1] to the same interval [0, 1] such that the expected value, i.e. the average of all rolls, is p.
You have chosen the function prototype
f(x) = pow(x, c)
where c must be chosen appropriately. If x is uniformly distributed in [0, 1], the average value is:
int(f(x) dx, [0, 1]) == p
With the integral:
int(pow(x, c) dx) == pow(x, c + 1) / (c + 1) + K
evaluated from 0 to 1, this gives 1 / (c + 1) == p, so:
c = 1/p - 1
A different approach is to make p the median value of the distribution, such that half of the rolls fall below p, the other half above p. This yields a different distribution. (I am aware that you didn't ask for that.) Now, we have to satisfy the condition:
f(0.5) == pow(0.5, c) == p
which yields:
c = log(p) / log(0.5)
With the current function prototype, you cannot satisfy both requirements. Your function is also asymmetric (f(x, p) != f(1-x, 1-p)).
Python functions below:
import math
import random

def medianrand(p):
    """Random number between 0 and 1 whose median is p"""
    c = math.log(p) / math.log(0.5)
    return math.pow(random.random(), c)

def averagerand(p):
    """Random number between 0 and 1 whose expected value is p"""
    c = 1/p - 1
    return math.pow(random.random(), c)
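As a quick sanity check (this snippet is mine, not part of the original answer, and it assumes Python 3 with the two functions above in scope), the empirical mean of averagerand(p) and the empirical median of medianrand(p) should both come out near p:
import statistics

p = 0.3
avg_samples = [averagerand(p) for _ in range(100000)]
med_samples = [medianrand(p) for _ in range(100000)]
print(sum(avg_samples) / len(avg_samples))   # roughly 0.3 (the expected value)
print(statistics.median(med_samples))        # roughly 0.3 (the median)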
You can do this by using a dummy. First, set the first variable to a value between 0 and 1. Then generate a random number (the dummy) between 0 and 1. If this dummy is bigger than the first variable, you generate a random number between 0 and 0.5, and otherwise you generate a number between 0.5 and 1.
In pseudocode:
real a = 0.7
real total = 0.0
for i between 0 and 1000 begin
    real dummy = rand(0,1)
    real b
    if dummy > a then
        b = rand(0,0.5)
    else
        b = rand(0.5,1)
    end if
    total = total + b
end for
real avg = total / 1000
Please note that this algorithm will generate average values between 0.25 and 0.75. For a = 1 it will only generate random values between 0.5 and 1, which should average to 0.75. For a=0 it will generate only random numbers between 0 and 0.5, which should average to 0.25.
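A minimal Python sketch of that pseudocode (my own translation, using random.uniform for rand(a,b) and a hypothetical helper name dummy_rand):
import random

def dummy_rand(a):
    # One draw of the 'dummy' scheme: a in [0, 1] biases the result above or below 0.5.
    if random.random() > a:
        return random.uniform(0.0, 0.5)
    else:
        return random.uniform(0.5, 1.0)

a = 0.7
avg = sum(dummy_rand(a) for _ in range(1000)) / 1000
print(avg)   # about 0.25 + 0.5*a = 0.6; the long-run average never leaves [0.25, 0.75]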
I've made a sort of pseudo-solution to this problem, which I think is acceptable.
Here is the algorithm I made:
a = 0.2 # variable one
b = 0 # variable two
b = random.random()
b = b**(1/(2**(4*a-1)))
It doesn't actually produce the average results that I wanted, but it's close enough for my purposes.
Edit: Here's a graph I made from a large number of data points I generated with a Python script using this algorithm:
import random
mod = 6
div = 100
for z in xrange(div):
    s = 0
    for i in xrange(100000):
        a = (z+1)/float(div) # variable one
        b = random.random() # variable two
        c = b**(1/(2**((mod*a*2)-mod)))
        s += c
    print str((z+1)/float(div)) + "\t" + str(round(s/100000.0, 3))
Each point is the result of 100000 randomly generated values from the algorithm: its x position is the a value given, and its y position is their average. Ideally the points would fall on the straight line y = x, but as you can see they fit closer to an arctan curve. I'm trying to tweak the algorithm so that the averages fall on the line, but I haven't had much luck as of yet.
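For what it's worth, plugging the exponent from the earlier derivation (c = 1/a - 1, the averagerand idea) into the same kind of experiment should put the averages on the y = x line; a hypothetical variant of the script above:
import random

div = 100
for z in range(div):
    a = (z + 1) / float(div)        # variable one (the target average)
    c = 1.0 / a - 1.0               # exponent chosen so that the mean of b**c is a
    s = sum(random.random() ** c for _ in range(100000))
    print("%.2f\t%.3f" % (a, s / 100000.0))   # second column should track the first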

generate random numbers within a range with different probabilities

How can I generate a random number between A = 1 and B = 10 where each number has a different probability?
Example: number / probability
1 - 20%
2 - 20%
3 - 10%
4 - 5%
5 - 5%
...and so on.
I'm aware of some hard-coded workarounds which unfortunately are of no use with larger ranges, for example A = 1000 and B = 100000.
Assume we have a Rand() method which returns a random number R, 0 < R < 1. Can anyone post a code sample with a proper way of doing this? Preferably in C# / Java / ActionScript.
Build an array of 100 integers and populate it with 20 1's, 20 2's, 10 3's, 5 4's, 5 5's, etc. Then just randomly pick an item from the array.
int[] numbers = new int[100];
// populate the first 20 with the value '1'
for (int i = 0; i < 20; ++i)
{
numbers[i] = 1;
}
// populate the rest of the array as desired.
// To get an item:
// Since your Rand() function returns 0 < R < 1
int ix = (int)(Rand() * 100);
int num = numbers[ix];
This works well if the number of items is reasonably small and your precision isn't too strict. That is, if you wanted 4.375% 7's, then you'd need a much larger array.
There is an elegant algorithm attributed by Knuth to A. J. Walker (Electronics Letters 10, 8 (1974), 127-128; ACM Trans. Math Software 3 (1977), 253-256).
The idea is that if you have a total of k * n balls of n different colors, then it is possible to distribute the balls into n containers of k balls each such that container no. i contains only balls of color i and of at most one other color. The proof is by induction on n. For the induction step, pick the color with the least number of balls.
In your example n = 10. Multiply the probabilities by a suitable m such that they all become integers. So, maybe m = 100, and you have 20 balls of color 0, 20 balls of color 1, 10 balls of color 2, 5 balls of color 3, etc. So, k = 10.
Now generate a table of dimension n, with each entry holding a probability (the fraction of balls in container i that are of color i) and the other color of that container.
To generate a random ball, generate a random floating-point number r in the range [0, n). Let i be the integer part (floor of r) and x the excess (r - i).
if (x < table[i].probability) output i
else output table[i].other
The algorithm has the advantage that for each random ball you only make a single comparison.
Let me work out an example (same as Knuth).
Consider simulating throwing a pair of dice.
So P(2) = 1/36, P(3) = 2/36, P(4) = 3/36, P(5) = 4/36, P(6) = 5/36, P(7) = 6/36, P(8) = 5/36, P(9) = 4/36, P(10) = 3/36, P(11) = 2/36, P(12) = 1/36.
Multiply by 36 * 11 to get 396 balls: 11 of color 2, 22 of color 3, 33 of color 4, …, 11 of color 12.
We have k = 396 / 11 = 36.
Table[2] = (11/36, color 4)
Table[12] = (11/36, color 10)
Table[3] = (22/36, color 5)
Table[11] = (22/36, color 5)
Table[4] = (8/36, color 9)
Table[10] = (8/36, color 6)
Table[5] = (16/36, color 6)
Table[9] = (16/36, color 8)
Table[6] = (7/36, color 8)
Table[8] = (6/36, color 7)
Table[7] = (36/36, color 7)
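The question asked for C#/Java/ActionScript, but the lookup step is tiny in any language. Here is a rough Python sketch of the sampling step (mine, not Knuth's code) that hard-codes the (probability, other color) table worked out above:
import random

# Alias table for the two-dice example: outcome -> (probability, other outcome)
table = {
    2: (11/36.0, 4),   3: (22/36.0, 5),   4: (8/36.0, 9),   5: (16/36.0, 6),
    6: (7/36.0, 8),    7: (36/36.0, 7),   8: (6/36.0, 7),   9: (16/36.0, 8),
    10: (8/36.0, 6),   11: (22/36.0, 5),  12: (11/36.0, 10),
}

def roll():
    # One comparison per draw: pick a container uniformly, keep its own color
    # with the stored probability, otherwise take the container's "other" color.
    r = random.random() * len(table)   # r in [0, n)
    i = int(r) + 2                     # container index (outcomes start at 2)
    x = r - int(r)                     # the excess
    prob, other = table[i]
    return i if x < prob else other

Tallying a few hundred thousand calls to roll() should reproduce the 1/36, 2/36, ..., 6/36, ..., 1/36 two-dice distribution.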
Assuming that you have a function p(n) that gives you the desired probability for a random number:
r = rand() // a random number between 0 and 1
for i in A to B do
    if r < p(i)
        return i
    r = r - p(i)
done
A faster way is to create an array of (B - A) * 100 elements and populate it with numbers from A to B such that the ratio of the number of each item occurs in the array to the size of the array is its probability. You can then generate a uniform random number to get an index to the array and directly access the array to get your random number.
Map your uniform random results to the required outputs according to the probabilities.
E.g., for your example:
If `0 <= Rand() <= 0.2`: result = 1.
If `0.2 < Rand() <= 0.4`: result = 2.
If `0.4 < Rand() <= 0.5`: result = 3.
If `0.5 < Rand() <= 0.55`: result = 4.
If `0.55 < Rand() <= 0.65`: result = 5.
...
Here's an implementation of Knuth's Algorithm. As discussed in some of the answers, it works by
1) creating a table of summed frequencies,
2) generating a random integer,
3) rounding it up with a ceiling function,
4) finding the "summed" range within which the random number falls, and outputting the original array entry based on it.
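The snippet itself isn't reproduced here, but a minimal Python sketch of those four steps might look like this (the names are mine; only the outcomes listed in the question are included, so the weights below sum to 60 rather than 100):
import math
import random

values      = [1, 2, 3, 4, 5]         # outcomes
frequencies = [20, 20, 10, 5, 5]      # integer weights (the percentages from the question)

# 1) table of summed frequencies
summed, running = [], 0
for f in frequencies:
    running += f
    summed.append(running)            # [20, 40, 50, 55, 60]

# 2) and 3) random integer in 1..running via the ceiling of a uniform draw
r = max(1, int(math.ceil(random.random() * running)))

# 4) first summed value that the integer does not exceed
for value, s in zip(values, summed):
    if r <= s:
        print(value)
        break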
Inverse Transform
In probability speak, a cumulative distribution function F(x) returns the probability that any randomly drawn value, call it X, is <= some given value x. For instance, if I computed F(4) in this case, I would get .6, because the running sum of probabilities in your example is {.2, .4, .5, .55, .6, .65, ....}. I.e. the probability of randomly getting a value less than or equal to 4 is .6. However, what I actually want to know is the inverse of the cumulative probability function, call it F_inv. I want to know what the x value is, given the cumulative probability. I want to pass in F_inv(.6) and get back 4. That is why this is called the inverse transform method.
So, in the inverse transform method, we are basically trying to find the interval in the cumulative distribution in which a random Uniform (0,1) number falls. This works out to the algorithm that perreal and icepack posted. Here is another way to state it in terms of the cumulative distribution function
Generate a random number U
for x in A .. B
if U <= F(x) then return x
Note that it might be more efficient to have the loop go from B down to A and return x as soon as U > F(x-1) (taking F(A-1) = 0), if the smaller probabilities come at the beginning of the distribution.
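In code, F is just the running sum of the probabilities, and the search for the interval can be a binary search rather than a linear scan; a small Python 3 sketch (the function names are mine, not from the answer above):
import bisect
import random
from itertools import accumulate

def make_sampler(values, probabilities):
    # Inverse transform: binary search for where a uniform draw lands in the CDF.
    cumulative = list(accumulate(probabilities))      # e.g. [.2, .4, .5, .55, .6, ...]
    def sample():
        u = random.random() * cumulative[-1]          # rescale in case the weights don't sum to exactly 1
        return values[bisect.bisect_left(cumulative, u)]
    return sample

draw = make_sampler([1, 2, 3, 4, 5], [0.20, 0.20, 0.10, 0.05, 0.05])
print(draw())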
