Julia, function to replicate "rbinom()" in R

I have dug around and googled but not found an example. I'm sure Julia has a powerful function (in Base?) to generate random binomial (Bernoulli?) "successes" with a given probability. I can't find it or figure out how to do the equivalent of this R call in Julia:
> rbinom(20,1,0.3)
[1] 1 1 1 0 0 0 1 1 0 0 0 0 1 1 0 0 0 1 0 0
Thx. J

You can use Distributions and the rand function for this. Any distribution can be passed to rand. To replicate what you want:
julia> using Distributions
julia> p = Binomial(1, 0.3) # first arg is number of trials, second is probability of success
Binomial{Float64}(n=1, p=0.3)
julia> rand(p, 20)
20-element Array{Int64,1}:
0
1
1
0
1
0
0
1
0
1
1
1
0
0
1
0
1
0
0
1
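Since the number of trials is 1, this is just a Bernoulli distribution, so the following should be equivalent (note that, depending on the Distributions.jl version, it may return Bools rather than Ints):
julia> rand(Bernoulli(0.3), 20)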

Related

Clustering a boolean matrix in Matlab

Suppose we have a Boolean matrix such as the following:
0 0 1 0 0 1 0
1 1 0 0 1 0 0
0 0 0 0 0 1 1
0 0 0 0 0 1 0
0 0 0 0 0 1 1
interpreted this way: each row is a fruit and each column is a person. A '1' in position (i, j) indicates that person j would like to eat fruit i.
I would like to 'cluster' this matrix, creating sub-matrices that indicate subsets of people competing for subsets of fruit. In the example above I would like to see in output:
0 0 1 0 0 1 0
0 0 0 0 0 0 0
0 0 0 0 0 1 1
0 0 0 0 0 1 0
0 0 0 0 0 1 1
and
0 0 0 0 0 0 0
1 1 0 0 1 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
Is there a simple way to do this, for example, in Matlab?
Thanks.
The description is way too informal, and engineering something based on a single example is probably not a good idea.
However: if the example just shows a 2-partition (which is my interpretation), this can easily be achieved by:
- Create undirected graph G with one vertex for each row
- Iterate over all "N choose 2" row pairs (a nested i, j loop skipping symmetries)
- If the pair (rowA, rowB) shares some 1 in a column -> add edge (rowA, rowB) to G
- Compute all "connected components" of G
Any sane graph-lib will provide the primitives needed.
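For instance, here is a rough sketch of the idea in Julia (the question asks about Matlab, but the approach carries over directly; cluster_rows is just an illustrative name, and a tiny hand-rolled union-find stands in for a graph library):

# Rows are vertices; rows sharing a 1 in some column get linked;
# each connected component becomes one zero-padded submatrix.
function cluster_rows(M::AbstractMatrix{<:Integer})
    n = size(M, 1)
    parent = collect(1:n)                         # union-find over row indices
    root(i) = parent[i] == i ? i : (parent[i] = root(parent[i]))
    for i in 1:n-1, j in i+1:n
        if any(M[i, :] .* M[j, :] .> 0)           # rows i and j share a column with a 1
            parent[root(i)] = root(j)
        end
    end
    groups = Dict{Int,Vector{Int}}()
    for i in 1:n
        push!(get!(groups, root(i), Int[]), i)
    end
    return map(collect(values(groups))) do rows   # one submatrix per component
        S = zero(M)
        S[rows, :] = M[rows, :]
        S
    end
end

M = [0 0 1 0 0 1 0;
     1 1 0 0 1 0 0;
     0 0 0 0 0 1 1;
     0 0 0 0 0 1 0;
     0 0 0 0 0 1 1]
cluster_rows(M)   # one submatrix for rows {1,3,4,5}, one for row {2}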

How to apply not operator to all matrix elements in Julia?

I need to apply the "not" operator to a matrix of zeros and ones in Julia.
In Matlab I would do this:
A=not(B);
In Julia I tried doing this:
A = .~ B;
and
A = .! B;
It should turn zeros into ones and ones into zeros, but I either get an error or all the matrix elements become negative numbers that I didn't enter.
Thanks in advance!
The issue with A = .!B is that logical negation, !(::Int64), isn't defined for integers. This makes sense: What should, say, !3 reasonably give?
Since you want to perform a logical operation, is there a deeper reason why you are working with integers to begin with?
You could perhaps work with a BitArray instead which is vastly more efficient and should behave like a regular Array in most operations.
You can easily convert your integer matrix to a BitArray. Afterwards, applying a logical not works as expected.
julia> A = rand(0:1, 5,5)
5×5 Array{Int64,2}:
0 0 0 1 1
0 1 0 0 1
0 1 1 1 0
1 1 0 0 0
1 1 1 0 0
julia> B = BitArray(A)
5×5 BitArray{2}:
0 0 0 1 1
0 1 0 0 1
0 1 1 1 0
1 1 0 0 0
1 1 1 0 0
julia> .!B
5×5 BitArray{2}:
1 1 1 0 0
1 0 1 1 0
1 0 0 0 1
0 0 1 1 1
0 0 0 1 1
The crucial part here is that the element type (eltype) of a BitArray is Bool, for which negation is obviously well defined. In this sense, you could also use B = Bool.(A) to convert all the elements to booleans.
For a general solution for going from a matrix of numbers A to a boolean matrix with true values where there were nonzero entries and false values where there were zeros, you can do this:
julia> A = rand(0:3, 5, 5)
5×5 Array{Int64,2}:
1 0 1 0 3
2 0 1 1 0
2 1 1 3 1
1 0 3 0 3
1 3 3 1 2
julia> (!iszero).(A)
5×5 BitArray{2}:
1 0 1 0 1
1 0 1 1 0
1 1 1 1 1
1 0 1 0 1
1 1 1 1 1
To break down what's going on here:
iszero is a predicate that tests if a scalar value is zero
!iszero is a predicate that tests whether a scalar value is not zero
(!iszero).(A) broadcasts the !iszero function over the matrix A
This returns a BitArray with the desired pattern of zeros (falses) and ones (trues). Note that in an array context, false prints as 0 and true prints as 1 (they are numerically equal). You can also compare with the number 0 like this:
julia> A .!= 0
5×5 BitArray{2}:
1 0 1 0 1
1 0 1 1 0
1 1 1 1 1
1 0 1 0 1
1 1 1 1 1
You can also roll your own:
not(x) = (x |> Bool |> !) |> Float64
defines a method that converts x to a boolean, applies not, and then converts the result back to a number (a Float64 here). not.(A) will act element-wise on the array A. The |> operator pipes the output into the next function and works with broadcasting.
While not conceptually the cleanest, A = 1 .- B will do what you want. The problem with ~ is that it performs a bitwise not on integers, which produces negative numbers. Not sure what is wrong with ! except that the broadcast form should be .!B (which, as noted above, still requires boolean elements).
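To illustrate (a quick sketch; the exact REPL display may vary with the Julia version):
julia> B = [0 1 0; 1 0 1]
2×3 Array{Int64,2}:
0 1 0
1 0 1
julia> 1 .- B   # flips 0 and 1, stays an integer matrix
2×3 Array{Int64,2}:
1 0 1
0 1 0
julia> .~B   # bitwise not: ~0 == -1 and ~1 == -2, hence the negative numbers
2×3 Array{Int64,2}:
-1 -2 -1
-2 -1 -2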

APL find frequency of elements in a matrix

I have this piece of code
((⍳3)∘.+(⍳2))
which generates the following matrix
2 3
3 4
4 5
I want to find the occurrence of each unique element in the result, i.e., the occurrences of 2, 3, 4, and 5.
I tried using "∘.=" with the matrix itself and then reshaping so that the elements of each sub-matrix are transformed into a row,
using
6 6⍴ ((⍳3)∘.+(⍳2))∘.=((⍳3)∘.+(⍳2))
which gives the following result
1 0 0 0 0 0 for 2
0 1 1 0 0 0 for 3
0 1 1 0 0 0 for 3
0 0 0 1 1 0 for 4
0 0 0 1 1 0 for 4
0 0 0 0 0 1 for 5
As you can see, it still contains duplicate rows for repeated items, and I'm lost as of now.
Any help will be appreciated.
You should do ∘.= between the unique elements in the matrix and a flat vector of all elements, like:
m ← ((⍳3)∘.+(⍳2))
(∪,m) ∘.= ,m
1 0 0 0 0 0
0 1 1 0 0 0
0 0 0 1 1 0
0 0 0 0 0 1
Then just do +/ on it to get the frequencies of ∪,m
+/ (∪,m) ∘.= ,m
1 2 2 1
∪,m
2 3 4 5
(Tested on GNU APL.)
Dyalog APL version 14.0 has the ⌸ Key operator exactly for this; you just need to ravel your data:
{≢⍵}⌸ ,((⍳3)∘.+(⍳2))
1 2 2 1
You can even use the left argument of ⌸'s operand function to create a table:
{⍺,≢⍵}⌸ ,((⍳3)∘.+(⍳2))
2 1
3 2
4 2
5 1

How to create a symmetric matrix of 1's and 0's with constant row and column sum

I'm trying to find an elegant algorithm for creating an N x N matrix of 1's and 0's, under the restrictions:
each row and each column must sum to Q (to be picked freely)
the diagonal must be 0's
the matrix must be symmetrical.
It is not strictly necessary for the matrix to be random (both random and non-random solutions are interesting, however), so for Q even, simply making each row a circular shift of the vector
[0 1 1 0 ... 0 0 0 ... 0 1 1] (for Q=4)
is a valid solution.
However, how to do this for Q odd? Or how to do it for Q even, but in a random fashion?
For those curious, I'm trying to test some phenomena on abstract networks.
I apologize if this has already been answered before, but none of the questions I could find had the symmetric restriction, which seems to make it much more complicated. I don't have a proof that such a matrix always exists, but I do assume so.
The object that you're trying to construct is known more canonically as an undirected d-regular graph (where d = Q). By the handshaking theorem, N and Q cannot both be odd. If Q is even, then connect vertex v to v + k modulo N for k in {-Q/2, -Q/2 + 1, ..., -1, 1, ..., Q/2 - 1, Q/2}. If Q is odd, then N is even. Construct a (Q - 1)-regular graph as before and then add connections from v to v + N/2 modulo N.
If you want randomness, there's a Markov chain whose limiting distribution is uniform on d-regular graphs. You start with any d-regular graph. Repeatedly pick vertices v, w, x, y at random. Whenever the induced subgraph looks like
v----w
x----y ,
flip it to
v w
| |
x y .
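As a rough illustration, here is a sketch of the deterministic construction in Julia (regular_adjacency is just an illustrative name; vertices are numbered 0 to N-1):

# Connect vertex v to v ± k (mod N) for k = 1 .. Q÷2; if Q is odd (so N must be
# even), additionally connect v to v + N/2 (mod N).
function regular_adjacency(N::Int, Q::Int)
    @assert Q < N "need Q < N"
    @assert !(isodd(N) && isodd(Q)) "N and Q cannot both be odd"
    A = zeros(Int, N, N)
    for v in 0:N-1
        for k in 1:div(Q, 2)
            w = mod(v + k, N)
            A[v+1, w+1] = A[w+1, v+1] = 1
        end
        if isodd(Q)
            w = mod(v + div(N, 2), N)
            A[v+1, w+1] = A[w+1, v+1] = 1
        end
    end
    return A
end

A = regular_adjacency(10, 5)
A == A', all(sum(A, dims=1) .== 5), all(A[i, i] == 0 for i in 1:10)   # (true, true, true)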
You can follow your circular-shift algorithm whenever possible.
The only condition you need to satisfy when using the circular-shift algorithm is to keep the first row symmetric,
i.e., place Q 1's in the first row so that the entries Q[0,1] to Q[0,N-1] form a palindrome (assuming 0-indexed rows and columns, with Q[0,0] = 0); a simple example is 110010011.
Hence, for N = 10 and Q = 5, you can get many possible arrangements, such as:
0 1 0 0 1 1 1 0 0 1
1 0 1 0 0 1 1 1 0 0
0 1 0 1 0 0 1 1 1 0
0 0 1 0 1 0 0 1 1 1
1 0 0 1 0 1 0 0 1 1
1 1 0 0 1 0 1 0 0 1
1 1 1 0 0 1 0 1 0 0
0 1 1 1 0 0 1 0 1 0
0 0 1 1 1 0 0 1 0 1
1 0 0 1 1 1 0 0 1 0
or
0 1 1 0 0 1 0 0 1 1
1 0 1 1 0 0 1 0 0 1
1 1 0 1 1 0 0 1 0 0
0 1 1 0 1 1 0 0 1 0
0 0 1 1 0 1 1 0 0 1
1 0 0 1 1 0 1 1 0 0
0 1 0 0 1 1 0 1 1 0
0 0 1 0 0 1 1 0 1 1
1 0 0 1 0 0 1 1 0 1
1 1 0 0 1 0 0 1 1 0
But as you can see, for odd N (and hence even N-1) and odd Q there can't be any such symmetric arrangement. Hope it helped.
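For what it's worth, a small Julia sketch of this circulant idea (circulant_from_row is just an illustrative name):

# Row i is the first row shifted right by i; the matrix is symmetric exactly
# when entries 2 .. N of the first row read the same forwards and backwards.
function circulant_from_row(r::AbstractVector{<:Integer})
    N = length(r)
    return [r[mod(j - i, N) + 1] for i in 0:N-1, j in 0:N-1]
end

r = [0, 1, 1, 0, 0, 1, 0, 0, 1, 1]   # N = 10, Q = 5: leading 0, then the palindrome 110010011
M = circulant_from_row(r)
M == M', all(sum(M, dims=2) .== 5)   # (true, true)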

How can I find a solution of binary matrix equation AX = B?

Given an m*n binary matrix A and an m*p binary matrix B, where n > m, what is an efficient algorithm to compute an n*p matrix X such that AX = B?
For example:
A =
1 1 0 0 1 1 0 1 0 0
1 1 0 0 1 0 1 0 0 1
0 1 1 0 1 0 1 0 1 0
1 1 1 1 1 0 0 1 1 0
0 1 1 0 1 0 1 1 1 0
B =
0 1 0 1 1 0 1 1 0 1 0 0 1 0
0 0 1 0 1 1 0 0 0 1 0 1 0 0
0 1 1 0 0 0 1 1 0 0 1 1 0 0
0 0 1 1 1 1 0 0 0 1 1 0 0 0
1 0 0 1 0 0 1 0 1 0 0 1 1 0
Note, when I say binary matrix I mean matrix defined over the field Z_2, that is, where all arithmetic is mod 2.
If it is of any interest, this is a problem I am facing in generating suitable matrices for a random error correction code.
You can do it with row reduction: place B to the right of A, then swap rows (in the whole augmented matrix) to get a 1 in row 0, column 0; then XOR that row into any other row that has a '1' in column 0, so that column 0 contains only a single 1. Then move to the next column; if entry [1,1] is zero, swap row 1 with a later row that has a 1 there, then XOR rows to make it the only 1 in the column. Assuming A is a square matrix and a solution exists, you will eventually have converted A to the identity, and B will have been replaced by the solution of AX = B.
If n > m, you have a system with more unknowns than equations, so you can solve for some of the unknowns and set the others to zero. During the row reduction, if a column has no '1' available (below the rows already reduced), you can skip that column and set the corresponding unknown to zero (this can happen at most n - m times).
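For what it's worth, here is a rough Julia sketch of that procedure over GF(2) (solve_gf2 is just an illustrative name; free unknowns are set to zero as described):

# Row-reduce the augmented matrix [A | B] over GF(2) (XOR instead of subtraction),
# then read the solution off the pivot columns; non-pivot unknowns stay zero.
function solve_gf2(A::Matrix{Int}, B::Matrix{Int})
    m, n = size(A)
    p = size(B, 2)
    M = mod.(hcat(A, B), 2)
    pivots = Int[]
    row = 1
    for col in 1:n
        row > m && break
        r = findfirst(i -> M[i, col] == 1, row:m)   # a row at or below `row` with a 1 here
        r === nothing && continue                   # no pivot: this unknown stays free (zero)
        r = row + r - 1
        M[[row, r], :] = M[[r, row], :]             # swap it into the pivot position
        for i in 1:m
            if i != row && M[i, col] == 1
                M[i, :] .⊻= M[row, :]               # XOR the pivot row into every other row with a 1
            end
        end
        push!(pivots, col)
        row += 1
    end
    for i in row:m                                  # leftover rows must be consistent
        any(M[i, n+1:end] .== 1) && error("no solution")
    end
    X = zeros(Int, n, p)
    for (k, col) in enumerate(pivots)
        X[col, :] = M[k, n+1:end]
    end
    return X
end

# With A and B entered as Julia matrices from the question, one would expect:
# X = solve_gf2(A, B); mod.(A * X, 2) == B   # true if a solution exists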
