Julia: How to execute functions in parallel?

I want to run functions in parallel. These functions are executed many times in a loop.
coordSys = SharedArray{Bool}([true,false,true,true]);
dir = SharedArray{Int8}([1,2,3,2]);
load = SharedArray{Float64}([8,-7.5,7,-8.5]);
L = SharedArray{Float64}([400,450,600,500]);
r = SharedArray{Float64}([0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0
0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0
0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0
0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0]);
Obviously these vectors will be huge, but for simplicity I just put this limited size.
Operation without parallel computing:
function unifLoad(coordSys,dir,load,L,ri)
    if coordSys == true
        if dir == 1
            Q = [load;0;0];
        elseif dir == 2
            Q = [0;load;0];
        elseif dir == 3
            Q = [0;0;load];
        end
        q = ri*Q; # matrix multiplication
        P = q[1]*L/2;
        V = q[2]*L/2;
        M = -q[3]*L*L/12;
        f = [P;V;M];
    else
        f = [1.0;1.0;1.0];
    end
    return f
end
running the loop:
var = zeros(12)
for i = 1:length(L)
    var[3*(i-1)+1:3*i] = unifLoad(coordSys[i],dir[i],load[i],L[i],r[3*(i-1)+1:3*i,:]);
end
The returned value is:
var
12-element Array{Float64,1}:
0.0
0.0
-1.06667e5
1.0
1.0
1.0
2100.0
0.0
-0.0
0.0
2125.0
-0.0
Operation with parallel computing
I've been trying to implement the same function in parallel, but without getting the same results.
# addprocs(3)
@everywhere function unifLoad_Parallel(coordSys,dir,load,L,ri)
    if coordSys == true
        if dir == 1
            Q = [load;0;0];
        elseif dir == 2
            Q = [0;load;0];
        elseif dir == 3
            Q = [0;0;load];
        end
        q = ri*Q; # Matrix multiplication (ri -> Array 3x3)
        P = q[1]*L/2;
        V = q[2]*L/2;
        M = -q[3]*L*L/12;
        f = [P;V;M];
    else
        f = [1.0;1.0;1.0];
    end
    return f
end
running the parallel loop:
var_parallel = SharedArray{Float64}(12);
@parallel for i = 1:length(L)
    var_parallel[3*(i-1)+1:3*i] = unifLoad_Parallel(coordSys[i],dir[i],load[i],L[i],r[3*(i-1)+1:3*i,:]);
end
The returned value is:
var_parallel
12-element SharedArray{Float64,1}:
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0

On my Julia 0.6.3 the parallel code returns the same result, so I am unable to reproduce the problem (I also do not encounter the issue @SalchiPapa reports).
However, I would like to note that this code should actually run faster with threads (I assume the real problem is much larger). Here is the code you could use (I used an implementation equivalent to yours that is a bit shorter, but the only significantly relevant change is that I wrap it in a function, which provides dramatic performance gains). The crucial point is that all arrays except var are shared but only read, and var is written only once at each entry and never read from. This is the case where it is safe to use threading, which has lower overhead.
Here is an example (you have to define the JULIA_NUM_THREADS environment variable before starting Julia and set it to the number of threads you want; most probably 4 is what you want):
using Base.Threads
function experiment()
    coordSys = [true,false,true,true];
    dir = [1,2,3,2];
    load = [8,-7.5,7,-8.5];
    L = [400,450,600,500];
    r = [0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0
         0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0
         0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0
         0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0];
    unifLoad(coordSys,dir,load,L,r, i) =
        coordSys ? load * L * r[3*(i-1)+1:3*i, dir] .* [0.5, 0.5, -L/12] : [1.0, 1.0, 1.0]
    var = zeros(12)
    @threads for i = 1:length(L)
        var[3*(i-1)+1:3*i] = unifLoad(coordSys[i],dir[i],load[i],L[i],r,i);
    end
    var
end
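For reference, a hypothetical session sketch (assuming Julia was started with the environment variable set to 4 threads):
# in the shell, before launching Julia:
# export JULIA_NUM_THREADS=4
julia> Threads.nthreads()
4

julia> experiment()  # should return the same 12-element vector as the serial var above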
Also, here is a slightly simplified version for multi-process parallel computing using similar ideas:
coordSys = SharedArray{Bool}([true,false,true,true]);
dir = SharedArray{Int8}([1,2,3,2]);
load = SharedArray{Float64}([8,-7.5,7,-8.5]);
L = SharedArray{Float64}([400,450,600,500]);
r = SharedArray{Float64}([0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0
0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0
0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0
0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0]);
@everywhere unifLoad(coordSys,dir,load,L,r,i) =
    coordSys ? load * L * r[3*(i-1)+1:3*i, dir] .* [0.5, 0.5, -L/12] : [1.0, 1.0, 1.0]
vcat(pmap(i -> unifLoad(coordSys[i],dir[i],load[i],L[i],r,i), 1:length(L))...)
Here pmap is mostly used to simplify the code so that you do not need @sync.
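For comparison, a rough sketch of the same computation kept on a SharedArray without pmap (reusing the SharedArrays and the @everywhere unifLoad defined above; the explicit @sync is what makes the main process wait for the workers):
var_parallel = SharedArray{Float64}(12);
@sync @parallel for i = 1:length(L)
    var_parallel[3*(i-1)+1:3*i] = unifLoad(coordSys[i],dir[i],load[i],L[i],r,i);
end
var_parallel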

Related

ARIMA prediction always gives zero values in Python

I'm trying to predict the stock market using ARIMA, but the predicted values are always zero when the differencing order is greater than zero. How should I solve this problem?
Another question: when using the forecast function instead of the predict function it gives me a different result.
This is my code:
from statsmodels.tsa.arima.model import ARIMA
from sklearn.metrics import mean_absolute_error
import pandas as pd
df = pd.read_csv('D:\SBIN.csv', names = ['Date','Price'], header = 0, index_col = 0)
#series=pd.DataFrame(df)
print(len(df))
X = df.values
size = int(len(X) * 0.955)
print(size)
train = df[:size]
print(len(train))
test = df[size:]
history = [x for x in train.Price]
print(history)
predictions = list()
for t in range(len(test)):
    history_df = pd.DataFrame(history)
    model = ARIMA(history_df.astype(float), order=(1,1,1))
    model_fit = model.fit()
    print(model_fit.summary())
    predicted = model_fit.predict(start=0,end=0)
    yhat = predicted
    predictions.append(yhat)
    history.append(yhat)
predict=pd.DataFrame(predictions,index = test.index)
print(predict)
On other websites and in papers they use history.append(test[i]) instead of history.append(yhat), but I think the second should be used, since we should fit the model on the predicted values, not the originals, to predict the future.
The output of the predict function is:
Date
12/17/2015 0.0
12/18/2015 0.0
12/21/2015 0.0
12/22/2015 0.0
12/23/2015 0.0
12/24/2015 0.0
12/28/2015 0.0
12/29/2015 0.0
12/30/2015 0.0
12/31/2015 0.0
1/1/2016 0.0
1/4/2016 0.0
1/5/2016 0.0
1/6/2016 0.0
1/7/2016 0.0
1/8/2016 0.0
1/11/2016 0.0
1/12/2016 0.0
1/13/2016 0.0
1/14/2016 0.0
1/15/2016 0.0
1/18/2016 0.0
1/19/2016 0.0
while the output of forecast is:
12/17/2015 227.398678
12/18/2015 227.343661
12/21/2015 227.374336
12/22/2015 227.357233
12/23/2015 227.366769
12/24/2015 227.361453
12/28/2015 227.364416
12/29/2015 227.362761
12/30/2015 227.363684
12/31/2015 227.363169
1/1/2016 227.363456
1/4/2016 227.363296
1/5/2016 227.363385
1/6/2016 227.363336
1/7/2016 227.363363
1/8/2016 227.363348
1/11/2016 227.363357
1/12/2016 227.363352
1/13/2016 227.363354
1/14/2016 227.363353
1/15/2016 227.363354
1/18/2016 227.363353
1/19/2016 227.363354

How to convert sparse matrix to dense matrix in Julia

How do you convert a sparse matrix to a dense matrix in Julia? According to this, I should be able to use full or Matrix; however, full is evidently not available in the SparseArrays module, and when I try to use Matrix:
using SparseArrays
I = []
J = []
A = []
for i in 1:3
    push!(I, i)
    push!(J, i^2)
    push!(A, sqrt(i))
end
sarr = sparse(I, J, A, 10, 10)
arr = Matrix(sarr)
I get this error:
Exception has occurred: MethodError
MethodError: no method matching zero(::Type{Any})
It is enough to do collect(sarr) or Matrix(sarr).
Note, however, that your code uses untyped containers, which is not recommended. Indices in arrays are Ints, so it should be:
I = Int[]
J = Int[]
A = Float64[]
for i in 1:3
    push!(I, i)
    push!(J, i^2)
    push!(A, sqrt(i))
end
sarr = sparse(I, J, A, 10, 10)
Now you can do:
julia> collect(sarr)
10×10 Matrix{Float64}:
1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 1.41421 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.73205 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
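With the typed containers in place, Matrix(sarr) should now work as well and agree with collect, e.g.:
julia> Matrix(sarr) == collect(sarr)
true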

Julia Parallel Distributed

I'm trying to run this code, but why am I getting these two rows of zeros in the middle? Can someone help me get that fixed, please?
using Distributed # load the library for parallel programming
addprocs(2)
@everywhere using LinearAlgebra # load the LinearAlgebra library
@everywhere using DistributedArrays # load the DistributedArrays library
@everywhere T = (zeros(n,n))
T[:,1] .= 10 # boundary condition T_left = 10
T[:,end] .= 10 # boundary condition T_right = 10
T = distribute(T; dist=(2,1))
@everywhere maxit = 100 # maximum number of iterations
@everywhere function Poissons_2D(T)
    for w in 1:maxit
        @sync @distributed for p in 1:nworkers()
            for i in 2:length(localindices(T)[1])-1
                for j in 2:length(localindices(T)[2])-1
                    localpart(T)[i,j] = (1/4 * (localpart(T)[i-1,j] + localpart(T)[i+1,j] + localpart(T)[i,j-1] + localpart(T)[i,j+1]))
                end
            end
        end
    end
    return T
end
Poissons_2D(T)
10×10 DArray{Float64,2,Array{Float64,2}}:
10.0 0.0 0.0 0.0 … 0.0 0.0 0.0 10.0
10.0 4.33779 2.00971 1.01077 1.01077 2.00971 4.33779 10.0
10.0 5.34146 2.69026 1.40017 1.40017 2.69026 5.34146 10.0
10.0 4.33779 2.00971 1.01077 1.01077 2.00971 4.33779 10.0
10.0 0.0 0.0 0.0 0.0 0.0 0.0 10.0
10.0 0.0 0.0 0.0 … 0.0 0.0 0.0 10.0
10.0 4.33779 2.00971 1.01077 1.01077 2.00971 4.33779 10.0
10.0 5.34146 2.69026 1.40017 1.40017 2.69026 5.34146 10.0
10.0 4.33779 2.00971 1.01077 1.01077 2.00971 4.33779 10.0
10.0 0.0 0.0 0.0 0.0 0.0 0.0 10.0
The first cleanup could look like this:
a =(zeros(10,10))
a[:,[1,end]] .= 10
a = distribute(a; dist=(nworkers(),1))
function Poissons_2D(a::DArray, maxit::Int=100)
for w in 1:maxit
#sync #distributed for p in 1:nworkers()
local_a = localpart(a)
local_ind = localindices(a)
for iix in 1:length(local_ind[1])
i = local_ind[1][iix]
(i==1 || i==size(a,1)) && continue
for j in local_ind[2][2:end-1]
local_a[iix,j] = (1/4 * (a[i-1,j] + a[i+1,j] + a[i,j-1] + a[i,j+1]))
end
end
end
end
a
end
Some remarks:
Do not use @everywhere in front of T - you do not want to define it on all workers.
In Julia, by convention, T is used to denote parametric types, so use a, or some T-like LaTeX symbol, instead.
However, your function takes values from all adjacent cells to calculate the new values, and I do not know how you plan to handle the situation when a value does not exist yet.
In particular, if each row requires values from the previous row and previous column, it is not possible to parallelize this computation at all (because you need to wait for the previous value to get the next one). A Jacobi-style alternative that avoids this ordering problem is sketched after the output below.
julia> Poissons_2D(a)
10×10 DArray{Float64,2,Array{Float64,2}}:
10.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 10.0
10.0 4.99998 3.05213 2.20861 1.87565 1.87565 2.20862 3.05214 4.99999 10.0
10.0 6.9478 4.99994 3.90669 3.41834 3.41834 3.9067 4.99995 6.94781 10.0
10.0 7.7913 6.09315 4.99989 4.47269 4.4727 4.99991 6.09317 7.79131 10.0
10.0 8.12425 6.58148 5.52707 4.99987 4.99988 5.52709 6.58151 8.12427 10.0
10.0 8.12425 6.58148 5.52707 4.99987 4.99988 5.52709 6.58151 8.12427 10.0
10.0 7.7913 6.09316 4.99991 4.47271 4.47271 4.99992 6.09317 7.79131 10.0
10.0 6.94781 4.99995 3.90671 3.41835 3.41836 3.90672 4.99996 6.94782 10.0
10.0 4.99999 3.05214 2.20862 1.87566 1.87566 2.20863 3.05215 4.99999 10.0
10.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 10.0
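If the intent is a Jacobi-type iteration (each sweep reads only the previous state), one common way around the ordering problem mentioned above is to write into a separate buffer. A minimal serial sketch of that idea (my own illustration, not part of the original answers):
function jacobi_2d(a::Matrix{Float64}, maxit::Int=100)
    new_a = copy(a)             # boundary values are preserved in both buffers
    for w in 1:maxit
        for j in 2:size(a,2)-1, i in 2:size(a,1)-1
            new_a[i,j] = (a[i-1,j] + a[i+1,j] + a[i,j-1] + a[i,j+1]) / 4
        end
        a, new_a = new_a, a     # swap so `a` always holds the newest state
    end
    return a
end
a = zeros(10,10); a[:,[1,end]] .= 10
jacobi_2d(a)
Because each sweep only reads the previous buffer, the inner loops could be distributed or threaded without the missing-neighbor problem.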
I think the problem is the ranges of the for loops over i and j. Your range is from 2 to N-1, avoiding the extremes. That is right, because you are missing the information needed to calculate them - it is stored in a different process. However, you need to transfer that boundary information. In MPI, for instance, you could send redundant information to avoid that, but in Distributed I am not sure. I see the cause, but the solution is not easy. At least I hope to have helped a little.

Parallel Computing with Julia @parallel and SharedArray

I have been trying to implement some parallel programming in Julia using @parallel and SharedArrays.
Xi = Array{Float64}([0.0, 450.0, 450.0, 0.0, 0.0, 450.0, 450.0, 0.0])
Yi = Array{Float64}([0.0, 0.0, 600.0, 600.0, 0.0, 0.0, 600.0, 600.0])
Zi = Array{Float64}([0.0, 0.0, 0.0, 0.0, 400.0, 400.0, 400.0, 400.0])
Xj = Array{Float64}([0.0, 450.0, 450.0, 0.0, 0.0, 450.0, 450.0, 0.0])
Yj = Array{Float64}([0.0, 0.0, 600.0, 600.0, 0.0, 0.0, 600.0, 600.0])
Zj = Array{Float64}([0.0, 0.0, 0.0, 0.0, 400.0, 400.0, 400.0, 400.0])
L = Array{Float64}([400.0, 400.0, 400.0, 400.0, 450.0, 600.0, 450.0, 600.0])
Rot = Array{Float64}([90.0, 90.0, 90.0, 90.0, 0.0, 0.0, 0.0, 0.0])
Obviously these vectors will be huge, but for simplicity I just put this limited size.
This is the operation without parallel computing:
function jt_transcoord(Xi, Yi, Zi, Xj, Yj, Zj, Rot, L)
    r = Vector(length(Xi))
    for i in 1:length(Xi)
        rxX = (Xj[i] - Xi[i]) / L[i]
        rxY = (Yj[i] - Yi[i]) / L[i]
        rxZ = (Zj[i] - Zi[i]) / L[i]
        if rxX == 0 && rxY == 0
            r[i] = [0 0 rxZ; cosd(Rot[i]) -rxZ*sind(Rot[i]) 0; sind(Rot[i]) rxZ*cosd(Rot[i]) 0]
        else
            R=sqrt(rxX^2+rxY^2)
            r21=(-rxX*rxZ*cosd(Rot[i])+rxY*sind(Rot[i]))/R
            r22=(-rxY*rxZ*cosd(Rot[i])-rxX*sind(Rot[i]))/R
            r23=R*cosd(Rot[i])
            r31=(rxX*rxZ*sind(Rot[i])+rxY*cosd(Rot[i]))/R
            r32=(rxY*rxZ*sind(Rot[i])-rxX*cosd(Rot[i]))/R
            r33=-R*sind(Rot[i])
            r[i] = [rxX rxY rxZ;r21 r22 r23;r31 r32 r33]
        end
    end
    return r
end
The returned value is basically an array that contains a matrix in each vector row. That looks something like this:
r =
[[0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0],
[0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0],
[0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0],
[0.0 0.0 1.0; 0.0 -1.0 0.0; 1.0 0.0 0.0],
[1.0 0.0 0.0; 0.0 -0.0 1.0; 0.0 -1.0 -0.0],
[0.0 1.0 0.0; 0.0 -0.0 1.0; 1.0 0.0 -0.0],
[-1.0 0.0 0.0; 0.0 0.0 1.0; 0.0 1.0 -0.0],
[0.0 -1.0 0.0; -0.0 0.0 1.0; -1.0 -0.0 -0.0]]
This is my function using @parallel. First of all I need to convert the vectors to SharedArrays:
Xi = convert(SharedArray, Xi)
Yi = convert(SharedArray, Yi)
Zi = convert(SharedArray, Zi)
Xj = convert(SharedArray, Xj)
Yj = convert(SharedArray, Yj)
Zj = convert(SharedArray, Zj)
L = convert(SharedArray, L)
Rot = convert(SharedArray, Rot)
This is the same code but using @parallel:
function jt_transcoord_parallel(Xi, Yi, Zi, Xj, Yj, Zj, Rot, L)
    r = SharedArray{Float64}(zeros((length(Xi),1)))
    @parallel for i in 1:length(Xi)
        rxX = (Xj[i] - Xi[i]) / L[i]
        rxY = (Yj[i] - Yi[i]) / L[i]
        rxZ = (Zj[i] - Zi[i]) / L[i]
        if rxX == 0 && rxY == 0
            r[i] = [0 0 rxZ; cosd(Rot[i]) -rxZ*sind(Rot[i]) 0; sind(Rot[i]) rxZ*cosd(Rot[i]) 0]
        else
            R=sqrt(rxX^2+rxY^2)
            r21=(-rxX*rxZ*cosd(Rot[i])+rxY*sind(Rot[i]))/R
            r22=(-rxY*rxZ*cosd(Rot[i])-rxX*sind(Rot[i]))/R
            r23=R*cosd(Rot[i])
            r31=(rxX*rxZ*sind(Rot[i])+rxY*cosd(Rot[i]))/R
            r32=(rxY*rxZ*sind(Rot[i])-rxX*cosd(Rot[i]))/R
            r33=-R*sind(Rot[i])
            r[i] = [rxX rxY rxZ;r21 r22 r23;r31 r32 r33]
        end
    end
    return r
end
I just got a vector of zeros. My question is: Is there a way to implement this function using #parallel in Julia and get the same results that I got in my original function?
The functions jt_transcoord and jt_transcoord_parallel have major coding flaws.
In jt_transcoord, you are assigning an array to a vector element position. For example, you write r = Vector(length(Xi)) and then assign r[i] = [rxX rxY rxZ;r21 r22 r23;r31 r32 r33]. But r[i] should be a number, and you instead assign it a 3x3 matrix. I suspect that Julia is quietly changing types for you.
SharedArray objects will not admit this lax type conversion behavior. The components of a SharedArray must be of a single primitive type such as Float64, and Vector{Matrix} is not a primitive type. Open a Julia v0.6 REPL and copy/paste the following code:
r = SharedArray{Float64}(length(Xi))
for i in 1:length(Xi)
    rxX = (Xj[i] - Xi[i]) / L[i]
    rxY = (Yj[i] - Yi[i]) / L[i]
    rxZ = (Zj[i] - Zi[i]) / L[i]
    if rxX == 0 && rxY == 0
        r[i] = [0 0 rxZ; cosd(Rot[i]) -rxZ*sind(Rot[i]) 0; sind(Rot[i]) rxZ*cosd(Rot[i]) 0]
    else
        R = sqrt(rxX^2+rxY^2)
        r21 = (-rxX*rxZ*cosd(Rot[i])+rxY*sind(Rot[i]))/R
        r22 = (-rxY*rxZ*cosd(Rot[i])-rxX*sind(Rot[i]))/R
        r23 = R*cosd(Rot[i])
        r31 = (rxX*rxZ*sind(Rot[i])+rxY*cosd(Rot[i]))/R
        r32 = (rxY*rxZ*sind(Rot[i])-rxX*cosd(Rot[i]))/R
        r33 = -R*sind(Rot[i])
        r[i] = [rxX rxY rxZ;r21 r22 r23;r31 r32 r33]
    end
end
On my end, I get:
ERROR: MethodError: Cannot `convert` an object of type Array{Float64,2} to an object of type Float64
This may have arisen from a call to the constructor Float64(...),
since type constructors fall back to convert methods.
Stacktrace:
[1] setindex!(::SharedArray{Float64,2}, ::Array{Float64,2}, ::Int64) at ./sharedarray.jl:483
[2] macro expansion at ./REPL[26]:6 [inlined]
[3] anonymous at ./<missing>:?
Essentially, Julia is telling you that it cannot assign a matrix to a SharedArray vector.
What are your options?
If you insist on having a Vector{Matrix} return type, then use r = Vector{Matrix{Float64}}(length(Xi)) in jt_transcoord. But you cannot use SharedArrays for this since Vector{Matrix} is not an admissible primitive type.
Alternatively, if you are willing to operate with tensors (i.e. 3-way arrays) then you can use pseudocode A below. But SharedArray computing will only help you if you carefully account for which process owns which portion of the tensor. Otherwise, the processes will need to communicate with each other, and your parallelized function could execute very slowly.
If you are willing to lay your 3x3 matrices in a 3n x 3 columnwise fashion, then you can use pseudocode B below.
Pseudocode A
function jt_transcoord_tensor(Xi, Yi, Zi, Xj, Yj, Zj, Rot, L)
    # initialize array
    r = Array{Float64}(3,3,length(Xi))
    # r = SharedArray{Float64,3}((3,3,length(Xi))) # for SharedArrays
    for i in 1:length(Xi)
    # @parallel for i in 1:length(Xi) # for SharedArrays
        # other code...
        r[:,:,i] = [0 0 rxZ; cosd(Rot[i]) -rxZ*sind(Rot[i]) 0; sind(Rot[i]) rxZ*cosd(Rot[i]) 0]
        # other code...
        r[:,:,i] = [rxX rxY rxZ;r21 r22 r23;r31 r32 r33]
    end
    return r
end
Pseudocode B
function jt_transcoord_parallel(Xi, Yi, Zi, Xj, Yj, Zj, Rot, L)
    n = length(Xi)
    r = SharedArray{Float64}((3*n,3))
    @parallel for i in 1:length(Xi)
        # other code...
        r[(3*(i-1)+1):(3*i),:] = [0 0 rxZ; cosd(Rot[i]) -rxZ*sind(Rot[i]) 0; sind(Rot[i]) rxZ*cosd(Rot[i]) 0]
        # other code...
        r[(3*(i-1)+1):(3*i),:] = [rxX rxY rxZ;r21 r22 r23;r31 r32 r33]
    end
    return r
end
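One additional caveat: on Julia 0.6, @parallel without a reduction returns immediately and runs asynchronously, so reading the SharedArray right after the loop can still show all zeros. Wrapping the loop in @sync avoids this; a tiny self-contained demo (hypothetical variable s, assuming workers were already added):
s = SharedArray{Float64}(8)
@sync @parallel for i in 1:8
    s[i] = i^2
end
s  # guaranteed to be filled only after the @sync block completes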

Create Image from Text File RGB Data in Matlab

I have a text file with RGB data in the form of:
[Pixel 0,0] [Pixel 1,0] [Pixel 2,0]...
[Pixel 0,1] [Pixel 1,1] [Pixel 2,2]...
...
With an input of:
0.0 0.0 0.0 <-- this would be Pixel 0,0
1.0 0.0 0.0
1.0 0.9 0.0
I can create the flag of Germany in size 3x1 with:
%load the data to myData
Germany = reshape(myData,3,1,3);
image(Germany)
The 1px-wide pattern works well, as shown in the picture; however, the goal is to be able to create multiple patterns, e.g. the Germany flag in 3x3 followed by the Romania flag in 3x3, or any other pattern of any length, and that is where I cannot find the proper way to reshape the matrix.
The input that should create the second example shown in picture is this:
|========= Germany Flag ==========| [ Blue ] [ Yellow ] [ Red ]
Black -> 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.9 0.0 1.0 0.0 0.0
Red -> 1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0 1.0 0.9 0.0 1.0 0.0 0.0
Yellow-> 1.0 0.9 0.0 1.0 0.9 0.0 1.0 0.9 0.0 0.0 0.0 1.0 1.0 0.9 0.0 1.0 0.0 0.0
Any help is appreciated.
Update: As asked by Marcin, the input files are literally as I explained above.
This is the content of the GermanyRomania.txt file:
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.9 0.0 1.0 0.0 0.0
1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0 1.0 0.9 0.0 1.0 0.0 0.0
1.0 0.9 0.0 1.0 0.9 0.0 1.0 0.9 0.0 0.0 0.0 1.0 1.0 0.9 0.0 1.0 0.0 0.0
With that file I must create the 2nd pattern in the picture (Germany + Romania flag); it contains ALL the RGB info required to do it.
I don't think you can achieve what you want by simply using the reshape function.
We must take into account that Matlab stores matrices in column-major order (you can read more about it here).
Therefore, before we can use the reshape function, we must have the data matrix in the following format:
[Pixel 0,0]
[Pixel 0,1]
...
[Pixel 1,0]
[Pixel 1,1]
...
[Pixel n,n]
Here's a possible solution:
% data stores the input
height = size(data, 1)
width = size(data, 2)
vertical_data_cell = mat2cell(data, height, 3 * ones(1, width / 3))'
vertical_data = cell2mat(vertical_data_cell)
flags = reshape(vertical_data, height, width / 3, 3)
image(flags)
Note that we make the matrix transformation on lines 4 and 5 (the mat2cell and cell2mat calls).
And here is the result for the input you provided:
It also works with different heights.
Here's the input for the flags of Germany, Argentina and Portugal.
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.46 0.66 0.85 0.46 0.66 0.85 0.46 0.66 0.85
1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 1.0 1.0 0.98 0.75 0.29 1.0 1.0 1.0
1.0 0.9 0.0 1.0 0.9 0.0 1.0 0.9 0.0 0.46 0.66 0.85 0.46 0.66 0.85 0.46 0.66 0.85
0.0 1.0 0.0 0.0 1.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0
0.0 1.0 0.0 1.0 0.9 0.0 1.0 0.9 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0
0.0 1.0 0.0 0.0 1.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0
And this is the result:
