I am trying to think of / find an algorithm that will allow interleaving on an array whose size isn't a power of two. The current method that I am using takes the array size, finds the square root (n), and creates an n x n matrix. The rows and columns are then exchanged, and the matrix is read back out to an array.
I'm trying to find some sort of indexing system that is flexible with any input array size, still gives a decent distribution of the data, and allows reconstruction of the original array. I have provided example code explicitly showing how the n x n interleaver works.
import numpy as np
import math

N = 75
input_stream = np.random.randint(0, 101, size=N)  # random_integers is removed in modern NumPy

#############
# Interleaver
#############
mat_dim = int(math.sqrt(len(input_stream)))
interleave_mat = np.zeros((mat_dim, mat_dim), dtype=int)
interleave_out = np.zeros(mat_dim**2, dtype=int)
for i in range(0, mat_dim):
    for j in range(0, mat_dim):
        interleave_mat[i][j] = input_stream[i*mat_dim + j]
for i in range(0, mat_dim):
    for j in range(0, mat_dim):
        interleave_out[i*mat_dim + j] = interleave_mat[j][i]

################
# De-Interleaver
################
deinterleave_mat = np.zeros((mat_dim, mat_dim), dtype=int)
deinterleave_out = np.zeros(mat_dim**2, dtype=int)
for i in range(0, mat_dim):
    for j in range(0, mat_dim):
        deinterleave_mat[i][j] = interleave_out[i*mat_dim + j]
for i in range(0, mat_dim):
    for j in range(0, mat_dim):
        deinterleave_out[i*mat_dim + j] = deinterleave_mat[j][i]
output_stream = deinterleave_out

# trailing elements beyond mat_dim**2 never make it through, so count them as errors
error_count = sum(1 for a, b in zip(input_stream, output_stream) if a != b)
if len(input_stream) > len(output_stream):
    error_count += len(input_stream) - len(output_stream)
print("Number of errors: {}".format(error_count))
print("input stream: {}".format(input_stream))
print("output stream: {}".format(output_stream))
Finally I have working code which detects the corners of the rectangles in an image. But the problem is that the code detects multiple points at the same corner. Now I am trying to introduce non-maximum suppression in my code, but it is not working. I have tried one previous suggestion, but it is also not working. How do I carry out this non-maximum suppression properly?
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as im
from scipy import ndimage

# 1. Before doing any operations convert the image into a grayscale image
img = im.imread('OD6.jpg')
plt.imshow(img)
plt.show()
# split
R = img[:, :, 0]
G = img[:, :, 1]
B = img[:, :, 2]
M, N = R.shape
gray_img = np.zeros((M, N), dtype=int)
for i in range(M):
    for j in range(N):
        gray_img[i, j] = (R[i, j]*0.2989) + (G[i, j]*0.5870) + (B[i, j]*0.114)
plt.imshow(gray_img, cmap='gray')
plt.show()

# 2. Apply the Sobel filter to find the gradients in the x and y directions
# respectively, and remove noise using a Gaussian filter with sigma=1
imarr = np.asarray(gray_img, dtype=np.float64)
ix = ndimage.sobel(imarr, 0)
iy = ndimage.sobel(imarr, 1)
ix2 = ix * ix
iy2 = iy * iy
ixy = ix * iy
ix2 = ndimage.gaussian_filter(ix2, sigma=1)
iy2 = ndimage.gaussian_filter(iy2, sigma=1)
ixy = ndimage.gaussian_filter(ixy, sigma=1)
c, l = imarr.shape
result = np.zeros((c, l))
r = np.zeros((c, l))
rmax = 0  # maximum value of the Harris response
for i in range(c):
    for j in range(l):
        m = np.array([[ix2[i, j], ixy[i, j]], [ixy[i, j], iy2[i, j]]], dtype=np.float64)
        r[i, j] = np.linalg.det(m) - 0.04 * (np.power(np.trace(m), 2))
        if r[i, j] > rmax:
            rmax = r[i, j]

# 3. Apply non-maximum suppression
for i in range(c - 1):
    for j in range(l - 1):
        if r[i, j] > 0.01 * rmax and r[i, j] > r[i-1, j-1] and r[i, j] > r[i-1, j+1]\
                and r[i, j] > r[i+1, j-1] and r[i, j] > r[i+1, j+1]:
            result[i, j] = 1

xy_coords = np.flip(np.column_stack(np.where(result == 1)), axis=1)
print(xy_coords)
pc, pr = np.where(result == 1)
plt.plot(pr, pc, "b.")
plt.imshow(img, 'gray')
plt.show()
There is lots of material available on corner detection. This has also been solved on Stack Overflow before; please see here.
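In short, a common way to keep a single point per corner is to accept a pixel only if it equals the maximum of the Harris response within a window around it, which scipy.ndimage.maximum_filter computes in one call. A minimal sketch against the r and rmax computed above (the window size is an illustrative choice):
from scipy import ndimage

win = 10                                    # suppression window, in pixels
local_max = ndimage.maximum_filter(r, size=win)
result = np.zeros_like(r)
# keep only pixels that are both a local maximum and above the global threshold
result[(r == local_max) & (r > 0.01 * rmax)] = 1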
I have an image which consists of rectangles and squares, and I am interested in distinguishing which shapes are rectangles and which are squares. I have used the Harris corner detection algorithm to extract the corner points, and using these corner points I am able to extract the indices of the corner pixels. The next task is to differentiate the rectangles from the squares. I know the condition for a square: height = width. Using this information I want to carry out the differentiation.
(The code is the same Harris corner detection script shown in the previous question, so it is not repeated here.)
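For the differentiation step itself, here is a minimal sketch, assuming the four corner points of one shape have already been grouped together (the code above does not yet do this) and that the shapes are axis-aligned:
import numpy as np

def classify_shape(corners, tol=0.1):
    # corners: (4, 2) array of (x, y) corner coordinates of one shape
    corners = np.asarray(corners, dtype=float)
    width = corners[:, 0].max() - corners[:, 0].min()
    height = corners[:, 1].max() - corners[:, 1].min()
    # a square is a rectangle whose sides match within a relative tolerance
    if abs(width - height) <= tol * max(width, height):
        return 'square'
    return 'rectangle'

print(classify_shape([[10, 10], [50, 10], [10, 50], [50, 50]]))  # 'square'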
I'm running a kinetic Monte Carlo simulation in which I take a large sparse array, compute its cumsum(), and then use find() to locate the first element greater than or equal to a given value.
vecIndex = find(cumsum(R) >= threshold, 1);
Since I'm calling the function a large number of times, I'd like to speed up my code. Is there a faster way to carry out this operation?
The complete function:
function Tr = select_transition(Fr, Rt, R)
    N_dep = (1/(Rt+1))*Fr;        % N flux-rate
    Ga_dep = (1-(1/(Rt+1)))*Fr;   % Ga flux-rate
    Tr = zeros(4,1);
    RVec = R(:, :, :, 3);
    RVec = RVec(:);
    sumR = Fr + sum(RVec);        % sum of the rates of all possible transitions
    format long
    sumRx = rand * sumR;          % for randomly selecting one of the transitions
    %disp(sumRx);
    if sumRx <= Fr                % adatom addition
        Tr(1) = 0;
        if sumRx <= Ga_dep
            Tr(2) = 10;           % Ga deposition
        elseif sumRx > Ga_dep
            Tr(2) = -10;          % N deposition
        end
    else
        Tr(1) = 1;                % adatom hopping
        vecIndex = find(cumsum(RVec) >= sumRx - Fr, 1);
        [Tr(2), Tr(3), Tr(4)] = ind2sub(size(R(:, :, :, 3)), vecIndex); % determines specific hopping transition
    end
end
If RVec is sparse (mostly zeros), it is more efficient to extract its nonzero values and their indices and apply cumsum to those values only; the zeros contribute nothing to the cumulative sum, so the first crossing of the threshold is unchanged. Note this assumes RVec is kept in its 3-D shape, RVec = R(:, :, :, 3), rather than flattened with RVec(:), so that find returns a row index and a linear column index over the remaining dimensions.
Tr(1) = 1;
[r, c, v] = find(RVec);                   % nonzero rows, (linear) columns, values
cum = cumsum(v);                          % cumulative sum over nonzeros only
f = find(cum >= sumRx - Fr, 1);           % first transition past the threshold
Tr(2) = r(f);                             % index along dimension 1
sz = size(R);
[Tr(3), Tr(4)] = ind2sub(sz(2:3), c(f));  % recover indices along dimensions 2 and 3
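As a quick sanity check with made-up values, the cumulative sum over the nonzeros crosses the threshold at the same element as the full cumsum, since zeros add nothing:
R = [0 0 3 0 2 0 5];
th = 4;
full_idx = find(cumsum(R) >= th, 1);      % index computed over all elements
[~, c, v] = find(R);                      % nonzero values and their positions
nz_idx = c(find(cumsum(v) >= th, 1));     % same index, computed over nonzeros only
assert(full_idx == nz_idx)                % both give 5 here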
I need to initialize a 3D tensor with an index-dependent function in torch7, i.e.
func = function(i, j, k) -- i, j, k are the indices of an element in the tensor
    return i*j*k         -- do operations within func which depend on i, j, k
end
then I initialize a 3D tensor A like this:
for i=1,A:size(1) do
    for j=1,A:size(2) do
        for k=1,A:size(3) do
            A[{i,j,k}] = func(i,j,k)
        end
    end
end
But this code runs very slowly, and I found it takes up 92% of the total running time. Are there any more efficient ways to initialize a 3D tensor in torch7?
See the documentation for Tensor:apply:
These functions apply a function to each element of the tensor on
which the method is called (self). These methods are much faster than
using a for loop in Lua.
The example in the docs initializes a 2D array based on its linear index i (its position in memory). Below is an extended example for three dimensions, and below that, one for N-D tensors. Using the apply method is much, much faster on my machine:
require 'torch'

A = torch.Tensor(100, 100, 1000)
B = torch.Tensor(100, 100, 1000)

function func(i, j, k)
    return i*j*k
end

t = os.clock()
for i=1,A:size(1) do
    for j=1,A:size(2) do
        for k=1,A:size(3) do
            A[{i, j, k}] = i * j * k
        end
    end
end
print("Original time:", os.difftime(os.clock(), t))

t = os.clock()
function forindices(A, func)
    local i = 1
    local j = 1
    local k = 0
    local d3 = A:size(3)
    local d2 = A:size(2)
    return function()
        k = k + 1
        if k > d3 then
            k = 1
            j = j + 1
            if j > d2 then
                j = 1
                i = i + 1
            end
        end
        return func(i, j, k)
    end
end
B:apply(forindices(A, func))
print("Apply method:", os.difftime(os.clock(), t))
EDIT
This will work for any Tensor object:
function tabulate(A, f)
    local idx = {}
    local ndims = A:dim()
    local dim = A:size()
    idx[ndims] = 0
    for i=1, (ndims - 1) do
        idx[i] = 1
    end
    return A:apply(function()
        -- increment the index counter, rolling over like an odometer;
        -- the loop must stop at 1, since idx[0] does not exist in Lua
        for i=ndims, 1, -1 do
            idx[i] = idx[i] + 1
            if idx[i] <= dim[i] then
                break
            end
            idx[i] = 1
        end
        return f(unpack(idx))
    end)
end
-- usage for 3D case.
tabulate(A, function(i, j, k) return i * j * k end)
Here is a MATLAB program for the backpropagation algorithm:
% XOR input for x1 and x2
input = [0 0; 0 1; 1 0; 1 1];
% Desired output of XOR
output = [0;1;1;0];
% Initialize the bias
bias = [-1 -1 -1];
% Learning coefficient
coeff = 0.7;
% Number of learning iterations
iterations = 10000;
% Calculate weights randomly using seed.
rand('state',sum(100.*clock));
weights = -1 +2.*rand(3,3);

for i = 1:iterations
    out = zeros(4,1);
    numIn = length(input(:,1));
    for j = 1:numIn
        % Hidden layer
        H1 = bias(1,1).*weights(1,1) + input(j,1).*weights(1,2) + input(j,2).*weights(1,3);
        % Send data through sigmoid function 1/(1+e^-x)
        % Note that sigma is a different m-file
        % that I created to run this operation
        x2(1) = sigma(H1);
        H2 = bias(1,2).*weights(2,1) + input(j,1).*weights(2,2) + input(j,2).*weights(2,3);
        x2(2) = sigma(H2);
        % Output layer
        x3_1 = bias(1,3).*weights(3,1) + x2(1).*weights(3,2) + x2(2).*weights(3,3);
        out(j) = sigma(x3_1);
        % Adjust delta values of weights
        % For output layer:
        % delta(wi) = xi*delta,
        % delta = (1-actual output)*(desired output - actual output)
        delta3_1 = out(j).*(1-out(j)).*(output(j)-out(j));
        % Propagate the delta backwards into hidden layers
        delta2_1 = x2(1).*(1-x2(1)).*weights(3,2).*delta3_1;
        delta2_2 = x2(2).*(1-x2(2)).*weights(3,3).*delta3_1;
        % Add weight changes to original weights
        % and use the new weights to repeat the process.
        % delta weight = coeff*x*delta
        for k = 1:3
            if k == 1 % Bias cases
                weights(1,k) = weights(1,k) + coeff.*bias(1,1).*delta2_1;
                weights(2,k) = weights(2,k) + coeff.*bias(1,2).*delta2_2;
                weights(3,k) = weights(3,k) + coeff.*bias(1,3).*delta3_1;
            else % When k=2 or 3, input cases to neurons
                weights(1,k) = weights(1,k) + coeff.*input(j,1).*delta2_1;
                weights(2,k) = weights(2,k) + coeff.*input(j,2).*delta2_2;
                weights(3,k) = weights(3,k) + coeff.*x2(k-1).*delta3_1;
            end
        end
    end
end
But it shows an error like this:
??? Index exceeds matrix dimensions.
Error in ==> sigma at 95
a=varargin{1}; b=varargin{2}; c=varargin{3}; d=varargin{4};
Error in ==> back at 25
x2(1) = sigma(H1);
Please help me out; I am not able to understand the problem. Why is there an error saying the index exceeds the matrix dimensions?
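For what it's worth, the error message shows that sigma reads varargin{2} through varargin{4}, i.e. it expects at least four inputs, while the code above calls it with a single value. For the plain logistic function 1/(1+e^-x) that the comments describe, a one-argument sigma.m would look like the sketch below (the actual file isn't shown, so this is an assumption):
function y = sigma(x)
% Logistic sigmoid 1/(1+e^-x), applied element-wise; takes a single input
y = 1 ./ (1 + exp(-x));
end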