Regarding this post on StackOverflow (and this one too), I'm taking one normal image of a flower, then a white image, and then I apply the soft-light blend.
These are the images (flower and white image):
The result should be something similar to what I get in GIMP:
but what I actually get is a white image.
I modified the code in order to put it inside a function, and this is my code:
// function
uint convSoftLight(int A, int B) {
    return ((uint)((B < 128)
        ? (2 * ((A >> 1) + 64)) * ((float)B / 255)
        : (255 - (2 * (255 - ((A >> 1) + 64)) * (float)(255 - B) / 255))));
}
void function() {
    Mat flower = imread("/Users/rafaelruizmunoz/Desktop/flower.jpg");
    Mat white_flower = Mat::zeros(Size(flower.cols, flower.rows), flower.type());
    Mat mix = Mat::zeros(Size(flower.cols, flower.rows), flower.type());

    for (int i = 0; i < white_flower.rows; i++) {
        for (int j = 0; j < white_flower.cols; j++) {
            white_flower.at<Vec3b>(i,j) = Vec3b(255,255,255);
        }
    }

    imshow("flower", flower);
    imshow("mask_white", white_flower);

    for (int i = 0; i < mix.rows; i++) {
        for (int j = 0; j < mix.cols; j++) {
            Vec3b vec = flower.at<Vec3b>(i,j);
            vec[0] = convSoftLight(vec[0], 255); // 255 or just the white_flower pixel at (i,j)
            vec[1] = convSoftLight(vec[1], 255); // 255 or just the white_flower pixel at (i,j)
            vec[2] = convSoftLight(vec[2], 255); // 255 or just the white_flower pixel at (i,j)
            mix.at<Vec3b>(i,j) = vec;
        }
    }

    imshow("mix", mix);
}
What am I doing wrong?
Thank you.
EDIT: I've tried to flip the order (convSoftLight(B,A) instead of convSoftLight(A,B)), but nothing changed (black image).
Based on the Blender definitions, I rewrote my function:
uint convSoftLight(int A, int B) {
    float a = (float)A / 255;
    float b = (float)B / 255;
    float result = 0;

    if (b < 0.5)
        result = 2 * a * b + pow(a, 2) * (1 - 2 * b);
    else
        result = 2 * a * (1 - b) + sqrt(a) * (2 * b - 1);

    return (uint)(255 * result);
}
Here's how soft light might be implemented in Python (with OpenCV and NumPy):
import numpy as np

def applySoftLight(bottom, top, mask):
    """Apply soft light blending."""
    assert all(image.dtype == np.float32 for image in [bottom, top, mask])

    blend = np.zeros(bottom.shape, dtype=np.float32)

    low = np.where((top < 0.5) & (mask > 0))
    blend[low] = 2 * bottom[low] * top[low] + bottom[low] * bottom[low] * (1 - 2 * top[low])

    high = np.where((top >= 0.5) & (mask > 0))
    blend[high] = 2 * bottom[high] * (1 - top[high]) + np.sqrt(bottom[high]) * (2 * top[high] - 1)

    # alpha blending according to mask
    result = bottom * (1 - mask) + blend * mask
    return result
All matrices must be single channel 2D matrices converted into type np.float32. Mask is a "layer mask" in terms of GIMP/Photoshop.
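For example, it might be called like this (a quick sketch; the file name and the 50% opacity mask are just assumptions for illustration):

import cv2
import numpy as np

bottom = cv2.imread("flower.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
top = np.ones_like(bottom)        # the all-white top layer
mask = np.full_like(bottom, 0.5)  # 50% layer opacity

result = applySoftLight(bottom, top, mask)
cv2.imshow("soft light", result)
cv2.waitKey(0)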
I'm trying to implement the method of improving fingerprint images by Anil Jain. As a starter, I encountered some difficulties while extracting the orientation image, even though I am strictly following the steps described in Section 2.4 of that paper.
So, this is the input image:
And this is after normalization using exactly the same method as in that paper:
I'm expecting to see something like this (an example from the internet):
However, this is what I got for displaying obtained orientation matrix:
Obviously this is wrong, and it also gives non-zero values for those zero points in the original input image.
This is the code I wrote:
cv::Mat orientation(cv::Mat inputImage)
{
    cv::Mat orientationMat = cv::Mat::zeros(inputImage.size(), CV_8UC1);

    // compute gradients at each pixel
    cv::Mat grad_x, grad_y;
    cv::Sobel(inputImage, grad_x, CV_16SC1, 1, 0, 3, 1, 0, cv::BORDER_DEFAULT);
    cv::Sobel(inputImage, grad_y, CV_16SC1, 0, 1, 3, 1, 0, cv::BORDER_DEFAULT);

    cv::Mat Vx, Vy, theta, lowPassX, lowPassY;
    cv::Mat lowPassX2, lowPassY2;

    Vx = cv::Mat::zeros(inputImage.size(), inputImage.type());
    Vx.copyTo(Vy);
    Vx.copyTo(theta);
    Vx.copyTo(lowPassX);
    Vx.copyTo(lowPassY);
    Vx.copyTo(lowPassX2);
    Vx.copyTo(lowPassY2);

    // estimate the local orientation of each block
    int blockSize = 16;
    for (int i = blockSize/2; i < inputImage.rows - blockSize/2; i += blockSize)
    {
        for (int j = blockSize/2; j < inputImage.cols - blockSize/2; j += blockSize)
        {
            float sum1 = 0.0;
            float sum2 = 0.0;
            for (int u = i - blockSize/2; u < i + blockSize/2; u++)
            {
                for (int v = j - blockSize/2; v < j + blockSize/2; v++)
                {
                    sum1 += grad_x.at<float>(u,v) * grad_y.at<float>(u,v);
                    sum2 += (grad_x.at<float>(u,v)*grad_x.at<float>(u,v)) * (grad_y.at<float>(u,v)*grad_y.at<float>(u,v));
                }
            }
            Vx.at<float>(i,j) = sum1;
            Vy.at<float>(i,j) = sum2;

            double calc = 0.0;
            if (sum1 != 0 && sum2 != 0)
            {
                calc = 0.5 * atan(Vy.at<float>(i,j) / Vx.at<float>(i,j));
            }
            theta.at<float>(i,j) = calc;

            // Perform low-pass filtering
            float angle = 2 * calc;
            lowPassX.at<float>(i,j) = cos(angle * pi / 180);
            lowPassY.at<float>(i,j) = sin(angle * pi / 180);

            float sum3 = 0.0;
            float sum4 = 0.0;
            for (int u = -lowPassSize/2; u < lowPassSize/2; u++)
            {
                for (int v = -lowPassSize/2; v < lowPassSize/2; v++)
                {
                    sum3 += inputImage.at<float>(u,v) * lowPassX.at<float>(i - u*lowPassSize, j - v*lowPassSize);
                    sum4 += inputImage.at<float>(u,v) * lowPassY.at<float>(i - u*lowPassSize, j - v*lowPassSize);
                }
            }
            lowPassX2.at<float>(i,j) = sum3;
            lowPassY2.at<float>(i,j) = sum4;

            float calc2 = 0.0;
            if (sum3 != 0 && sum4 != 0)
            {
                calc2 = 0.5 * atan(lowPassY2.at<float>(i,j) / lowPassX2.at<float>(i,j)) * 180 / pi;
            }
            orientationMat.at<float>(i,j) = calc2;
        }
    }
    return orientationMat;
}
I've already searched a lot on the web, but almost all of the implementations are in Matlab. There exist very few using OpenCV, and they didn't help me either. I sincerely hope someone can go through my code and point out any errors. Thank you in advance.
Update
Here are the steps that I followed according to the paper:
Obtain normalized image G.
Divide G into blocks of size wxw (16x16).
Compute the x and y gradients at each pixel (i,j).
Estimate the local orientation of each block centered at pixel (i,j) using equations:
Perform low-pass filtering to remove noise. For that, convert the orientation image into a continuous vector field defined as:
where W is a two-dimensional low-pass filter and w(phi) x w(phi) is its size, which equals 5.
Finally, compute the local ridge orientation at (i,j) using:
Update2
This is the output of orientationMat after changing the Mat type to CV_16SC1 in the Sobel operation, as Micka suggested:
Maybe it's too late for me to answer, but somebody might read this later and be able to solve the same problem.
I've been working for a while on the same algorithm, the same method you posted... But there are some writing errors from when the paper was redacted (I guess). After fighting a lot with the equations, I found these errors by looking at other similar works.
Here is what worked for me...
Vy(i,j) = 2*dx(u,v)*dy(u,v)
Vx(i,j) = dx(u,v)^2 - dy(u,v)^2
O(i,j) = 0.5*arctan(Vy(i,j)/Vx(i,j))
(Excuse me, I wasn't able to post images, so I wrote the modified equations. Remember that "u" and "v" are the positions of the summation across the blockSize by blockSize window.)
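As a quick illustration, here is a sketch of those corrected summations in Python/NumPy (grad_x and grad_y are assumed to be float gradient images, e.g. from cv2.Sobel):

import numpy as np

def block_orientation(grad_x, grad_y, i, j, blockSize=16):
    # sum the corrected terms over one blockSize x blockSize window
    dx = grad_x[i:i + blockSize, j:j + blockSize]
    dy = grad_y[i:i + blockSize, j:j + blockSize]
    Vy = np.sum(2.0 * dx * dy)
    Vx = np.sum(dx * dx - dy * dy)
    return 0.5 * np.arctan2(Vy, Vx)  # theta, in radians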
The first and most important thing (obviously) is the equations. I saw that in different works these expressions were really different, yet every one of them talked about the same algorithm of Hong et al.
The key is finding the least mean square (first 3 equations) of the gradients (Vx and Vy); I provided the corrected formulas above for this estimation. Then you can compute the angle theta for the non-overlapping window (16x16 size recommended in the paper). After that, the algorithm says you must calculate the magnitude of the doubled angle in the "x" and "y" directions (Phi_x and Phi_y).
Phi_x(i,j) = V(i,j) * cos(2*O(i,j))
Phi_y(i,j) = V(i,j) * sin(2*O(i,j))
The magnitude is just:
V = sqrt(Vx(i,j)^2 + Vy(i,j)^2)
Note that the related work doesn't mention that you have to use the gradient magnitude, but it makes sense (to me) to do it. After all these corrections you can apply the low-pass filter to Phi_x and Phi_y; I used a simple 5x5 mask to average these magnitudes (something like OpenCV's medianBlur()).
The last thing is to calculate the new angle, that is, the average of the 25 neighbors in the O(i,j) image. For this you just have to:
O'(i,j) = 0.5*arctan(Phi_y/Phi_x)
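Continuing the NumPy sketch from above, the whole smoothing step might look like this (theta and V being per-block 2D float arrays, with a 5x5 averaging filter standing in for the low-pass filter):

import cv2
import numpy as np

def smooth_orientation(theta, V):
    # doubled-angle vector field, weighted by the gradient magnitude V
    phi_x = V * np.cos(2.0 * theta)
    phi_y = V * np.sin(2.0 * theta)
    # simple 5x5 averaging as the low-pass filter
    phi_x = cv2.blur(phi_x, (5, 5))
    phi_y = cv2.blur(phi_y, (5, 5))
    return 0.5 * np.arctan2(phi_y, phi_x)  # O'(i,j)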
We're nearly there... All this just to calculate the angle of the NORMAL VECTOR TO THE RIDGE DIRECTIONS (O'(i,j)) in the blockSize by blockSize non-overlapping window. What does that mean? It means that the angle we just calculated is perpendicular to the ridges; in simple words, we just calculated the angle of the ridges plus 90 degrees. To get the angle we need, we just have to subtract 90° from the obtained angle.
To draw the lines we need an initial point (X0, Y0) and a final point (X1, Y1). For that, imagine a circle centered on (X0, Y0) with a radius of "r":
x0 = i + blocksize/2
y0 = j + blocksize/2
r = blocksize/2
Note we add i and j to the first coordinates because the window is moving: we are going to draw the line starting from the center of each non-overlapping window, so we can't just use the center of a single window.
Then, to calculate the end coordinates for drawing a line, we can just use a right triangle, so...
X1 = r*cos(O'(i,j)-90°)+X0
Y1 = r*sin(O'(i,j)-90°)+Y0
X2 = X0-r*cos(O'(i,j)-90°)
Y2 = Y0-r*sin(O'(i,j)-90°)
Then just use the OpenCV line function, where the initial point is (X0, Y0) and the final point is (X1, Y1). In addition, I drew the 16x16 windows and computed the opposite points of X1 and Y1 (X2 and Y2) to draw a line across the entire window.
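A sketch of that drawing step, in the same Python/NumPy terms as above (angles holds O'(i,j) per block, in radians; img is the fingerprint image):

import cv2
import numpy as np

def draw_orientation(img, angles, blockSize=16):
    for bi in range(angles.shape[0]):
        for bj in range(angles.shape[1]):
            x0 = bj * blockSize + blockSize // 2
            y0 = bi * blockSize + blockSize // 2
            r = blockSize // 2
            a = angles[bi, bj] - np.pi / 2  # subtract the 90 degrees
            x1 = int(x0 + r * np.cos(a)); y1 = int(y0 + r * np.sin(a))
            x2 = int(x0 - r * np.cos(a)); y2 = int(y0 - r * np.sin(a))
            cv2.line(img, (x1, y1), (x2, y2), 255, 1)
    return img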
Hope this helps somebody.
My results...
Main function:
Mat mat = imread("nwmPa.png",0);
mat.convertTo(mat, CV_32F, 1.0/255, 0);
Normalize(mat);
int blockSize = 6;
int height = mat.rows;
int width = mat.cols;
Mat orientationMap;
orientation(mat, orientationMap, blockSize);
Normalize:
void Normalize(Mat & image)
{
    Scalar mean, dev;
    meanStdDev(image, mean, dev);
    double M = mean.val[0];
    double D = dev.val[0];

    for (int i(0); i < image.rows; i++)
    {
        for (int j(0); j < image.cols; j++)
        {
            if (image.at<float>(i,j) > M)
                image.at<float>(i,j) = 100.0/255 + sqrt( 100.0/255*pow(image.at<float>(i,j)-M,2)/D );
            else
                image.at<float>(i,j) = 100.0/255 - sqrt( 100.0/255*pow(image.at<float>(i,j)-M,2)/D );
        }
    }
}
Orientation map:
void orientation(const Mat &inputImage, Mat &orientationMap, int blockSize)
{
    Mat fprintWithDirectionsSmoo = inputImage.clone();
    Mat tmp(inputImage.size(), inputImage.type());
    Mat coherence(inputImage.size(), inputImage.type());
    orientationMap = tmp.clone();

    //Gradients x and y
    Mat grad_x, grad_y;
    // Sobel(inputImage, grad_x, CV_32F, 1, 0, 3, 1, 0, BORDER_DEFAULT);
    // Sobel(inputImage, grad_y, CV_32F, 0, 1, 3, 1, 0, BORDER_DEFAULT);
    Scharr(inputImage, grad_x, CV_32F, 1, 0, 1, 0);
    Scharr(inputImage, grad_y, CV_32F, 0, 1, 1, 0);

    //Vector field
    Mat Fx(inputImage.size(), inputImage.type()),
        Fy(inputImage.size(), inputImage.type()),
        Fx_gauss,
        Fy_gauss;
    Mat smoothed(inputImage.size(), inputImage.type());

    // Local orientation for each block
    int width = inputImage.cols;
    int height = inputImage.rows;
    int blockH;
    int blockW;

    //select block
    for (int i = 0; i < height; i += blockSize)
    {
        for (int j = 0; j < width; j += blockSize)
        {
            float Gsx = 0.0;
            float Gsy = 0.0;
            float Gxx = 0.0;
            float Gyy = 0.0;

            //check bounds of the image
            blockH = ((height-i) < blockSize) ? (height-i) : blockSize;
            blockW = ((width-j) < blockSize) ? (width-j) : blockSize;

            //average over the W x W block
            for (int u = i; u < i + blockH; u++)
            {
                for (int v = j; v < j + blockW; v++)
                {
                    Gsx += (grad_x.at<float>(u,v)*grad_x.at<float>(u,v)) - (grad_y.at<float>(u,v)*grad_y.at<float>(u,v));
                    Gsy += 2*grad_x.at<float>(u,v) * grad_y.at<float>(u,v);
                    Gxx += grad_x.at<float>(u,v)*grad_x.at<float>(u,v);
                    Gyy += grad_y.at<float>(u,v)*grad_y.at<float>(u,v);
                }
            }

            float coh = sqrt(pow(Gsx,2) + pow(Gsy,2)) / (Gxx + Gyy);
            //smoothed
            float fi = 0.5*fastAtan2(Gsy, Gsx)*CV_PI/180;
            Fx.at<float>(i,j) = cos(2*fi);
            Fy.at<float>(i,j) = sin(2*fi);

            //fill blocks
            for (int u = i; u < i + blockH; u++)
            {
                for (int v = j; v < j + blockW; v++)
                {
                    orientationMap.at<float>(u,v) = fi;
                    Fx.at<float>(u,v) = Fx.at<float>(i,j);
                    Fy.at<float>(u,v) = Fy.at<float>(i,j);
                    coherence.at<float>(u,v) = (coh < 0.85) ? 1 : 0;
                }
            }
        }
    } ///for

    GaussConvolveWithStep(Fx, Fx_gauss, 5, blockSize);
    GaussConvolveWithStep(Fy, Fy_gauss, 5, blockSize);

    for (int m = 0; m < height; m++)
    {
        for (int n = 0; n < width; n++)
        {
            smoothed.at<float>(m,n) = 0.5*fastAtan2(Fy_gauss.at<float>(m,n), Fx_gauss.at<float>(m,n))*CV_PI/180;
            if ((m % blockSize) == 0 && (n % blockSize) == 0) {
                int x = n;
                int y = m;
                int ln = sqrt(2*pow(blockSize,2))/2;
                float dx = ln*cos( smoothed.at<float>(m,n) - CV_PI/2);
                float dy = ln*sin( smoothed.at<float>(m,n) - CV_PI/2);
                arrowedLine(fprintWithDirectionsSmoo, Point(x, y+blockH), Point(x + dx, y + blockW + dy), Scalar::all(255), 1, CV_AA, 0, 0.06*blockSize);
                // qDebug() << Fx_gauss.at<float>(m,n) << Fy_gauss.at<float>(m,n) << smoothed.at<float>(m,n);
                // imshow("Orientation", fprintWithDirectionsSmoo);
                // waitKey(0);
            }
        }
    } ///for2

    normalize(orientationMap, orientationMap, 0, 1, NORM_MINMAX);
    imshow("Orientation field", orientationMap);
    orientationMap = smoothed.clone();
    normalize(smoothed, smoothed, 0, 1, NORM_MINMAX);
    imshow("Smoothed orientation field", smoothed);
    imshow("Coherence", coherence);
    imshow("Orientation", fprintWithDirectionsSmoo);
}
It seems nothing was forgotten :)
I have read your code thoroughly and found that you have made a mistake while calculating sum3 and sum4:
sum3 += inputImage.at<float>(u,v) * lowPassX.at<float>(i - u*lowPassSize, j - v * lowPassSize);
sum4 += inputImage.at<float>(u, v) * lowPassY.at<float>(i - u*lowPassSize, j - v * lowPassSize);
Instead of inputImage, you should use a low-pass filter there.
I would like to use a Butterworth filter on a 1D signal. In Matlab the script would look like this:
f=100;
f_cutoff = 20;
fnorm =f_cutoff/(f/2);
[b,a] = butter(8,fnorm,'low');
filteredData = filter(b,a,rawData); % I want to write this myself
Now I don't want to directly use the filter-function given in Matlab but write it myself.
In the Matlab documentation it's described as follows:
The filter function is implemented as a direct form II transposed structure,
y(n) = b(1)*x(n) + b(2)*x(n-1) + ... + b(nb+1)*x(n-nb)
- a(2)*y(n-1) - ... - a(na+1)*y(n-na)
where n-1 is the filter order, which handles both FIR and IIR filters [1], na is the feedback filter order, and nb is the feedforward filter order.
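Spelled out, that recurrence is just the following (a Python sketch of the same difference equation; MATLAB normalizes the coefficients by a(1) internally, so the sketch does too):

import numpy as np

def my_filter(b, a, x):
    # normalize so that a[0] == 1, as MATLAB's filter does internally
    b = np.asarray(b, dtype=float) / a[0]
    a = np.asarray(a, dtype=float) / a[0]
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(len(b)):           # feedforward terms b(k+1)*x(n-k)
            if n - k >= 0:
                y[n] += b[k] * x[n - k]
        for k in range(1, len(a)):        # feedback terms a(k+1)*y(n-k)
            if n - k >= 0:
                y[n] -= a[k] * y[n - k]
    return y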
So I've already tried to write the function like that:
f=100;
f_cutoff = 20;
fnorm =f_cutoff/(f/2);
[b,a] = butter(8,fnorm,'low');
for n = 9:size(rawData,1)
    filteredData(n,1) = b(1)*n + b(2)*(n-1) + b(3)*(n-2) + b(4)*(n-3) + b(5)*(n-4) ...
        - a(2)*rawData(n-1,1) - a(3)*rawData(n-2,1) - a(4)*rawData(n-3,1) - a(5)*accel(n-4,1);
end
But that's not working. Can you please help me? What am I doing wrong?
Sincerely,
Cerdo
PS: the filter documentation can be found here: http://www.mathworks.de/de/help/matlab/ref/filter.html#f83-1015962 when expanding More About -> Algorithms
Check my Answer
filter
public static double[] filter(double[] b, double[] a, double[] x) {
    double[] filter = null;
    double[] a1 = getRealArrayScalarDiv(a, a[0]);
    double[] b1 = getRealArrayScalarDiv(b, a[0]);
    int sx = x.length;
    filter = new double[sx];
    filter[0] = b1[0] * x[0];
    for (int i = 1; i < sx; i++) {
        filter[i] = 0.0;
        for (int j = 0; j <= i; j++) {
            int k = i - j;
            if (j > 0) {
                if ((k < b1.length) && (j < x.length)) {
                    filter[i] += b1[k] * x[j];
                }
                if ((k < filter.length) && (j < a1.length)) {
                    filter[i] -= a1[j] * filter[k];
                }
            } else {
                if ((k < b1.length) && (j < x.length)) {
                    filter[i] += (b1[k] * x[j]);
                }
            }
        }
    }
    return filter;
}
conv
public static double[] conv(double[] a, double[] b) {
    double[] c = null;
    int na = a.length;
    int nb = b.length;
    if (na > nb) {
        if (nb > 1) {
            c = new double[na + nb - 1];
            for (int i = 0; i < c.length; i++) {
                if (i < a.length) {
                    c[i] = a[i];
                } else {
                    c[i] = 0.0;
                }
            }
            a = c;
        }
        c = filter(b, new double[] {1.0}, a);
    } else {
        if (na > 1) {
            c = new double[na + nb - 1];
            for (int i = 0; i < c.length; i++) {
                if (i < b.length) {
                    c[i] = b[i];
                } else {
                    c[i] = 0.0;
                }
            }
            b = c;
        }
        c = filter(a, new double[] {1.0}, b);
    }
    return c;
}
deconv
public static double[] deconv(double[] b, double[] a) {
    double[] q = null;
    int sb = b.length;
    int sa = a.length;
    if (sa > sb) {
        return q;
    }
    double[] zeros = new double[sb - sa + 1];
    for (int i = 1; i < zeros.length; i++) {
        zeros[i] = 0.0;
    }
    zeros[0] = 1.0;
    q = filter(b, a, zeros);
    return q;
}
deconvRes
public static double[] deconvRes(double[] b, double[] a) {
    double[] r = null;
    r = getRealArraySub(b, conv(a, deconv(b, a)));
    return r;
}
getRealArraySub
public static double[] getRealArraySub(double[] dSub0, double[] dSub1) {
    double[] dSub = null;
    if ((dSub0 == null) || (dSub1 == null)) {
        throw new IllegalArgumentException("The array must be defined or different to null");
    }
    if (dSub0.length != dSub1.length) {
        throw new IllegalArgumentException("Arrays must be the same size");
    }
    dSub = new double[dSub1.length];
    for (int i = 0; i < dSub.length; i++) {
        dSub[i] = dSub0[i] - dSub1[i];
    }
    return dSub;
}
getRealArrayScalarDiv
public static double[] getRealArrayScalarDiv(double[] dDividend, double dDivisor) {
    if (dDividend == null) {
        throw new IllegalArgumentException("The array must be defined or different to null");
    }
    if (dDividend.length == 0) {
        throw new IllegalArgumentException("The size array must be greater than Zero");
    }
    double[] dQuotient = new double[dDividend.length];
    for (int i = 0; i < dDividend.length; i++) {
        if (!(dDivisor == 0.0)) {
            dQuotient[i] = dDividend[i] / dDivisor;
        } else {
            if (dDividend[i] > 0.0) {
                dQuotient[i] = Double.POSITIVE_INFINITY;
            }
            if (dDividend[i] == 0.0) {
                dQuotient[i] = Double.NaN;
            }
            if (dDividend[i] < 0.0) {
                dQuotient[i] = Double.NEGATIVE_INFINITY;
            }
        }
    }
    return dQuotient;
}
Example Using
double[] a, b, q, u, v, w, r, z, input, outputVector;

u = new double[] {1, 1, 1};
v = new double[] {1, 1, 0, 0, 0, 1, 1};
w = conv(u, v);
System.out.println("w=\n" + Arrays.toString(w));

a = new double[] {1, 2, 3, 4};
b = new double[] {10, 40, 100, 160, 170, 120};
q = deconv(b, a);
System.out.println("q=\n" + Arrays.toString(q));

r = deconvRes(b, a);
System.out.println("r=\n" + Arrays.toString(r));

a = new double[] {2, -2.5, 1};
b = new double[] {0.1, 0.1};
u = new double[31];
for (int i = 1; i < u.length; i++) {
    u[i] = 0.0;
}
u[0] = 1.0;
z = filter(b, a, u);
System.out.println("z=\n" + Arrays.toString(z));

a = new double[] {1.0000, -3.518576748255174, 4.687508888099475, -2.809828793526308, 0.641351538057564};
b = new double[] {0.020083365564211, 0, -0.040166731128422, 0, 0.020083365564211};
input = new double[] {1, 2, 3, 4, 5, 6, 7, 8, 9};
outputVector = filter(b, a, input);
System.out.println("outputVector=\n" + Arrays.toString(outputVector));
OUTPUT
w=
[1.0, 2.0, 2.0, 1.0, 0.0, 1.0, 2.0, 2.0, 1.0]
q=
[10.0, 20.0, 30.0]
r=
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
z=
[0.05, 0.1125, 0.115625, 0.08828125, 0.0525390625, 0.021533203124999997, 6.469726562499979E-4, -0.009957885742187502, -0.012770843505859377, -0.010984611511230471, -0.007345342636108401, -0.003689372539520266, -9.390443563461318E-4, 6.708808243274683E-4, 0.0013081232085824014, 0.0012997135985642675, 9.705803939141337E-4, 5.633686931105333E-4, 2.189206694310998E-4, -8.033509766391922E-6, -1.195022219235398E-4, -1.453610225212288E-4, -1.219501671897661E-4, -7.975719772659323E-5, -3.8721413563358476E-5, -8.523168090901481E-6, 8.706746668052387E-6, 1.5145017380516224E-5, 1.4577898391619086E-5, 1.0649864299265747E-5, 6.023381178272641E-6]
outputVector=
[0.020083365564211, 0.11083159422936348, 0.31591188140651166, 0.648466936215357, 1.0993782391344866, 1.6451284697769106, 2.25463601232057, 2.8947248889603028, 3.534126758562552]
Please give me your feedback!
I have found a text describing the Direct Form II Transposed structure used in the Matlab filter function, and it works perfectly. See the script below. Other implementations are also available, but with errors of around 1e-15; you'll see this by running the script yourself.
%% Specification of the linear Chebyshev filters
clc;clear all;close all
ord = 5; %System order (from 1 to 5)
[bq,aq] = cheby1(ord,2,0.2);theta = [bq aq(2:end)]';
figure;zplane(bq,aq); % Z-Pole/Zeros
u = [ones(40,1); zeros(40,1)];
%% Naive implementation of the basic algorithm
y0 = filter(bq,aq,u); % Built-in filter
b = fliplr(bq);a = fliplr(aq);a(end) = [];
y1 = zeros(40,1);pad = zeros (ord,1);
yp = [pad; y1(:)];up = [pad; u(:)];
for i = 1:length(u)
    yp(i+ord) = sum(b(:).*up(i:i+ord))-sum(a(:).*yp(i:i+ord-1));
end
y1 = yp(ord+1:end); % Naive implementation
err = y0(:)-y1(:);
figure
plot(y0,'r')
hold on
plot(y1,'*g')
xlabel('Time')
ylabel('Response')
legend('My code','Built-in filter')
figure
plot(err)
xlabel('Time')
ylabel('Error')
%% Direct Form II Transposed
% Direct realization of rational transfer functions
% trps: 0 for direct realization, 1 for transposed realisation
% b,a: Numerator and denominator
% x: Input sequence
% y: Output sequence
% u: Internal states buffer
trps = 1;
b=theta(1:ord+1);
a=theta(ord+2:end);
y2=zeros(size(u));
x=zeros(ord,1);
%%
if trps==1
    for i=1:length(u)
        y2(i)=b(1)*u(i)+x(1);
        x=[x(2:ord);0];
        x=x+b(2:end)*u(i)-a*y2(i);
    end
else
    for i=1:length(u)
        xnew=u(i)-sum(x(1:ord).*a);
        x=[xnew,x];
        y2(i)=sum(x(1:ord+1).*b);
        x=x(1:ord);
    end
end
%%
err = y2 - filter(bq,aq,u);
figure
plot(y0,'r')
hold on
plot(y2,'*g')
xlabel('Time')
ylabel('Response')
legend('Form II Transposed','Built-in filter')
figure
plot(err)
xlabel('Time')
ylabel('Error')
% end
I implemented the filter function used by Matlab in Java:

The filter function is implemented as a direct form II transposed structure,

y(n) = b(1)*x(n) + b(2)*x(n-1) + ... + b(nb+1)*x(n-nb)
       - a(2)*y(n-1) - ... - a(na+1)*y(n-na)

where n-1 is the filter order, which handles both FIR and IIR filters [1], na is the feedback filter order, and nb is the feedforward filter order.
public void filter(double[] b, double[] a, ArrayList<Double> inputVector, ArrayList<Double> outputVector) {
    double rOutputY = 0.0;
    int j = 0;
    for (int i = 0; i < inputVector.size(); i++) {
        if (j < b.length) {
            rOutputY += b[j] * inputVector.get(inputVector.size() - i - 1);
        }
        j++;
    }
    j = 1;
    for (int i = 0; i < outputVector.size(); i++) {
        if (j < a.length) {
            rOutputY -= a[j] * outputVector.get(outputVector.size() - i - 1);
        }
        j++;
    }
    outputVector.add(rOutputY);
}
and here is an example:
ArrayList<Double> inputVector = new ArrayList<Double>();
ArrayList<Double> outputVector = new ArrayList<Double>();

double[] a = new double[] {1.0000, -3.518576748255174, 4.687508888099475, -2.809828793526308, 0.641351538057564};
double[] b = new double[] {0.020083365564211, 0, -0.040166731128422, 0, 0.020083365564211};
double[] input = new double[] {1, 2, 3, 4, 5, 6, 7, 8, 9};

for (int i = 0; i < input.length; i++) {
    inputVector.add(input[i]);
    filter(b, a, inputVector, outputVector);
}
System.out.println(outputVector);
and the output was:
[0.020083365564211, 0.11083159422936348, 0.31591188140651166, 0.6484669362153569, 1.099378239134486, 1.6451284697769086, 2.254636012320566, 2.894724888960297, 3.534126758562545]
just as in the Matlab output.
That's it.
I found my mistake. Here's the working code (as a function):
function filtered = myFilter(b, a, raw)
    filtered = zeros(size(raw));
    for c = 1:3
        for n = 9:size(raw,1)
            filtered(n,c) = b(1)*raw(n,c) + b(2)*raw(n-1,c) + b(3)*raw(n-2,c) ...
                + b(4)*raw(n-3,c) + b(5)*raw(n-4,c) + b(6)*raw(n-5,c) ...
                + b(7)*raw(n-6,c) + b(8)*raw(n-7,c) + b(9)*raw(n-8,c) ...
                - a(1)*filtered(n,c) - a(2)*filtered(n-1,c) - a(3)*filtered(n-2,c) ...
                - a(4)*filtered(n-3,c) - a(5)*filtered(n-4,c) - a(6)*filtered(n-5,c) ...
                - a(7)*filtered(n-6,c) - a(8)*filtered(n-7,c) - a(9)*filtered(n-8,c);
        end
    end
Now the filter works nearly fine, but for the first 40 values I get divergent results. I'll have to figure that out...
BlackEagle's solution does not reproduce the same results as MATLAB with other arrays. For example:
b = [0.1 0.1]
a = [2 -2.5 1]
u = [1, zeros(1, 30)];
z = filter(b, a, u)
This gives you completely different results. Be careful.
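One way to sanity-check any of these reimplementations is against SciPy, whose lfilter implements the same difference equation as MATLAB's filter (a quick sketch, using the coefficients above):

import numpy as np
from scipy.signal import lfilter

b = [0.1, 0.1]
a = [2.0, -2.5, 1.0]
u = np.r_[1.0, np.zeros(30)]  # unit impulse
print(lfilter(b, a, u))       # reference output to compare a reimplementation against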
For a Windows Phone app, I'm adjusting brightness with a slider. It works fine when I move the slider to the right, but when I go back to the previous position, instead of the image darkening, it gets brighter and brighter. Here is my code, based on pixel manipulation:
private void slider1_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
{
    wrBmp = new WriteableBitmap(Image1, null);
    for (int i = 0; i < wrBmp.Pixels.Count(); i++)
    {
        int pixel = wrBmp.Pixels[i];
        int B = (int)(pixel & 0xFF); pixel >>= 8;
        int G = (int)(pixel & 0xFF); pixel >>= 8;
        int R = (int)(pixel & 0xFF); pixel >>= 8;
        int A = (int)(pixel);

        B += (int)slider1.Value; R += (int)slider1.Value; G += (int)slider1.Value;

        if (R > 255) R = 255; if (G > 255) G = 255; if (B > 255) B = 255;
        if (R < 0) R = 0; if (G < 0) G = 0; if (B < 0) B = 0;

        wrBmp.Pixels[i] = B | (G << 8) | (R << 16) | (A << 24);
    }
    wrBmp.Invalidate();
    Image1.Source = wrBmp;
}
What am I missing, and is there any problem with the slider value? I am working with small images, as usual on mobiles. I have already tried copying the original image to a duplicate one. I think the code is fine; after a lot of research I found that the problem is due to the slider value. A possible solution is assigning an initial value to the slider. I want some code help.
private double lastSlider3Vlaue;

private void slider3_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
{
    if (slider3 == null) return;
    double[] contrastArray = { 1, 1.2, 1.3, 1.6, 1.7, 1.9, 2.1, 2.4, 2.6, 2.9 };
    double CFactor = 0;
    int nIndex = 0;
    nIndex = (int)slider3.Value - (int)lastSlider3Vlaue;
    if (nIndex < 0)
    {
        nIndex = (int)lastSlider3Vlaue - (int)slider3.Value;
        this.lastSlider3Vlaue = slider3.Value;
        CFactor = contrastArray[nIndex];
    }
    else
    {
        nIndex = (int)slider3.Value - (int)lastSlider3Vlaue;
        this.lastSlider3Vlaue = slider3.Value;
        CFactor = contrastArray[nIndex];
    }

    WriteableBitmap wbOriginal;
    wbOriginal = new WriteableBitmap(Image1, null);
    wrBmp = new WriteableBitmap(wbOriginal.PixelWidth, wbOriginal.PixelHeight);
    wbOriginal.Pixels.CopyTo(wrBmp.Pixels, 0);
    int h = wrBmp.PixelHeight;
    int w = wrBmp.PixelWidth;

    for (int i = 0; i < wrBmp.Pixels.Count(); i++)
    {
        int pixel = wrBmp.Pixels[i];
        int B = (int)(pixel & 0xFF); pixel >>= 8;
        int G = (int)(pixel & 0xFF); pixel >>= 8;
        int R = (int)(pixel & 0xFF); pixel >>= 8;
        int A = (int)(pixel);

        R = (int)(((R - 128) * CFactor) + 128);
        G = (int)(((G - 128) * CFactor) + 128);
        B = (int)(((B - 128) * CFactor) + 128);

        if (R > 255) R = 255; if (G > 255) G = 255; if (B > 255) B = 255;
        if (R < 0) R = 0; if (G < 0) G = 0; if (B < 0) B = 0;

        wrBmp.Pixels[i] = B | (G << 8) | (R << 16) | (A << 24);
    }
    wrBmp.Invalidate();
    Image1.Source = wrBmp;
}
After debugging I found that the R, G, B values are decreasing continuously when sliding forward, but when sliding backwards they are also decreasing, whereas they should increase.
Please help; I have been working on this for the past three months. Besides this, could you also give me advice about how I can complete this whole image processing task?
Your algorithm is wrong. Each time the slider's value changes, you're adding that value to the picture's brightness. What makes your logic flawed is that the value returned by the slider will always be positive, and you're always adding the brightness to the same picture.
So, if the slider starts with a value of 10, I'll add 10 to the picture's brightness.
Then, I slide to 5. I'll add 5 to the previous picture's brightness (the one you already added 10 of brightness to).
Two ways to solve the issue:
Keep a copy of the original picture, and duplicate it every time your method is called. Then add the brightness to the copy (and not the original). That's the safest way (see the sketch at the end of this answer).
Instead of adding the new absolute value of the slider, calculate the relative value (how much it changed since the last time the method was called):
private double lastSliderValue;

private void slider1_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
{
    var offset = slider1.Value - this.lastSliderValue;
    this.lastSliderValue = slider1.Value;

    // Insert your old algorithm here, but replace occurrences of "slider1.Value" by "offset"
}
This second way can cause a few headaches though. Your algorithm caps the RGB values at 255. In those cases, you are losing information and cannot revert to the old state. For instance, take the extreme example of a slider value of 255. The algorithm sets all the pixels to 255, thus generating a white picture. Then you reduce the slider to 0, which should in theory restore the original picture. In this case, you'll subtract 255 from each pixel, but since every pixel's value is 255 you'll end up with a black picture.
Therefore, unless you find a clever way to solve the issue mentioned in the second solution, I'd recommend going with the first one.
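For what it's worth, the first approach boils down to this (a language-agnostic sketch in NumPy terms; original stands for whatever untouched pixel buffer you keep around):

import numpy as np

original = None  # set once, when the image is loaded

def on_brightness_changed(value):
    # always recompute from the original, never from the last adjusted result
    adjusted = original.astype(np.int32) + int(value)
    return np.clip(adjusted, 0, 255).astype(np.uint8)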
Is there any possibility in J2ME to convert an image (loaded from a PNG file with alpha) to a new transparent grayscale image?
So far I only got the RGB values, but not the alpha.
Thanks.
Edit: yes, it should be 32 bit grayscale.
I found the solution and here is the code:
public Image getGrayScaleImage() {
    int[] rgbData = new int[getWidth() * getHeight()];
    image.getRGB(rgbData, 0, getWidth(), 0, 0, getWidth(), getHeight());

    for (int x = 0; x < getWidth() * getHeight(); x++) {
        rgbData[x] = getGrayScale(rgbData[x]);
    }

    Image grayImage = Image.createRGBImage(rgbData, getWidth(), getHeight(), true);
    return grayImage;
}

private int getGrayScale(int c) {
    int[] p = new int[4];
    p[0] = (int) ((c & 0xFF000000) >>> 24); // Opacity level
    p[1] = (int) ((c & 0x00FF0000) >>> 16); // Red level
    p[2] = (int) ((c & 0x0000FF00) >>> 8);  // Green level
    p[3] = (int) (c & 0x000000FF);          // Blue level

    int nc = p[1] / 3 + p[2] / 3 + p[3] / 3;
    // a little bit brighter
    nc = nc / 2 + 127;

    p[1] = nc;
    p[2] = nc;
    p[3] = nc;

    int gc = (p[0] << 24 | p[1] << 16 | p[2] << 8 | p[3]);
    return gc;
}
getRGB returns the color value that also includes the alpha channel. So I only had to change each value in the array and create an image from that.
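For readers more comfortable with Python, the same per-pixel transform can be sketched in NumPy (argb assumed to be an integer array of 0xAARRGGBB values, as returned by Image.getRGB):

import numpy as np

def to_gray_keep_alpha(argb):
    a = (argb >> 24) & 0xFF
    r = (argb >> 16) & 0xFF
    g = (argb >> 8) & 0xFF
    b = argb & 0xFF
    nc = r // 3 + g // 3 + b // 3
    nc = nc // 2 + 127  # a little bit brighter, as in the Java version
    return (a << 24) | (nc << 16) | (nc << 8) | nc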
I found a helpful document in the nokia forum: MIDP 2.0: Working with Pixels and drawRGB()
Thanks for the code on converting to greyscale. However, I noticed that, on Nokia Series 40 devices, this code runs rather slowly.
There are two optimisations. The main one is to remove any object creation in getGrayScale(). Currently, an array object is created for every pixel. For an average, say QVGA, display that is 76,800 array objects created, which is a lot of garbage and will probably invoke the GC. Defining the int[4] as a field in the class removes this object creation. The trade-off here is a small amount of additional RAM used by the class.
The second is to cache the width and height in getGrayScaleImage(). On some devices, the calls to getWidth() and getHeight() will be made repeatedly without optimisation (a JIT compiler will be OK, but some interpreted devices won't). So, again for QVGA, getWidth() and getHeight() will be called more than 150,000 times between them.
In all, I found this modified version ran much quicker :-)
public Image getGrayScaleImage(Image screenshot) {
    int width = getWidth();
    int height = getHeight();
    int screenSizeInPixels = (width * height);

    int[] rgbData = new int[width * height];
    image.getRGB(rgbData, 0, width, 0, 0, width, height);

    for (int x = 0; x < screenSizeInPixels; x++) {
        rgbData[x] = getGrayScale(rgbData[x]);
    }

    Image grayImage = Image.createRGBImage(rgbData, width, height, true);
    return grayImage;
}

static int[] p = new int[4];

private int getGrayScale(int c) {
    p[0] = (int) ((c & 0xFF000000) >>> 24); // Opacity level
    p[1] = (int) ((c & 0x00FF0000) >>> 16); // Red level
    p[2] = (int) ((c & 0x0000FF00) >>> 8);  // Green level
    p[3] = (int) (c & 0x000000FF);          // Blue level

    int nc = p[1] / 3 + p[2] / 3 + p[3] / 3;
    // a little bit brighter
    nc = nc / 2 + 127;

    p[1] = nc;
    p[2] = nc;
    p[3] = nc;

    int gc = (p[0] << 24 | p[1] << 16 | p[2] << 8 | p[3]);
    return gc;
}
(If you really don't want to use class data space, just replace the int[] with four separate local int variables, which would live on the stack.)