replace each pixel by median of its 4 neighbourhood [closed]

I made this code. Can anybody tell me if it is right, and how can I verify it?
I = imread('cameraman.tif');
[M, N] = size(I);
for i = 2:M-1
    for j = 2:N-1
        x  = I(i-1, j);   % neighbour above
        y  = I(i+1, j);   % neighbour below
        z  = I(i, j-1);   % neighbour to the left
        zz = I(i, j+1);   % neighbour to the right
        A = [x y z zz];
        J(i, j) = median(A);   % median of the 4-neighbourhood
    end
end

In general, the only way you can discover whether it does what you expect is to test it and see; @Maroun already described this.
Here are some of the things I noticed:
I believe the code has no technical problems.
I am no expert on the topic, but it would surprise me if you did not want to include the centre pixel I(i,j) when determining the median; right now you only check the left, right, up, and down neighbours. Besides this, you may want to evaluate the up-left, up-right, down-left, and down-right neighbours as well. This is a design choice, however.
Another thing to notice is that your result will currently be smaller than the original image, because the border pixels are never assigned. You probably want to start with J = I; or J = NaN(size(I));
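One concrete way to verify the filter is to run it on a small matrix whose medians you can compute by hand, or to compare against an independent re-implementation. A minimal sketch of the same 4-neighbour filter in Python/numpy (an assumption for cross-checking only, not part of the original code):

import numpy as np

def median4(img):
    # Same loop as the MATLAB code: median of the 4-neighbourhood,
    # borders left at zero (the MATLAB J never assigns them either).
    # Returns floats; MATLAB's median of an even count also averages
    # the two middle values before the uint8 assignment rounds it.
    out = np.zeros_like(img, dtype=float)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median([img[i-1, j], img[i+1, j],
                                   img[i, j-1], img[i, j+1]])
    return out

# A 3x3 case you can check by hand: the centre's neighbours are
# 2, 8, 4, 6, so the median is (4 + 6) / 2 = 5.
print(median4(np.array([[1, 2, 3],
                        [4, 9, 6],
                        [7, 8, 5]])))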

Related

How can you calculate formulas like this in ruby? [closed]

I can do it with pen & paper, but I'm having a really hard time trying to build methods in Ruby that calculate simple formulas like the one below.
How can I build a method in ruby that returns CET?
(To make it easier consider (dj-d0)/365 equals j)
This can be translated almost literally to Ruby if you know the structures to use:
sum = (1..n).inject(0) do |s, j|
  # accumulate into s (the original block dropped it, so inject only
  # returned the last term); 365.0 forces float division in the exponent
  s + fc[j] / ((1 + cet) ** ((d[j] - d[0]) / 365.0)) - fc[0]
end
If you want to solve for something, that's another story. You might want to try Mathematica.
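If you do want to solve for cet in code rather than in Mathematica, note that the discounted sum is monotonically decreasing in cet, so a simple bisection converges quickly. A sketch, in Python for brevity, with made-up cash flows fc and payment days d (both hypothetical):

# fc[0] is the amount received today, fc[1..n] the repayments,
# d[j] the day on which payment j occurs (all values made up).
fc = [1000.0, 350.0, 350.0, 350.0]
d = [0, 30, 60, 90]

def f(cet):
    # Discounted repayments minus the amount received; the CET is
    # the rate at which this balance is zero.
    return sum(fc[j] / (1 + cet) ** ((d[j] - d[0]) / 365.0)
               for j in range(1, len(fc))) - fc[0]

# f is decreasing in cet, so bisect between a low and a high guess.
lo, hi = 0.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
print(lo)  # annual CET as a fraction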

Why is it so difficult to program a true random number generator? [closed]

I don't understand why a PRNG is easier to program than a true RNG. Shouldn't a typical processor make short work of producing a truly random number?
Computers are deterministic machines: given the same input, code included, they will produce the same result. To get true randomness you need to introduce something random from the real world, such as the current time, cosmic rays, or something else that you can't predict.
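A quick illustration in Python: reseeding a PRNG reproduces the exact same "random" sequence, while the operating system's entropy pool (fed by unpredictable real-world events) does not let you do that.

import random
import secrets

# A PRNG is deterministic: the same seed always yields the same sequence.
random.seed(42)
a = [random.random() for _ in range(3)]
random.seed(42)
b = [random.random() for _ in range(3)]
print(a == b)  # True: nothing here is "truly" random

# The OS mixes in real-world entropy (interrupt timings, device noise)
# and exposes it via os.urandom; the secrets module wraps that source.
print(secrets.token_bytes(8))  # unpredictable, not reproducible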

Cutting rectangular pieces out of a rectangular paper and minimizing the wastage [closed]

A rectangular piece of paper of size W*H (width * height) is given, and one is supposed to cut rectangular pieces out of it. A list of k piece sizes is given, each size specified as w*h. All the numbers are integers.
Each cut must go from one end of the paper to the other.
There can be any number of pieces of each listed size (including none).
The aim is to use as much paper as possible, i.e. to minimize wastage.
Can anyone suggest how to approach this problem?
This is a variant of the knapsack problem; with end-to-end cuts in two dimensions it is usually called the (guillotine) cutting stock problem. I will spare you the details here, but you can get more info and ideas on how to approach it here:
http://en.wikipedia.org/wiki/Knapsack_problem
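Because the cuts must go from one end to the other, a dynamic program over sub-rectangles works: any such cut splits a sheet into two smaller sheets that can be solved independently. A hedged sketch in Python (it assumes pieces may be rotated; drop the rotated check if they may not):

def max_used_area(W, H, pieces):
    # best[w][h] = largest area of a w x h sheet coverable by pieces
    # using only edge-to-edge (guillotine) cuts.
    best = [[0] * (H + 1) for _ in range(W + 1)]
    for w in range(1, W + 1):
        for h in range(1, H + 1):
            # Base case: one piece exactly fills this sub-rectangle.
            # Trimming a smaller piece to fit emerges from the cuts below.
            for pw, ph in pieces:
                if (pw, ph) == (w, h) or (ph, pw) == (w, h):
                    best[w][h] = w * h
            # Try every vertical cut ...
            for x in range(1, w // 2 + 1):
                best[w][h] = max(best[w][h], best[x][h] + best[w - x][h])
            # ... and every horizontal cut.
            for y in range(1, h // 2 + 1):
                best[w][h] = max(best[w][h], best[w][y] + best[w][h - y])
    return best[W][H]

# Example: a 5 x 4 sheet with 2 x 3 and 1 x 4 pieces. Prints 20,
# since five 1 x 4 strips tile the sheet exactly (zero wastage).
print(max_used_area(5, 4, [(2, 3), (1, 4)]))

The running time is O(W * H * (W + H + k)), fine for moderate sheet sizes.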

Kahan summation and relative errors; or real life war stories of "getting the wrong result instead of the correct one" [closed]

I am interested in "war stories" like this one:
I wrote a program involving the sum of floating point numbers but I did not use the Kahan summation.
The sum was bad_sum and the program gave me a wrong result.
A colleague of mine, more versed than me in numerical analysis, had a look at the code and suggested that I use the Kahan summation; the sum is now good_sum and the program gives me the correct result.
I am interested in real-life production-code, not in code samples "artificially" created in order to explain the Kahan summation algorithm.
In particular what was the relative error (bad_sum-good_sum)/good_sum for your application?
Up to now I have no similar story to tell. Maybe I will run some tests (running my program on an input data set, logging the program results and the sums with and without Kahan, and estimating the relative error).
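For anyone who wants to run such a test, a minimal sketch of the measurement itself (synthetic data, so not a war story; math.fsum serves as the correctly rounded reference):

import math
import random

def kahan_sum(xs):
    # Compensated summation: carry the low-order bits lost at each step.
    total, c = 0.0, 0.0
    for x in xs:
        y = x - c            # subtract the running compensation
        t = total + y        # low-order digits of y may be lost here
        c = (t - total) - y  # recover what was just lost ...
        total = t            # ... and fold it into the next step
    return total

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(10**6)]
naive = sum(xs)
kahan = kahan_sum(xs)
exact = math.fsum(xs)  # correctly rounded reference sum
print("naive relative error:", (naive - exact) / exact)
print("kahan relative error:", (kahan - exact) / exact)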

Hash stable to small changes in text [closed]

Is there a hash function that is stable to small changes in text? I'm looking for the opposite of a cryptographic hash, where small changes in the source lead to huge changes in the result.
Something like a perceptual hash for text. Is there such a thing?
Edited: by "small changes in text" I mean changes in punctuation, correction of orthographic/grammatical mistakes, etc. The text itself is an article, like a Wikipedia entry (but it can be much smaller, like 2 or 3 paragraphs).
Bonus points if somebody can point to a Python implementation.
You're looking for locality-sensitive hashing.
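One common locality-sensitive hash for text is SimHash: near-duplicate documents get fingerprints that differ in only a few bits. A toy Python sketch (word-level tokens for simplicity; real implementations usually use shingles and term weights):

import hashlib

def simhash(text, bits=64):
    # For each token, hash it and vote +1/-1 on every bit position;
    # the fingerprint keeps the sign of each vote total.
    v = [0] * bits
    for token in text.lower().split():
        h = int.from_bytes(hashlib.md5(token.encode()).digest()[:8], "big")
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a, b):
    return bin(a ^ b).count("1")

h1 = simhash("The quick brown fox jumps over the lazy dog")
h2 = simhash("The quick brown fox jumps over the lazy dog!")
# Typically far fewer differing bits than the ~32 of 64 expected
# for unrelated texts.
print(hamming(h1, h2))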
