MATLAB Greyscale 12 bit to 8 bit - image

I'm trying to create an algorithm to convert a greyscale image from 12 bit to 8 bit.
I have a greyscale like this one:
The scale is represented as a matrix. The problem is that a simple multiplication by 1/16 destroys the first grey columns.
Here is the code example:
in =[
1 1 1 3 3 3 15 15 15 63 63 63;
1 1 1 3 3 3 15 15 15 63 63 63;
1 1 1 3 3 3 15 15 15 63 63 63;
1 1 1 3 3 3 15 15 15 63 63 63
];
[zeilen spalten] = size(in); % zeilen = rows, spalten = columns (not used below)
eight = round(in/16);
imshow(uint8(eight));
By "destroy" I mean that the first columns are now black.

Simply rescale the image: divide every single element by the maximum intensity that a 12-bit unsigned integer can represent (2^12 - 1 = 4095), then multiply by the maximum intensity that an 8-bit unsigned integer can represent (2^8 - 1 = 255).
Therefore:
out = uint8((255.0/4095.0)*(double(in)));
You need to cast to double to ensure that you maintain floating point precision when performing this scaling, and then cast to uint8 so that the image type is ensured to be 8-bit. You have cleverly deduced that this scaling factor is roughly 1/16 (since 255.0/4095.0 ≈ 1/16). However, the first 6 columns of your test image will surely come out as zero, because intensities of 1 and 3 in a 12-bit image are simply too small to be represented in the equivalent 8-bit form, so they get rounded down to 0. If you think about it, every increase of 16 in 12-bit intensity registers as an increase of a single intensity level in the 8-bit image, or:
12-bit --> 8-bit
0 --> 0
15 --> 1
31 --> 2
47 --> 3
63 --> 4
... --> ...
4095 --> 255
Because your values of 1 and 3 are not high enough to get to the next level, these get rounded down to 0. However, your values of 15 get mapped to 1, and the values of 63 get mapped to 4, which is what we expect when you run the above code on your test input.
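To see the mapping on your test input, here's a quick check using the conversion above; the expected values follow directly from the table (1 and 3 map to 0, 15 maps to 1, 63 maps to 4):
in = [1 1 1 3 3 3 15 15 15 63 63 63];   % one row of the test image
out = uint8((255.0/4095.0)*double(in))
% out = 0 0 0 0 0 0 1 1 1 4 4 4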

How do I create strictly ordered uniformly distributed buckets out of an array?

I'm looking to take an array of integers and perform a partial bucket sort on that array: every element in an earlier bucket is less than every element in the current bucket. For example, if I have 10 buckets for the values 0-100, then 0-9 would go in the first bucket, 10-19 in the second, and so on.
For one example, I can take 1 12 23 44 48 and put them into 4 buckets out of 10. But if I have 1, 2, 7, 4, 9, 1, then all values go into a single bucket. I'm looking for a way to evenly distribute the values over all the buckets while maintaining an ordering. The elements in each bucket don't have to be sorted. For example, I'm looking for something like this:
2 1 9 2 3 8 7 4 2 8 11 4 => [[2, 1], [2, 2], [3], [4], [4], [7], [8, 8], [9], [11]]
I'm trying to use this as a quick way to partition a list in a map-reduce.
Thanks for the help.
Edit, maybe this clears things up:
I want to create a hashing function where all elements in bucket1 < bucket2 < bucket3 ..., where each bucket is unsorted.
If I understand it correctly you have around 100TB of data, or 13,743,895,347,200 unsigned 64-bit integers, that you want to distribute over a number of buckets.
A first step could be to iterate over the input, looking at e.g. the highest 24 bits of each integer, and counting them. That gives you a list of 16,777,216 ranges, each with an average count of 819,200, so it may be possible to store the counts in 32-bit unsigned integers, which will take up 64 MB.
You can then use this to create a lookup table that tells you which bucket each of those 16,777,216 ranges goes into. You calculate how many integers are supposed to go into each bucket (input size divided by number of buckets), go over the array keeping a running total of the counts, and set each range to bucket 1 until the running total is too much for bucket 1; then you set the ranges to bucket 2, and so on...
There will of course always be a range that has to be split between bucket n and bucket n+1. To keep track of this, you create a second table that stores how many integers in these split ranges are supposed to go into bucket n+1.
So you now have e.g.:
HIGH 24 BITS   RANGE                 BUCKET   BUCKET+1
 0             0 ~ 2^40-1            1        0
 1             2^40 ~ 2*2^40-1       1        0
 2             2*2^40 ~ 3*2^40-1     1        0
 3             3*2^40 ~ 4*2^40-1     1        0
 ...
 16            16*2^40 ~ 17*2^40-1   1        0
 17            17*2^40 ~ 18*2^40-1   1        284,724   <- highest 284,724 go into bucket 2
 18            18*2^40 ~ 19*2^40-1   2        0
 ...
You can now iterate over the input again, and for each integer look at the highest 24 bits, and use the lookup table to see which bucket the integer is supposed to go into. If the range isn't split, you can immediately move the integer into the right bucket. For each split range, you create an ordered list or priority queue that can hold as many integers as need to go into the next bucket; you store only the highest values in this list or queue; any smaller integer goes straight to the bucket, and if an integer is added to the full list or queue, the smallest value is moved to the bucket. At the end this list or queue is added to the next bucket.
The number of ranges should be as high as possible within the available memory, because that minimises the number of integers in split ranges. With the huge input you have, you may need to save the split ranges to disk, and then afterwards look at each of them separately, find the highest x values, and move them to the buckets accordingly.
The complexity of this is N for the first run, then you iterate over the ranges R, then N as you iterate over the input again, and then for the split ranges you'll have something like M·log M to sort and M to distribute, so a total of 2N + R + M·log M + M. Using a high number of ranges to keep the number of integers in split ranges low will probably be the best strategy to speed the process up.
Actually, the number of integers M that are in split ranges depends on the number of buckets B and ranges R, with M = N×B/R, so that e.g. with a thousand buckets and a million ranges, 0.1% of the input would be in split ranges and have to be sorted. (These are averages, depending on the actual distribution.) That makes the total complexity 2N + R + (N×B/R)·log(N×B/R) + N×B/R.
Another example:
Input: N = 13,743,895,347,200 unsigned 64-bit integers
Ranges: 2^32 (using the highest 32 bits of each integer)
Integers per range: 3200 (average)
Count list: 2^32 16-bit integers = 8 GB
Lookup table: 2^32 16-bit integers = 8 GB
Split range table: B 16-bit integers = 2×B bytes
With 1024 buckets, that would mean that B/R = 1/2^22, and there are 1023 split ranges with around 3200 integers each, or around 3,276,800 integers in total; these will then have to be sorted and distributed over the buckets.
With 1,048,576 buckets, that would mean that B/R = 1/2^12, and there are 1,048,575 split ranges with around 3200 integers each, or around 3,355,443,200 integers in total. (More than 65,536 buckets would of course require a lookup table with 32-bit integers.)
(If you find that the total of the counts per range doesn't equal the total size of the input, there has been overflow in the count list, and you should switch to a larger integer type for the counts.)
Let's run through a tiny example: 50 integers in the range 1-100 have to be distributed over 5 buckets. We choose a number of ranges, say 20, and iterate over the input to count the number of integers in each range:
input:
2 9 14 17 21 30 33 36 44 50 51 57 69 75 80 81 87 94 99
1 9 15 16 21 32 40 42 48 55 57 66 74 76 88 96
5 6 20 24 34 50 52 58 70 78 99
7 51 69
55
count per range:
3 4 2 3 3 1 3 2 2 3 5 3 0 4 2 3 1 2 1 3
Then, knowing that each bucket should hold 10 integers, we iterate over the list of counts per range, and assign each range to a bucket:
3 4 2 3 3 1 3 2 2 3 5 3 0 4 2 3 1 2 1 3  <- count/range
1 1 1 1 2 2 2 2 3 3 3 4 4 4 4 5 5 5 5 5  <- to bucket
      2       1     1                     <- to next
When a range has to be split between two buckets, we store the number of integers that should go to the next bucket in a separate table.
We can then iterate over the input again, and move all the integers in non-split ranges into the buckets; the integers in split ranges are temporarily moved into separate buckets:
bucket 1: 9 14 2 9 1 15 6 5 7
temp 1/2: 17 16 20
bucket 2: 21 33 30 32 21 24 34
temp 2/3: 36 40
bucket 3: 44 50 48 42 50
temp 3/4: 51 55 52 51 55
bucket 4: 57 75 69 66 74 57 57 70 69
bucket 5: 81 94 87 80 99 88 96 76 78 99
Then we look at the temp buckets one by one, find the x highest integers as indicated in the second table, move them to the next bucket, and what is left over to the previous bucket:
temp 1/2: 17 16 20 (to next: 2) bucket 1: 16 bucket 2: 17 20
temp 2/3: 36 40 (to next: 1) bucket 2: 36 bucket 3: 40
temp 3/4: 51 55 52 51 55 (to next: 1) bucket 3: 51 51 52 55 bucket 4: 55
And the end result is:
bucket 1: 9 14 2 9 1 15 6 5 7 16
bucket 2: 21 33 30 32 21 24 34 17 20 36
bucket 3: 44 50 48 42 50 40 51 51 52 55
bucket 4: 57 75 69 66 74 57 57 70 69 55
bucket 5: 81 94 87 80 99 88 96 76 78 99
So, out of 50 integers, we've had to sort a group of 3, 2 and 5 integers.
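For the record, here is a minimal MATLAB sketch of this tiny example (variable names and the width-5 ranges are my own; the width-5 ranges reproduce the counts shown above, and this sketch assumes no range spans more than two buckets; a real implementation would stream the input and use a bounded priority queue rather than sorting each split range):
vals = [2 9 14 17 21 30 33 36 44 50 51 57 69 75 80 81 87 94 99 ...
        1 9 15 16 21 32 40 42 48 55 57 66 74 76 88 96 ...
        5 6 20 24 34 50 52 58 70 78 99 7 51 69 55];   % the 50 inputs
nRanges = 20;  nBuckets = 5;
perBucket = numel(vals) / nBuckets;                    % 10 per bucket

r = ceil(vals / 5);                                    % range of each value (1..20)
counts = accumarray(r(:), 1, [nRanges 1])';            % count per range

running = cumsum(counts);                              % running totals
prior = [0 running(1:end-1)];
bucketOfRange = 1 + floor(prior / perBucket);          % bucket of each range
toNext = max(0, running - perBucket*bucketOfRange);    % overflow of split ranges

buckets = cell(1, nBuckets);
for k = 1:nRanges
    v = sort(vals(r == k));                            % smallest values first
    n = toNext(k);
    b = bucketOfRange(k);
    buckets{b} = [buckets{b} v(1:end-n)];              % bulk stays in bucket b
    if n > 0
        buckets{b+1} = [buckets{b+1} v(end-n+1:end)];  % n highest move on
    end
end
cellfun(@numel, buckets)                               % -> 10 10 10 10 10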
Actually, you don't need to create a table with the number of integers in the split ranges that should go to the next bucket. You know how many integers are supposed to go into each bucket, so after the initial distribution you can look at how many integers are already in each bucket, and then add the necessary number of (lowest value) integers from the split range. In the example above, which expects 10 integers per bucket, that would be:
3 4 2 3 3 1 3 2 2 3 5 3 0 4 2 3 1 2 1 3 <- count/range
1 1 1 / 2 2 2 / 3 3 / 4 4 4 4 5 5 5 5 5 <- to bucket
bucket 1: 9 14 2 9 1 15 6 5 7 <- add 1
temp 1/2: 17 16 20 <- 3-1 = 2 go to next bucket
bucket 2: 21 33 30 32 21 24 34 <- add 3-2 = 1
temp 2/3: 36 40 <- 2-1 = 1 goes to next bucket
bucket 3: 44 50 48 42 50 <- add 5-1 = 4
temp 3/4: 51 55 52 51 55 <- 5-4 = 1 goes to next bucket
bucket 4: 57 75 69 66 74 57 57 70 69 <- add 1-1 = 0
bucket 5: 81 94 87 80 99 88 96 76 78 99 <- add 0
The calculation of how much of the input will be in split ranges and need to be sorted, given above as M = N × B/R, is an average for input that is roughly evenly distributed. A slight bias, with more values in a certain part of the input space will not have much effect, but it would indeed be possible to craft worst-case input to thwart the algorithm.
Let's look again at this example:
Input: N = 13,743,895,347,200 unsigned 64-bit integers
Ranges: 2^32 (using the highest 32 bits of each integer)
Integers per range: 3200 (average)
Buckets: 1,048,576
Integers per bucket: 13,107,200
For a start, if there are ranges that contain more than 2^32 integers, you'd have to use 64-bit integers for the count table, so it would be 32 GB in size, which could force you to use fewer ranges, depending on the available memory.
Also, every range that holds more integers than the target size per bucket is automatically a split range. So if the integers are distributed with a lot of local clusters, you may find that most of the input is in split ranges that need to be sorted.
If you have enough memory to run the first step using 2^32 ranges, then each range has only 2^32 different values, and you could distribute the split ranges over the buckets using a counting sort (which has linear complexity).
If you don't have the memory to use 2^32 ranges, and you end up with problematically large split ranges, you could use the complete algorithm again on the split ranges. Let's say you used 2^28 ranges, expecting each range to hold around 51,200 integers, and you end up with an unexpectedly large split range with 5,120,000,000 integers that need to be distributed over 391 buckets. If you ran the algorithm again for this limited range, you'd have 2^28 ranges (each holding on average 19 integers with a maximum of 16 different values) for just 391 buckets, and only a tiny risk of ending up with large split ranges again.
Note: the ranges that have to be split over two or more buckets don't necessarily have to be sorted. You can e.g. use a recursive version of Dijkstra's Dutch national flag algorithm to partition the range into a part with the x smallest values and a part with the largest values. The average complexity of partitioning would be linear (when using a random pivot), against the O(N·log N) complexity of sorting.
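To sketch that partitioning idea in MATLAB (my own helper, not part of the answer; assumes 1 <= x <= numel(v)):
function v = partitionSmallest(v, x)
% Rearrange v so that v(1:x) holds the x smallest values (unsorted),
% using Hoare partitioning with a random pivot; average cost is linear.
lo = 1;  hi = numel(v);
while lo < hi
    p = v(randi([lo hi]));              % random pivot value
    i = lo;  j = hi;
    while i <= j
        while v(i) < p, i = i + 1; end
        while v(j) > p, j = j - 1; end
        if i <= j
            [v(i), v(j)] = deal(v(j), v(i));
            i = i + 1;  j = j - 1;
        end
    end
    if x <= j                           % x-th smallest is in the left part
        hi = j;
    elseif x >= i                       % ... or in the right part
        lo = i;
    else                                % j < x < i: v(1:x) is settled
        return
    end
end
end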

Where is my mistake in this answer to Project Euler #58?

I am solving Project Euler question 58. Here a square is created by starting with 1 and spiralling anticlockwise in the following way (here the side length equals 7):
37 36 35 34 33 32 31
38 17 16 15 14 13 30
39 18 5 4 3 12 29
40 19 6 1 2 11 28
41 20 7 8 9 10 27
42 21 22 23 24 25 26
43 44 45 46 47 48 49
The question is to find out, as we keep spiralling around the square, when the ratio of primes on the diagonals to the total amount of numbers on the diagonals becomes smaller than 0.10.
I am convinced I have the solution with the code below (see code comments for clarification), but the site states that the answer is wrong when I enter it.
require 'prime'
# We use a mathematical derivation of the corner values, keep increasing the value till we find a ratio smaller
# than 0.10 and increase the grid_size and amount of numbers on diagonals each iteration
side_length = 3 # start with grid size of 3x3 so that we do not get into trouble with 1x1 grid
prime_count = 3 # 3, 5, 7 are prime and on a diagonal in a 3x3 grid
diagonal_size = 5
prime_ratio = 1 # dummy value bigger than 0.10 so we can start the loop
while prime_ratio >= 0.10
  # Add one to prime count for each corner if it is prime
  # Corners are given by n^2 (bottom right), n^2-n+1, n^2-2n+2, and n^2-3n+3
  prime_count += 1 if (side_length**2).prime?
  prime_count += 1 if (side_length**2-side_length+1).prime?
  prime_count += 1 if (side_length**2-2*side_length+2).prime?
  prime_count += 1 if (side_length**2-3*side_length+3).prime?
  # Divide amount of primes counted by the diagonal length to get prime ratio
  prime_ratio = prime_count/diagonal_size.to_f
  # Increase the side length by two (full spiral) and diagonal size by four
  side_length += 2 and diagonal_size += 4
end
puts side_length-2 #-2 to account for last addition in while-loop
# => 26612
It probably is wrong and the site is right. I have been stuck on this problem for quite some time now. Can anyone point out my mistake?
side_length += 2 and diagonal_size += 4 should be at the beginning of the loop: as written, the first iteration tests the corners of the 3x3 grid again, although its primes (3, 5, 7) are already counted in the initial prime_count, so every ratio afterwards is computed from a double-counted layer. (With the increments moved to the top of the loop, the final line should print side_length rather than side_length-2.)
I couldn't check, as I do not have Ruby installed, but I can reproduce the same problem in my Python solution.
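To make the fix concrete, here is the corrected loop order transcribed to MATLAB (a sketch for illustration only, since the original is Ruby; isprime is a stock MATLAB function):
% Grow the spiral first, then test the new layer's corners, so the
% 3x3 corners are only counted once, in the initial prime_count.
side = 3;  prime_count = 3;  diagonal_size = 5;   % 3, 5, 7 are prime
ratio = prime_count / diagonal_size;
while ratio >= 0.10
    side = side + 2;
    diagonal_size = diagonal_size + 4;
    corners = [side^2, side^2-side+1, side^2-2*side+2, side^2-3*side+3];
    prime_count = prime_count + sum(isprime(corners));
    ratio = prime_count / diagonal_size;
end
disp(side)    % no -2 correction needed with this ordering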

Cumulative Maxima as Indicated by X in APL

The third item in the FinnAPL Library is called “Cumulative maxima (⌈) of subvectors of Y indicated by X”, where X is a binary vector and Y is a vector of numbers. Here's an example of its usage:
X←1 0 0 0 1 0 0 0
Y←9 78 3 2 50 7 69 22
Y[A⍳⌈\A←⍋A[⍋(+\X)[A←⍋Y]]] ⍝ output 9 78 78 78 50 50 69 69
You can see that, beginning either from the start of the array or from any 1 value in X, the cumulative maximum is found for all corresponding digits in Y until another 1 is found in X. In the example given, X is dividing the array into two equal parts of 4 numbers each. In the first part, 9 is the maximum until 78 is encountered, and in the second part 50 is the maximum until 69 is encountered.
That's easy enough to understand, and I could blindly use it as is, but I'd like to understand how it works, because APL idioms are essentially algorithms made up of operators and functions. To understand APL well, it's important to understand how the masters were able to weave it all together into such compact and elegant lines of code.
I find this particular idiom especially hard to understand because of the indexing nested two layers deep. So my question is, what makes this idiom tick?
This idiom can be broken down into smaller idioms, and most importantly, it contains idiom #11 from the FinnAPL Library entitled:
Grade up (⍋) for sorting subvectors of Y indicated by X
Using the same values for X and Y given in the question, here's an example of its usage:
X←1 0 0 0 1 0 0 0
Y←9 78 3 2 50 7 69 22
A[⍋(+\X)[A←⍋Y]] ⍝ output 4 3 1 2 6 8 5 7
As before, X is dividing the vector into two halves, and the output indicates, for each position, what digit of Y is needed to sort each of the halves. So, the 4 in the output is saying that it needs the 4th digit of Y (2) in the 1st position; the 3 indicates the 3rd digit (3) in the 2nd position; the 1 indicates the 1st digit (9) in the third position; etc. Thus, if we apply this indexing to Y, we get:
Y[A[⍋(+\X)[A←⍋Y]]] ⍝ output 2 3 9 78 7 22 50 69
In order to understand the indexing within this grade-up idiom, consider what is happening with the following:
(+\X)[A←⍋Y] ⍝ Sorted Cumulative Addition
Breaking it down step by step:
A←⍋Y ⍝ 4 3 6 1 8 5 7 2
+\X ⍝ 1 1 1 1 2 2 2 2
(+\X)[A←⍋Y] ⍝ 1 1 2 1 2 2 2 1 SCA
A[⍋(+\X)[A←⍋Y]] ⍝ 4 3 1 2 6 8 5 7
You can see that sorted cumulative addition (SCA) of X 1 1 2 1 2 2 2 1 applied to A acts as a combination of compress left and compress right. All values of A that line up with a 1 are moved to the left, and those lining up with a 2 move to the right. Of course, if X had more 1s, it would be compressing and locating the compressed packets in the order indicated by the values of the SCA result. For example, if the SCA of X were like 3 3 2 1 2 2 1 1 1, you would end up with the 4 digits corresponding to the 1s, followed by the 3 digits corresponding to the 2s, and finally, the 2 digits corresponding to the 3s.
You may have noticed that I skipped the step that would show the effect of grade up ⍋:
(+\X)[A←⍋Y] ⍝ 1 1 2 1 2 2 2 1 SCA
⍋(+\X)[A←⍋Y] ⍝ 1 2 4 8 3 5 6 7 Grade up
A[⍋(+\X)[A←⍋Y]] ⍝ 4 3 1 2 6 8 5 7
The effect of compression and rearrangement isn't accomplished by SCA alone. It effectively acts as rank, as I discussed in another post. Also in that post, I talked about how rank and index are essentially two sides of the same coin, and how you can use grade up to switch between the two. Therefore, that is what is happening here: SCA is being converted to an index to apply to A, and the effect is grade-up sorted subvectors as indicated by X.
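If it helps, the same index/rank switch can be seen in MATLAB terms (my own illustration, not part of the idiom): sorting twice turns a sort permutation (an index) into ranks, which is exactly what the extra grade-up does.
Y = [9 78 3 2 50 7 69 22];
[~, idx] = sort(Y);      % idx = 4 3 6 1 8 5 7 2, the grade-up of Y
[~, rnk] = sort(idx);    % rnk = 4 8 2 1 6 3 7 5, rank of each element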
From Sorted Subvectors to Cumulative Maxima
As already described, the result of sorting the subvectors is an index, which when applied to Y, compresses the data into packets and arranges those packets according to X. The point is that it is an index, and once again, grade up is applied, which converts indexes into ranks:
⍋A[⍋(+\X)[A←⍋Y]] ⍝ 3 4 2 1 7 5 8 6
The question here is, why? Well, the next step is applying a cumulative maxima, and that really only makes sense if it is applied to rank values, which represent relative magnitude within each packet. Looking at the values, you can see that 4 is the maximum for the first group of 4, and 8 is for the second group. Those values correspond to the input values of 78 and 69, which is what we want. It doesn't make sense (at least in this case) to apply a maxima to index values, which represent position, so the conversion to rank is necessary. Applying the cumulative maxima gives:
⌈\A←⍋A[⍋(+\X)[A←⍋Y]] ⍝ 3 4 4 4 7 7 8 8
That leaves one last step to finish the index. After the cumulative maxima operation, the vector values still represent rank, so they need to be converted back to index values. To do that, the dyadic index-of function (⍳) is used. It takes the values in the right argument and returns their positions as found in the left argument:
A⍳⌈\A←⍋A[⍋(+\X)[A←⍋Y]] ⍝ 1 2 2 2 5 5 7 7
To make it easier to see:
3 4 2 1 7 5 8 6 left argument
3 4 4 4 7 7 8 8 right argument
1 2 2 2 5 5 7 7 result
The 4 is in the 2nd position in the left argument, so the result shows a 2 for every 4 in the right argument. The index is complete, so applying it to Y, we get the expected result:
Y[A⍳⌈\A←⍋A[⍋(+\X)[A←⍋Y]]] ⍝ 9 78 78 78 50 50 69 69
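For comparison, the effect of the whole idiom can also be written out in MATLAB terms (again my own transcription; cummax requires R2014b or later):
X = [1 0 0 0 1 0 0 0];
Y = [9 78 3 2 50 7 69 22];
g = cumsum(X);                       % subvector id for each position
out = Y;
for k = 1:max(g)
    out(g == k) = cummax(Y(g == k)); % restart the running maximum at each 1
end
disp(out)                            % 9 78 78 78 50 50 69 69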
My implementation:
X←1 0 0 0 1 0 0 0
Y←9 78 3 2 50 7 69 22
¯1+X/⍳⍴X ⍝ position
0 4
(,¨¯1+X/⍳⍴X)↓¨⊂Y
9 78 3 2 50 7 69 22 50 7 69 22
(1↓(X,1)/⍳⍴X,1)-X/⍳⍴X ⍝ length
4 4
(,¨(1↓(X,1)/⍳⍴X,1)-X/⍳⍴X)↑¨(,¨¯1+X/⍳⍴X)↓¨⊂Y
9 78 3 2 50 7 69 22
⌈\¨(,¨(1↓(X,1)/⍳⍴X,1)-X/⍳⍴X)↑¨(,¨¯1+X/⍳⍴X)↓¨⊂Y
9 78 78 78 50 50 69 69
∊⌈\¨(,¨(1↓(X,1)/⍳⍴X,1)-X/⍳⍴X)↑¨(,¨¯1+X/⍳⍴X)↓¨⊂Y
9 78 78 78 50 50 69 69
Have a nice day.

sorting 2d points into a matrix

I have the following problem:
An image is given and I am doing some blob detection. As a limit, let's say I have a maximum of 16 blobs, and from each blob I calculate the centroid (x,y position).
If no distortion happens, these centroids are arranged in an equidistant 4x4 grid, but they could be heavily distorted.
The assumption is that they keep more or less the grid form, but they could be heavily warped.
I need to sort the blobs such that I know which one is the nearest left, right, up and down. So the best would be to write these blobs into a matrix.
If this is not enough, it could happen that I detect fewer than 16, and then I also need to sort them into a matrix.
Does anyone know how this could be efficiently solved in MATLAB?
Thanks.
[update 1:]
I uploaded an image; the red numbers are the labels my blob detection algorithm assigns to each blob.
The resulting matrix should look like this with these numbers:
1 2 4 3
6 5 7 8
9 10 11 12
13 16 14 15
e.g. I start with blob 11 and the nearest right number is 12 and so on
[update 2:]
The posted solution looks quite nice. In reality it could happen that one of the outer spots is missing, or maybe two... I know that this makes everything much more complicated; I just want to get a feeling for whether it is worth spending time on.
These problems arise if you analyze a wavefront with a Shack-Hartmann wavefront sensor and you want to increase the dynamic range :-)
The spots could be so warped that the dividing lines are no longer orthogonal.
Maybe someone knows good literature on classification algorithms.
The best solution would be one which could be implemented on an FPGA without too much effort, but at this stage that is not so important.
This will work as long as the blobs form a square and are relatively ordered:
Image:
Code:
bw = imread('blob.jpg');
bw = im2bw(bw);
rp = regionprops(bw,'Centroid');
% Must be a square
side = sqrt(length(rp));
centroids = vertcat(rp.Centroid);
centroid_labels = cellstr(num2str([1:length(rp)]'));
figure(1);
imshow(bw);
hold on;
text(centroids(:,1),centroids(:,2),centroid_labels,'Color','r','FontSize',60);
hold off;
% Find topleft element - minimum distance from origin
[~,topleft_idx] = min(sqrt(centroids(:,1).^2+centroids(:,2).^2));
% Find bottomright element - maximum distance from origin
[~,bottomright_idx] = max(sqrt(centroids(:,1).^2+centroids(:,2).^2));
% Find bottom left element - maximum normal distance from line formed by
% topleft and bottom right blob
A = centroids(bottomright_idx,2)-centroids(topleft_idx,2);
B = centroids(topleft_idx,1)-centroids(bottomright_idx,1);
C = -B*centroids(topleft_idx,2)-A*centroids(topleft_idx,1);
[~,bottomleft_idx] = max(abs(A*centroids(:,1)+B*centroids(:,2)+C)/sqrt(A^2+B^2));
% Sort blobs based on distance from line formed by topleft and bottomleft
% blob
A = centroids(bottomleft_idx,2)-centroids(topleft_idx,2);
B = centroids(topleft_idx,1)-centroids(bottomleft_idx,1);
C = -B*centroids(topleft_idx,2)-A*centroids(topleft_idx,1);
[~,leftsort_idx] = sort(abs(A*centroids(:,1)+B*centroids(:,2)+C)/sqrt(A^2+B^2));
% Reorder centroids and redetermine bottomright_idx and bottomleft_idx
centroids = centroids(leftsort_idx,:);
bottomright_idx = find(leftsort_idx == bottomright_idx);
bottomleft_idx = find(leftsort_idx == bottomleft_idx);
% Sort blobs based on distance from line formed by bottomleft and
% bottomright blob
A = centroids(bottomright_idx,2)-centroids(bottomleft_idx,2);
B = centroids(bottomleft_idx,1)-centroids(bottomright_idx,1);
C = -B*centroids(bottomleft_idx,2)-A*centroids(bottomleft_idx,1);
[~,bottomsort_idx] = sort(abs(A*reshape(centroids(:,1),side,side)+B*reshape(centroids(:,2),side,side)+C)/sqrt(A^2+B^2),'descend');
disp(leftsort_idx(bsxfun(@plus,bottomsort_idx,0:side:side^2-1)));
Output:
2 12 13 20 25 31
4 11 15 19 26 32
1 7 14 21 27 33
3 8 16 22 28 34
6 9 17 24 29 35
5 10 18 23 30 36
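A side note on the repeated A/B/C blocks in the code: they compute the perpendicular distance from each centroid to the line through two reference blobs, i.e. |Ax+By+C|/sqrt(A^2+B^2). A small hypothetical helper (my own naming) that could replace them:
% Perpendicular distance from each row of the n-by-2 matrix P
% to the line through points p1 and p2.
function d = linedist(P, p1, p2)
A = p2(2) - p1(2);                 % line through p1, p2 as Ax+By+C=0
B = p1(1) - p2(1);
C = -B*p1(2) - A*p1(1);
d = abs(A*P(:,1) + B*P(:,2) + C) / sqrt(A^2 + B^2);
end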
Just curious, are you using this to automate camera calibration through a checkerboard or something?
UPDATE:
For skewed image
tform = maketform('affine',[1 0 0; .5 1 0; 0 0 1]);
bw = imtransform(bw,tform);
Output:
1 4 8 16 21 25
2 5 10 18 23 26
3 6 13 19 27 29
7 9 17 24 30 32
11 14 20 28 33 35
12 15 22 31 34 36
For rotated image:
bw = imrotate(bw,20);
Output:
1 4 10 17 22 25
2 5 12 18 24 28
3 6 14 21 26 31
7 9 16 23 30 32
8 13 19 27 33 35
11 15 20 29 34 36

fourier and zero padding

I'm filtering an image using a mask and the Discrete Fourier Transform. So far I have this:
A=double(imread('C:\Users\samsung\Documents\Lab Imagenes\CHE.jpg','jpg'));
B=[1 4 6 4 1; 4 16 24 16 4; 6 24 36 24 6; 4 16 24 16 4; 1 4 6 4 1];
F=(1/256).*(B);
DFT_A=fftshift(fft2(A));
imshow(DFT_A);
DFT_A_F=DFT_A.*F;
figure
imshow(DFT_A_F)
but when I want to see partial results, I get this error:
??? Error using ==> times
Matrix dimensions must agree.
Error in ==> fourier1 at 10
DFT_A_F=DFT_A.*F;
I know that I need to zero-pad the mask, but I don't know how to do it. Please, I need help.
Thanks!
What you want is called padarray. Just after you define DFT_A:
padpre = floor((size(DFT_A)-size(F))/2);   % split the padding between the two
padpost = ceil((size(DFT_A)-size(F))/2);   % sides; works for odd differences too
F = padarray(F, padpre, 0, 'pre');
F = padarray(F, padpost, 0, 'post');
DFT_A_F = DFT_A.*F;
...
But why don't you just do the following (given that A is a 2D matrix, so rgb2gray it if needed):
DFT_A_F = conv2(A,B,'same');
It is faster, because you don't need to multiply all those zeros, and it should get you the same result.
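One caveat on the frequency-domain route: multiplying DFT_A by a spatial-domain mask is not the same as filtering; the padded mask has to be transformed as well. A minimal sketch of that route, assuming A is a single-channel image and F is the original 5x5 mask:
% Pad the mask to image size, transform BOTH image and mask, multiply,
% and invert. circshift moves the mask centre to element (1,1) so the
% output is not circularly shifted.
Fpad = zeros(size(A));
Fpad(1:size(F,1), 1:size(F,2)) = F;
Fpad = circshift(Fpad, -floor(size(F)/2));
out = real(ifft2(fft2(A) .* fft2(Fpad)));
imshow(uint8(out));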
