I'd like an algorithm to organize a 2D cloud of points in front of a bar graph so that a viewer could easily see the spread of the data. The y location of the point needs to be equal/scaled/proportional to the value of the data, but the x location doesn't matter and would be determined by the algorithm. I imagine a good strategy would be to minimize overlap among the points and center the points.
Here is an example of such a plot without organizing the points:
I generate my bar graphs with points in front of them in MATLAB, but I'm interested only in the best way to choose the x-location values of the points.
I have been organizing the points by hand afterwards in Adobe Illustrator, which is time-consuming. Any recommendations? Is this a sub-problem of an already solved problem? What is this kind of plot called?
For high sample sizes, I imagine something like the following would be better than a cloud of points.
Mathematically, I think that, starting from some array of y-values, the algorithm would rearrange the order of the elements in the array so as to maximize the sum, over every pair of elements, of the difference between their values, inversely scaled by the distance between their positions in the array.
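For concreteness, here is a minimal sketch of that criterion (in Python for brevity; spread_score is a hypothetical helper, not part of my plotting code). The algorithm would then search over permutations of the array to maximize this score:

def spread_score(y, order):
    # Sum over all pairs of |y[a] - y[b]|, inversely scaled by how far apart
    # the pair ends up in the ordering. `order` is a permutation of range(len(y)).
    score = 0.0
    for p in range(len(order)):
        for q in range(p + 1, len(order)):
            score += abs(y[order[p]] - y[order[q]]) / (q - p)
    return score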
Here is the MATLAB code I used to generate the graph:
y = zeros(20,6);
yMean = zeros(1,6);
for i=1:6
    y(:,i) = 5 + (8-5).*rand(20,1);
    yMean(i) = mean(y(:,i));
end
figure
hold on
bar(yMean,0.5)
for i=1:6
    x = linspace(i-0.3,i+0.3,20);
    plot(x,y(:,i),'ro')
end
axis([0,7,0,10])
Here is one way that determines x-locations based on grouping into (histogram) bins. The result is similar to e.g. the plot in https://stackoverflow.com/a/1934882/4720018, but retains the original y-values. For convenience the points are sorted, but they could be displayed in order of appearance using the bin_index. Whether this is "the best way" of choosing the x-coordinates depends on what you are trying to achieve.
% Create some dummy data
dummy_data_y = 1+0.1*randn(10,3);
% Create bar plot (assuming you are interested in the mean)
bar_obj = bar(mean(dummy_data_y));
% Obtain data size info
n = size(dummy_data_y, 2);
% Algorithm that creates an x vector for each data column
sorted_data_y = sort(dummy_data_y, 'ascend'); % for convenience
number_of_bins = 5;
for j=1:n
    % Get histogram information
    [bin_count, ~, bin_index] = histcounts(sorted_data_y(:, j), number_of_bins);
    % Create x-location data for current column
    xj = [];
    for k = 1:number_of_bins
        xj = [xj 0:bin_count(k)-1];
    end
    % Collect x locations per column, scale and translate
    sorted_data_x(:, j) = j + (xj-(bin_count(bin_index)-1)/2)'/...
        max(bin_count)*bar_obj.BarWidth;
end
% Plot the individual data points
line(sorted_data_x, sorted_data_y, 'linestyle', 'none', 'marker', '.', 'color', 'r')
Whether this is a good way to display your data remains open to discussion.
I would like to return a list of the pixels that belong to the same region, after clicking on one of them. The input would be the chosen pixel (seed) and the output would be a list of all pixels that have the same value and belong to the same region (i.e. are not separated by any pixel of a different value).
My idea was to create an auxiliary list of seeds and check the neighbours of each of them. If the value of a neighbour is the same as that of the seed, it is appended to the region list. My Python implementation is below:
def region_growing(x, y):
    value = image[x, y]
    region = [(x, y)]
    seeds = [(x, y)]
    while seeds:
        seed = seeds.pop()
        x = seed[0]
        y = seed[1]
        # check the 8-neighbourhood of the seed
        for i in range(x-1, x+2):
            for j in range(y-1, y+2):
                if image[i, j] == value:
                    point = (i, j)
                    if point not in region:
                        seeds.append(point)
                        region.append(point)
    return region
It works, but is very slow for bigger regions. What algorithm would you suggest?
The problem is the instruction if point not in region, whose execution time increases with the size of the region; the overall complexity is thus quadratic.
Another problem is that you visit the same pixels multiple times at the boundary of the region since you only keep track of pixels in the region.
You can avoid this by using a dictionary of visited pixels with the point as key.
def region_growing(x, y):
    value = image[x, y]
    region = [(x, y)]
    seeds = [(x, y)]
    visited = {(x, y): True}
    while seeds:
        seed = seeds.pop()
        x = seed[0]
        y = seed[1]
        for i in range(x-1, x+2):
            for j in range(y-1, y+2):
                point = (i, j)
                if point in visited:
                    continue
                visited[point] = True
                if image[i, j] == value:
                    region.append(point)
                    seeds.append(point)
    return region
Another method is to use a matrix of booleans instead of the dictionary. This is faster but requires more memory space.
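For illustration, here is a minimal sketch of that variant in Python/NumPy (assuming image is a 2D NumPy array; I also added bounds checks so border pixels do not index out of range):

import numpy as np

def region_growing_mask(image, x, y):
    # Region growing with a boolean visited mask instead of a dictionary.
    value = image[x, y]
    visited = np.zeros(image.shape, dtype=bool)
    visited[x, y] = True
    region = [(x, y)]
    seeds = [(x, y)]
    while seeds:
        sx, sy = seeds.pop()
        # 8-neighbourhood, clipped to the image borders
        for i in range(max(sx - 1, 0), min(sx + 2, image.shape[0])):
            for j in range(max(sy - 1, 0), min(sy + 2, image.shape[1])):
                if not visited[i, j]:
                    visited[i, j] = True
                    if image[i, j] == value:
                        region.append((i, j))
                        seeds.append((i, j))
    return region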
I suggest using any region-fill/paint algorithm and patching it not to paint but to keep track of the pixels of the region. Smith's algorithm is known to be fast and efficient; see the Tint Fill algorithm.
Note that it is inefficient to store all the pixels individually; as the algorithm suggests, horizontal segments are sufficient (thus only two pixels per segment).
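For what it's worth, here is a rough sketch of that segment-based idea in Python/NumPy (not Smith's exact tint-fill algorithm; it records the region as horizontal runs and uses 4-connectivity between rows):

import numpy as np

def region_segments(image, x, y):
    # Collect the region as horizontal runs (row, col_start, col_end)
    # instead of individual pixels. Assumes image is a 2D NumPy array.
    h, w = image.shape
    value = image[x, y]
    visited = np.zeros((h, w), dtype=bool)
    segments = []
    stack = [(x, y)]
    while stack:
        r, c = stack.pop()
        if visited[r, c] or image[r, c] != value:
            continue
        # grow the run left and right along row r
        left = c
        while left > 0 and not visited[r, left - 1] and image[r, left - 1] == value:
            left -= 1
        right = c
        while right < w - 1 and not visited[r, right + 1] and image[r, right + 1] == value:
            right += 1
        visited[r, left:right + 1] = True
        segments.append((r, left, right))
        # queue candidate seeds in the rows just above and below the run
        for rr in (r - 1, r + 1):
            if 0 <= rr < h:
                for cc in range(left, right + 1):
                    if not visited[rr, cc] and image[rr, cc] == value:
                        stack.append((rr, cc))
    return segments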
I have a matrix named figmat from which I obtain the following pcolor plot (MATLAB version R2016b).
Basically I only want to extract the bottom red high intensity line from this plot.
I thought of extracting the maximum values from the matrix and creating some sort of mask on the main matrix, but I don't see a way to achieve this. Can it be accomplished with the help of any edge/image-detection algorithms?
I was trying to create a mask with the following code:
A=max(figmat);
figmat(figmat~=A)=0;
imagesc(figmat);
But this gives only the boundary of maximum values. I also need the entire red color band.
Okay, I assume that the red line is linear and its values can uniquely be separated from the rest of the picture. Let's generate some test data...
[x,y] = meshgrid(-5:.2:5, -5:.2:5);
n = size(x,1)*size(x,2);
z = -0.2*(y-(0.2*x+1)).^2 + 5 + randn(size(x))*0.1;
figure
surf(x,y,z);
This script generates a surface function. Its set of maximum values (x,y) can be described by a linear function y = 0.2*x+1. I added a bit of noise to it to make it a bit more realistic.
We now select all points where z is larger than, let's say, 95 % of the range above the minimum (i.e. the top 5 % of the value range); find can be used for this. Later we want to use one-dimensional data, so we reshape everything.
thresh = min(min(z)) + (max(max(z))-min(min(z)))*0.95;
mask = reshape(z > thresh,1,n);
idx = find(mask>0);
xvec = reshape(x,1,n);
yvec = reshape(y,1,n);
xvec and yvec now contain the coordinates of all values > thresh.
The last step is to fit a linear (first-order) polynomial through all those points.
pp = polyfit(xvec(idx),yvec(idx),1)
pp =
0.1946 1.0134
Obviously these are roughly the coefficients of y = 0.2*x+1 as it should be.
I do not know if this also works with your data, since I made some assumptions. The threshold level must be chosen carefully. Maybe some preprocessing must be done to detect this level dynamically if you really want to process your images automatically. There might also be a simpler way to do it, but for me this one was straightforward without the need for any toolboxes.
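If you do want to detect the level automatically, one common option is Otsu's method, which picks the threshold that best separates the intensity histogram into two classes. Here is a small NumPy sketch of it (a suggestion of mine, not part of the answer above; with a very thin band a high percentile such as np.percentile(z, 99) may be the more robust choice):

import numpy as np

def otsu_level(z, nbins=256):
    # Threshold that maximizes the between-class variance of the histogram.
    hist, edges = np.histogram(np.asarray(z, dtype=float).ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                     # number of pixels at or below each level
    w1 = w0[-1] - w0                         # number of pixels above each level
    s0 = np.cumsum(hist * centers)           # cumulative intensity sum
    mu0 = s0 / np.maximum(w0, 1)             # mean of the lower class
    mu1 = (s0[-1] - s0) / np.maximum(w1, 1)  # mean of the upper class
    between = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
    return centers[np.argmax(between)]

# usage: mask = z > otsu_level(z)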
By assuming:
There is only one band to extract.
It always has the maximum values.
It is linear.
I can adapt my previous answer to this case as well, with a few minor changes:
First, we get the distribution of the values in the matrix and look for a population in the top values that can be distinguished from the smaller values. This is done by finding the maximum value x(i) on the histogram that:
Is a local maximum (its bin is higher than that of x(i+1) and x(i-1))
Has fewer values above it than within it (the sum of the heights of bins x(i+1) to x(end) is smaller than the height of bin x(i)):
This is how it is done:
[h,x] = histcounts(figmat); % get the distribution of intensities
d = diff(fliplr(h)); % the difference in bin height from large x to small x
band_min_ind = find(cumsum(d)>size(figmat,2) & d<0, 1); % 1st bin that fits the conditions
flp_val = fliplr(x); % the values of x from large to small
band_min = flp_val(band_min_ind); % the value of x that fits the conditions
Now we continue as before: mask all the unwanted values and fit the linear line:
mA = figmat>band_min; % mask all values below the top-value population
[y1,x1] = find(mA,1); % find the first nonzero element
[y2,x2] = find(mA,1,'last'); % find the last nonzero element
m = (y1-y2)/(x1-x2); % the line slope
n = y1-m*x1; % the intercept
f_line = @(x) m.*x+n; % the line function
And if we plot it we can see the red line where the band for detection was:
Next, we can make this line thicker for a better representation of the band:
thick = max(sum(mA)); % maximum thickness of the band
tmp = (1:thick)-ceil(thick/2); % helper vector for expanding
rows = bsxfun(@plus,tmp.',floor(f_line(1:size(figmat,2)))); % all the rows for each column
rows(rows<1) = 1; % make sure not to get out of range
rows(rows>size(figmat,1)) = size(figmat,1); % make sure not to get out of range
inds = sub2ind(size(figmat),rows,repmat(1:size(figmat,2),thick,1)); % convert to linear indices
mA(inds) = true; % add the interpolation to the mask
result = figmat.*mA; % apply the mask on figmat
Finally, we can plot that result after masking, excluding the unwanted areas:
imagesc(result(any(result,2),:))
I am performing whisker-tracking experiments. I have high-speed videos (500 fps) of rats whisking against objects. In each such video I tracked the shape of the rat's snout and whiskers. Since tracking is noisy, the number of whiskers in each frame may be different (see the 2 consecutive frames in the attached image; notice the yellow false-positive whisker appearing in the left frame but not the right one).
See example 1:
As an end result of tracking I get, for each frame, a varying number of variable-length vectors, each vector corresponding to a single whisker. At this point I would like to match the whiskers between frames. I have tried using MATLAB's sample align to do this, but it works only somewhat properly. Its results are attached below (the attached image shows the basepoints of all whiskers over 227 frames).
See example 2:
I would like to run some algorithm to cluster the whiskers correctly, such that each whisker is recognized as itself and separated from the others over the course of many frames. In other words, I would like each slightly sinusoidal trajectory in the second image to be recognized as one trajectory. Whatever sorting algorithm I use should take into account that whiskers may disappear and reappear between consecutive frames. Unfortunately, I'm all out of ideas...
Any help?
Once again, keep in mind that for each point in attached image 2, I have many data points, since this is only a plot of whisker basepoint, while in actuality I have data for the entire whisker length.
This is how I would deal with the problem. Assuming that the data vectors of different sizes are stored in a cell array called dataVectors, and knowing the number of whiskers (nSignals), I would try to extend the data to a second dimension derived from the original data and then perform k-means in two dimensions.
So, first I would get the maximum size of the vectors in order to convert the data to a matrix and do NaN-padding.
maxSize = -Inf;
for k = 1:nSignals
    if length(dataVectors{k}.data) > maxSize
        maxSize = length(dataVectors{k}.data);
    end
end
Now, I would make the data 2D by raising it to the power of two (or three, your choice). This is just a very simple transformation. You could alternatively use kernel methods here and project each vector against the rest; however, I don't think this is necessary, and if your data is really big, it could be inefficient. For now, raising the data to the power of two should do the trick. The result is stored in a second dimension.
projDegree = 2;
projData = zeros(nSignals, maxSize, 2).*NaN;
for k = 1:nSignals
    vecSize = length(dataVectors{k}.data);
    projData(k, 1:vecSize, 1) = dataVectors{k}.data;
    projData(k, 1:vecSize, 2) = dataVectors{k}.data.^projDegree; % raise to the power of projDegree
end
projData = reshape(projData, [], 2);
Here, projData will have, in row 1 and column 1, the original data of the first whisker (or signal, as I call it here), and column 2 will have the new dimension. Let's suppose that you have 8 whiskers in total; then projData will have the data of the first whisker in rows 1, 9, 17, and so on, and the data of the second whisker in rows 2, 10, 18, and so forth. That is important if you want to work your way back to the original data. Also, you can try different projDegrees, but I doubt it will make a lot of difference.
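As a small illustration of that layout (a NumPy sketch of mine that mimics MATLAB's column-major reshape via order='F'; indices are 0-based here), row r of the first column of the reshaped array maps back to whisker r mod nSignals and sample r // nSignals:

import numpy as np

nSignals, maxSize = 8, 5                       # toy sizes
proj = np.arange(nSignals * maxSize).reshape(nSignals, maxSize)
flat = proj.reshape(-1, order='F')             # column-major stacking, like MATLAB's reshape
r = 10                                         # any row of the flattened data (0-based)
whisker, sample = r % nSignals, r // nSignals  # which whisker and which sample it came from
assert flat[r] == proj[whisker, sample]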
Now we perform k-means on the 2D data; however, we provide the initial points instead of letting it determine them with k-means++. The initial points, as I propose here, are the first data point of each vector, i.e. one per whisker. In this manner, k-means will depart from there and move to the cluster means accordingly. We save the results in idxK.
idxK = kmeans(projData,nSignals, 'Start', projData(1:nSignals, :));
And there you have it. The variable idxK will tell you which data point belongs to what cluster.
Below is a working example of my proposed solution. The first part is simply trying to produce data that looks like your data, you can skip it.
rng(9, 'twister')
nSignals = 8; % number of whiskers
n = 1000; % number of data points
allData = zeros(nSignals, n); % all the data will be stored here
% this loop will just generate some data that looks like yours
for k = 1:nSignals
    x = sort(rand(1,n));
    nPeriods = round(rand*9)+1; % the sin can have between 1-10 periods
    nShiftAmount = round(randn*30); % shift between ~ -100 to +100
    y = sin(x*2*pi*nPeriods) + (randn(1,n).*0.5);
    y = y + nShiftAmount;
    allData(k, :) = y;
end
nanIdx = round(rand(1, round(n*0.05)*nSignals).*((n*nSignals)-1))+1;
allData(nanIdx) = NaN; % about 5% of the data is now missing
figure(1);
for k = 1:nSignals
    nanIdx = ~isnan(allData(k, :));
    dataVectors{k}.data = allData(k, nanIdx);
    plot(dataVectors{k}.data, 'kx'), hold on;
end
% determine the max size
maxSize = -Inf;
for k = 1:nSignals
    if length(dataVectors{k}.data) > maxSize
        maxSize = length(dataVectors{k}.data);
    end
end
% making the data now into two dimensions and NaN pad
projDegree = 2;
projData = zeros(nSignals, maxSize, 2).*NaN;
for k = 1:nSignals
    vecSize = length(dataVectors{k}.data);
    projData(k, 1:vecSize, 1) = dataVectors{k}.data;
    projData(k, 1:vecSize, 2) = dataVectors{k}.data.^projDegree; % raise to the power of projDegree
end
projData = reshape(projData, [], 2);
figure(2); plot(projData(:,1), projData(:,2), 'kx');
% run k-means using the first point of each whisker as the initial points
idxK = kmeans(projData,nSignals, 'Start', projData(1:nSignals, :));
figure(3);
liColors = [{'yx'},{'mx'},{'cx'},{'bx'},{'kx'},{'gx'},{'rx'},{'gd'}];
for k = 1:nSignals
    plot(projData(idxK==k,1), projData(idxK==k,2), liColors{k}), hold on;
end
% plot results on original data
figure(4);
for k = 1:nSignals
    plot(projData(idxK==k,1), liColors{k}), hold on;
end
Let me know if this helps.
I am trying to create an Android smartphone application which uses Apple's iBeacon technology to determine its current indoor location. I already managed to get all available beacons and calculate the distance to them via the RSSI signal.
Currently I face the problem that I am not able to find any library or implementation of an algorithm which calculates the estimated location in 2D from 3 (or more) distances to fixed points, with the condition that these distances are not accurate (meaning that the three "trilateration circles" do not intersect in one point).
I would be deeply grateful if anybody could post a link to or an implementation of that in any common programming language (Java, C++, Python, PHP, JavaScript or whatever). I have already read a lot on Stack Overflow about that topic, but could not find any answer I was able to convert into code (only some mathematical approaches with matrices and inverting them, calculating with vectors or stuff like that).
EDIT
I thought of my own approach, which works quite well for me but is not that efficient or scientific: I iterate over every meter (or, as in my example, every 0.1 meter) of the location grid and calculate the likelihood of that location being the actual position of the handset by comparing, for each beacon, the distance from that location to the beacon with the distance I calculate from the received RSSI signal.
Code example:
public Location trilaterate(ArrayList<Beacon> beacons, double maxX, double maxY)
{
    for (double x = 0; x <= maxX; x += .1)
    {
        for (double y = 0; y <= maxY; y += .1)
        {
            double currentLocationProbability = 0;
            for (Beacon beacon : beacons)
            {
                // distance difference between the calculated distance to the beacon transmitter
                // (rssi-calculated distance) and the current location:
                // |sqrt(dX^2 + dY^2) - distanceToTransmitter|
                double distanceDifference = Math
                    .abs(Math.sqrt(Math.pow(beacon.getLocation().x - x, 2)
                        + Math.pow(beacon.getLocation().y - y, 2))
                        - beacon.getCurrentDistanceToTransmitter());
                // weight the distance difference with the beacon's rssi-calculated distance. The
                // smaller the calculated rssi-distance is, the more the distance difference
                // will be weighted (it is assumed that nearer beacons measure the distance
                // more accurately)
                distanceDifference /= Math.pow(beacon.getCurrentDistanceToTransmitter(), 0.9);
                // sum up all weighted distance differences for every beacon in
                // "currentLocationProbability"
                currentLocationProbability += distanceDifference;
            }
            addToLocationMap(currentLocationProbability, x, y);
            // the previous line is my approach: I create a Set of Locations with the 5 most
            // probable locations in it to estimate the accuracy of the measurement afterwards.
            // If that is not necessary, a simple variable assignment for the most probable
            // location would do the job as well.
        }
    }
    Location bestLocation = getLocationSet().first().location;
    bestLocation.accuracy = calculateLocationAccuracy();
    Log.w("TRILATERATION", "Location " + bestLocation + " best with accuracy "
        + bestLocation.accuracy);
    return bestLocation;
}
Of course, the downside is that on a 300 m² floor I have 30,000 locations to iterate over, and for each one I measure the distance to every beacon I get a signal from (if that were 5 beacons, I would be doing 150,000 calculations just to determine a single location). That's a lot, so I will leave the question open and hope for further solutions or a good improvement of this existing solution in order to make it more efficient.
Of course it does not have to be a trilateration approach, as the original title of this question suggested; an algorithm which includes more than three beacons in the location determination (multilateration) would also be welcome.
If the current approach is fine except for being too slow, then you could speed it up by recursively subdividing the plane. This works sort of like finding nearest neighbors in a kd-tree. Suppose that we are given an axis-aligned box and wish to find the approximate best solution in the box. If the box is small enough, then return the center.
Otherwise, divide the box in half, either by x or by y depending on which side is longer. For both halves, compute a bound on the solution quality as follows. Since the objective function is additive, sum lower bounds for each beacon. The lower bound for a beacon is the distance of the circle to the box, times the scaling factor. Recursively find the best solution in the child with the lower lower bound. Examine the other child only if the best solution in the first child is worse than the other child's lower bound.
Most of the implementation work here is the box-to-circle distance computation. Since the box is axis-aligned, we can use interval arithmetic to determine the precise range of distances from box points to the circle center.
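Here is a rough Python sketch of that scheme (it reuses the weighted objective from the question's code; best_in_box and eps, the box size at which recursion stops, are names and a parameter I introduce here):

import math

def box_circle_gap(box, cx, cy, r):
    # Smallest possible |dist(p, (cx, cy)) - r| over all points p in the box.
    xlo, ylo, xhi, yhi = box
    dmin = math.hypot(max(xlo - cx, 0.0, cx - xhi), max(ylo - cy, 0.0, cy - yhi))
    dmax = math.hypot(max(cx - xlo, xhi - cx), max(cy - ylo, yhi - cy))
    if r < dmin:
        return dmin - r
    if r > dmax:
        return r - dmax
    return 0.0  # the circle passes through the box

def cost(x, y, beacons):
    # Same weighted sum of |distance - rssi_distance| as in the question's loop.
    return sum(abs(math.hypot(bx - x, by - y) - r) / r ** 0.9 for bx, by, r in beacons)

def best_in_box(box, beacons, eps=0.05, best=(math.inf, None)):
    xlo, ylo, xhi, yhi = box
    cx, cy = (xlo + xhi) / 2.0, (ylo + yhi) / 2.0
    c = cost(cx, cy, beacons)  # the box centre may improve the incumbent
    if c < best[0]:
        best = (c, (cx, cy))
    if max(xhi - xlo, yhi - ylo) <= eps:
        return best
    if xhi - xlo >= yhi - ylo:  # split along the longer side
        children = [(xlo, ylo, cx, yhi), (cx, ylo, xhi, yhi)]
    else:
        children = [(xlo, ylo, xhi, cy), (xlo, cy, xhi, yhi)]
    # Lower bound per child: the objective is additive, so sum the per-beacon gaps.
    bounded = sorted((sum(box_circle_gap(ch, bx, by, r) / r ** 0.9 for bx, by, r in beacons), ch)
                     for ch in children)
    for lb, ch in bounded:
        if lb < best[0]:  # prune children that cannot beat the incumbent
            best = best_in_box(ch, beacons, eps, best)
    return best

# e.g. best_in_box((0.0, 0.0, 20.0, 10.0), [(0.0, 0.0, 7.0), (0.0, 10.0, 7.0), (10.0, 5.0, 16.0)])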
P.S.: Math.hypot is a nice function for computing 2D Euclidean distances.
Instead of taking confidence levels of individual beacons into account, I would try to assign an overall confidence level to your result after you make the best guess you can with the available data. I don't think the only available metric (perceived power) is a good indication of accuracy. With poor geometry or a misbehaving beacon, you could be trusting poor data highly. It might make better sense to come up with an overall confidence level based on how well the perceived distances to the beacons line up with the calculated point, assuming you trust all beacons equally.
I wrote some Python below that comes up with a best guess based on the provided data in the 3-beacon case by calculating the two points of intersection of the circles for the first two beacons and then choosing the point that best matches the third. It's meant to get you started on the problem and is not a final solution. If the beacons' circles don't intersect, it slightly increases the radius of each until they do meet or a limit is reached. Likewise, it makes sure the third beacon agrees within a settable threshold. For n beacons, I would pick the 3 or 4 strongest signals and use those. There are tons of optimizations that could be done, and I think this is a trial-by-fire problem due to the unwieldy nature of beaconing.
import math

beacons = [[0.0,0.0,7.0],[0.0,10.0,7.0],[10.0,5.0,16.0]] # x, y, radius

def point_dist(x1,y1,x2,y2):
    x = x2-x1
    y = y2-y1
    return math.sqrt((x*x)+(y*y))

# determines two points of intersection for two circles [x,y,radius]
# returns None if the circles do not intersect
def circle_intersection(beacon1,beacon2):
    r1 = beacon1[2]
    r2 = beacon2[2]
    dist = point_dist(beacon1[0],beacon1[1],beacon2[0],beacon2[1])
    heron_root = (dist+r1+r2)*(-dist+r1+r2)*(dist-r1+r2)*(dist+r1-r2)
    if ( heron_root > 0 ):
        heron = 0.25*math.sqrt(heron_root)
        xbase = (0.5)*(beacon1[0]+beacon2[0]) + (0.5)*(beacon2[0]-beacon1[0])*(r1*r1-r2*r2)/(dist*dist)
        xdiff = 2*(beacon2[1]-beacon1[1])*heron/(dist*dist)
        ybase = (0.5)*(beacon1[1]+beacon2[1]) + (0.5)*(beacon2[1]-beacon1[1])*(r1*r1-r2*r2)/(dist*dist)
        ydiff = 2*(beacon2[0]-beacon1[0])*heron/(dist*dist)
        return (xbase+xdiff,ybase-ydiff),(xbase-xdiff,ybase+ydiff)
    else:
        # no intersection, need to pseudo-increase beacon power and try again
        return None

# find the two points of intersection between beacon0 and beacon1
# will use beacon2 to determine the better of the two points
failing = True
power_increases = 0
while failing and power_increases < 10:
    res = circle_intersection(beacons[0],beacons[1])
    if ( res ):
        intersection = res
    else:
        beacons[0][2] *= 1.001
        beacons[1][2] *= 1.001
        power_increases += 1
        continue
    failing = False

# make sure the best fit is within x% (10% of the total distance from the 3rd beacon in this case)
# otherwise the results are too far off
THRESHOLD = 0.1
if failing:
    print('Bad Beacon Data (Beacon0 & Beacon1 don\'t intersect after many "power increases")')
else:
    # use the third beacon to pick the better of the two intersection points
    dist1 = point_dist(beacons[2][0],beacons[2][1],intersection[0][0],intersection[0][1])
    dist2 = point_dist(beacons[2][0],beacons[2][1],intersection[1][0],intersection[1][1])
    if ( math.fabs(dist1-beacons[2][2]) < math.fabs(dist2-beacons[2][2]) ):
        best_point = intersection[0]
        best_dist = dist1
    else:
        best_point = intersection[1]
        best_dist = dist2
    best_dist_diff = math.fabs(best_dist-beacons[2][2])
    if best_dist_diff < THRESHOLD*best_dist:
        print(best_point)
    else:
        print('Bad Beacon Data (Beacon2 distance to best point not within threshold)')
If you want to trust closer beacons more, you may want to calculate the intersection points between the two closest beacons and then use the farther beacon to tie-break. Keep in mind that almost anything you do with "confidence levels" for the individual measurements will be a hack at best. Since you will always be working with very bad data, you will definitely need to loosen up the power_increases limit and the threshold percentage.
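For example, a minimal way to apply that ordering to the script above (assuming the estimated radius is a reasonable proxy for how much you trust a beacon) is to sort the beacon list before computing the intersections:

# make beacons[0] and beacons[1] the two closest beacons (smallest radius),
# so the farthest one, beacons[2], only serves as the tie-breaker
beacons.sort(key=lambda b: b[2])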
You have 3 points: A(xA,yA,zA), B(xB,yB,zB) and C(xC,yC,zC), which are respectively at approximately dA, dB and dC from your goal point G(xG,yG,zG).
Let's say cA, cB and cC are the confidence rates (0 < cX <= 1) of each point.
Basically, you might take something really close to 1, like {0.95,0.97,0.99}.
If you don't know, try different coefficients depending on the average distance. If a distance is really big, you're likely not to be very confident about it.
Here is the way I'd do it:
var sum = (cA*dA) + (cB*dB) + (cC*dC);
dA = cA*dA/sum;
dB = cB*dB/sum;
dC = cC*dC/sum;
xG = (xA*dA) + (xB*dB) + (xC*dC);
yG = (yA*dA) + (yB*dB) + (yC*dC);
zG = (zA*dA) + (zB*dB) + (zC*dC);
Basic, and not really smart but will do the job for some simple tasks.
EDIT
You can take any confidence coefficient you want in [0, inf), but IMHO restricting it to [0, 1] is a good idea to keep the result realistic.
I've been performing a 2D mode filter on an RGB image by running medfilt2 independently on the R,G and B channels. However, splitting the RGB channels like this gives artifacts in the colouring. Is there a way to perform the 2D median filter while keeping RGB values 'together'?
Or, I could explain this more abstractly: Imagine I had a 2D matrix, where each value contained a pair of index coordinates (i.e. a cell matrix of 2X1 vectors). How would I go about performing a median filter on this?
Here's how I can do an independent mode filter (giving the artifacts):
r = colfilt(r0,[5 5],'sliding',@mode);
g = colfilt(g0,[5 5],'sliding',@mode);
b = colfilt(b0,[5 5],'sliding',@mode);
However colfilt won't work on a cell matrix.
Another approach could be to somehow combine my RGB channels into a single number and thus create a standard 2D matrix. Not sure how to implement this, though...
Any ideas?
Thanks for your help.
Cheers,
Hugh
EDIT:
OK, so problem solved. Here's how I did it.
I adapted my question so that I'm no longer dealing with (RGB) vectors, but (UV) vectors. Still essentially the same problem, except that my vectors are 2D not 3D.
So first I load the individual U and V channels and arrange each into a 1D list, then combine them so that I essentially have a list of vectors. Then I reduce that list to just the unique vectors. Next, I assign each pixel in my matrix the index of its unique vector. After this I can do the mode filter. Then I basically do the reverse: I go through the filtered image pixel by pixel, read the value at each pixel (i.e. an index into my list), find the unique vector associated with that index and insert it at that pixel.
% Create index list
img_u = img_iuv(:,:,2);
img_v = img_iuv(:,:,3);
coordlist = unique(cat(2,img_u(:),img_v(:)),'rows');
% Create a 2D matrix of indices
img_idx = zeros(size(img_iuv,1),size(img_iuv,2));
% X and Y are coordinate vectors from the workspace (not defined in this snippet)
for y = 1:length(Y)
    for x = 1:length(X)
        coords = squeeze(img_iuv(x,y,2:3))';
        [~,idx] = ismember(coords,coordlist,'rows');
        img_idx(x,y) = idx;
    end
end
% Apply the mode filter (n is the filter window size)
img_idx = colfilt(img_idx,[n,n],'sliding',@mode);
% Re-construct the original image using the filtered data
for y = 1:length(Y)
    for x = 1:length(X)
        idx = img_idx(x,y);
        try
            coords = coordlist(idx,:);
        end % if idx is out of range, coords keeps its previous value
        img_iuv(x,y,2:3) = coords(:);
    end
end
Not pretty but it gets the job done. I suppose this approach would also work for RGB images, or other similar situations.
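For what it's worth, here is a rough Python/NumPy sketch of the same index trick applied directly to an RGB image (generic_filter and the bincount-based mode are my own choices here, not part of the code above):

import numpy as np
from scipy import ndimage

def mode_filter_rgb(rgb, k=5):
    # Map every unique colour to an integer label, mode-filter the label image,
    # then map the filtered labels back to colours. rgb is an HxWx3 array.
    h, w, _ = rgb.shape
    colours, labels = np.unique(rgb.reshape(-1, 3), axis=0, return_inverse=True)
    label_img = labels.reshape(h, w)

    def window_mode(values):
        # most frequent label in the k-by-k window (ties go to the smallest label)
        return np.bincount(values.astype(int)).argmax()

    filtered = ndimage.generic_filter(label_img, window_mode, size=k, mode='nearest')
    return colours[filtered]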
Cheers,
Hugh
I don't see how you can define the median of a vector variable. You probably need to reduce the R, G, B components to a single value and then compute the median on that value. Why not use the intensity level as that single value? You could do it easily with rgb2gray.
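One way to read that suggestion while still returning full RGB pixels (a sketch; the gray weights below are the ones rgb2gray uses, and keeping the RGB triple of the window's gray-median pixel is my own interpretation, not something the answer spells out):

import numpy as np

def median_by_intensity(rgb, k=5):
    # For each k-by-k window, keep the full RGB triple of the pixel whose gray
    # intensity is the window median. rgb is an HxWx3 array, k is odd.
    h, w, _ = rgb.shape
    gray = rgb @ np.array([0.2989, 0.5870, 0.1140])  # same weighting rgb2gray uses
    pad = k // 2
    gpad = np.pad(gray, pad, mode='edge')
    cpad = np.pad(rgb, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    out = np.empty_like(rgb)
    for i in range(h):
        for j in range(w):
            gwin = gpad[i:i + k, j:j + k].ravel()
            cwin = cpad[i:i + k, j:j + k].reshape(-1, 3)
            m = np.argsort(gwin)[gwin.size // 2]     # index of the (lower) median intensity
            out[i, j] = cwin[m]
    return out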