I'm trying to implement a method from a paper, but I got stuck at a part called the elevation filter.
Here is the relevant part of the article:
Does anyone know how to write it in MATLAB?
What you are asking is closely related to what image processing calls the watershed transform (see the Wikipedia article on it).
In the watershed approach, a grayscale image is viewed as a topographic relief that is gradually flooded with water. As the water rises, distinct catchment basins form and eventually merge, and the image is separated into regions according to where those basins meet.
If the watershed itself is your final aim, the Image Processing Toolbox has an implementation: watershed.
In principle, in your problem, given a local minimum q, the height at a nearby point p solves the minimization problem
height(p) = inf_g ∫_g ||∇I(g)|| dg
where g ranges over curves joining p and q, and I is your image.
For more mathematical details, you can consider, for instance, this paper.
For implementation details, MATLAB implementations (often as MEX code) are available.
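To make the minimization concrete, here is a small discrete sketch in Python. On a pixel grid, the curve integral becomes a shortest-path problem whose edge weights are the intensity differences between neighbouring pixels, which Dijkstra's algorithm solves; the toy image I and the local minimum q below are invented purely for illustration.

```python
import heapq
import numpy as np

# Toy image I and its local minimum q (both invented for illustration).
I = np.array([[5, 3, 1, 2, 6],
              [6, 4, 2, 3, 7]], dtype=float)
q = (0, 2)

# height(p) = min over 4-connected paths from q to p of the summed
# intensity differences along the path (a discrete ∫ ||grad I||).
height = np.full(I.shape, np.inf)
height[q] = 0.0
heap = [(0.0, q)]
while heap:
    d, (r, c) = heapq.heappop(heap)
    if d > height[r, c]:
        continue  # stale heap entry
    for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
        if 0 <= rr < I.shape[0] and 0 <= cc < I.shape[1]:
            nd = d + abs(I[rr, cc] - I[r, c])
            if nd < height[rr, cc]:
                height[rr, cc] = nd
                heapq.heappush(heap, (nd, (rr, cc)))
```

On a real image you would run this from every local minimum (or, far more efficiently, use a proper watershed implementation).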
You can make use of the Image Processing Toolbox function watershed to compute your elevation. I'll start with a simple example: a 300-by-300 matrix representing your height data, shown in the first row of the figure below:
height = repmat(uint8([1:100 99:-1:50 51:1:150 149:-1:100]), 300, 1);
Each row has the same profile in this case. We can compute a basin matrix using watershed:
basin = watershed(height)+1;
This is shown in the second row of the figure below. Note that there are crests that are assigned a default value of 1 because they fall on the edges of the basins. You'll have to decide for yourself how you want to deal with these, as here I end up just lumping all the edges together into a pseudo-basin.
Next, we can use accumarray to compute a minima matrix (shown in the third row of the figure below) that maps the minimum value of each basin to all the points in that basin:
minValues = accumarray(basin(:), height(:), [], @min);
minima = minValues(basin);
And finally, the elevation can be computed like so (result shown in last row of figure below):
elevation = height - minima;
Figure:
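For readers working outside MATLAB, the accumarray/indexing trick above has a direct NumPy analogue. This is a toy sketch: the tiny height profile and the hand-made basin labelling (standing in for the output of watershed) are invented for illustration.

```python
import numpy as np

# Toy height profile and a hand-made basin labelling (a stand-in for
# the output of a watershed); both are invented for illustration.
height = np.array([3, 1, 2, 4, 2, 0, 1, 3])
basin = np.array([1, 1, 1, 1, 2, 2, 2, 2])  # two catchment basins

# Equivalent of accumarray(basin(:), height(:), [], @min):
min_per_basin = np.full(basin.max() + 1, np.iinfo(height.dtype).max)
np.minimum.at(min_per_basin, basin, height)

# Equivalent of minValues(basin): broadcast each basin's minimum back
minima = min_per_basin[basin]
elevation = height - minima
```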
I am trying to find the algorithm, or even the general idea, behind some of the effects used in Photoshop. Specifically, the palette knife effect that simplifies the colors of an image. For instance, the image below:
turns into something like this:
I want each group of similarly colored pixels to turn into a simple block of one or two colors (in real time), as happens in Photoshop. Any idea of a method to do this is appreciated.
Following tucuxi's suggestion, I could run a clustering algorithm such as k-means to pick K main colors for each image (each frame in the video) and then change each pixel's color to the closest of the K representatives. I am going to put the code here, and I appreciate any suggestions for improving it.
Since you want to choose representative colors, you can proceed as follows:
choose K colors from among N total present in the image
for each pixel in the image, replace it with its nearest color within the K chosen
To achieve step 1, you can run a k-means clustering over the actual color space. In a WxH image, you have WxH pixels, each with a color. You choose K random colors to act as centroids, assign the closest pixels to each, and after a while you end up with K different colors that more or less represent the most important colors of the image (in the sense of being not too far from all the others). Note that this is only one possible clustering algorithm; a lot of literature exists on alternatives and their relative merits.
Step 2 is comparatively much easier: for each original pixel, calculate the distance to each of the K chosen colors, and replace it with the closest one.
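As a sketch of the two steps, here is a minimal pure-NumPy k-means color quantizer. The image is random data standing in for a real photo, and K, the iteration count, and the seeding are arbitrary illustrative choices.

```python
import numpy as np

# Random data standing in for a real H x W x 3 photo (illustrative).
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(60, 80, 3)).astype(np.float64)

pixels = image.reshape(-1, 3)
K = 4  # number of representative colors (arbitrary choice)
centroids = pixels[rng.choice(len(pixels), K, replace=False)]

for _ in range(10):  # a few Lloyd iterations
    # step 2: assign each pixel to its nearest representative color
    dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # step 1 refinement: move each centroid to the mean of its pixels
    for k in range(K):
        if np.any(labels == k):
            centroids[k] = pixels[labels == k].mean(axis=0)

# replace every pixel by its cluster's representative color
quantized = centroids[labels].reshape(image.shape).astype(np.uint8)
```

For real-time video you would likely reuse the previous frame's centroids as the starting point, so each frame needs only one or two iterations.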
Please help me.
I'm looking for a simple algorithm whose only input is a single image. The output should be a depth map of the image, with pixel colors according to whether they are near or far from the camera.
I am looking for a simple solution without machine learning, 3D models, stereoscopic input, or user assistance. Only a single image.
Thank you.
What you are asking is, in general, an ill-posed problem.
However, recent work with deep networks has shown that a depth map can be predicted from a single image.
Here's one such paper: Depth Map Prediction from a Single Image using a Multi-Scale Deep Network.
From the abstract:
Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.
I'm working on a project that involves detecting red blood cells (RBCs) in blood. RBCs are never perfectly circular (they are usually almost elliptical) and they often overlap.
I've searched and found a number of algorithms, but most work only for circles. In my case, however, the method needs to work for blood from patients with sickle cell disease, where the RBCs are elongated or sickle-shaped. For reference, here is an example source image.
Can you suggest an algorithm or approach to solve this problem?
Any help would be greatly appreciated.
As mentioned in the comments, this question is really too broad to answer completely. However, I can give you some pointers in how to address this.
For starters, get yourself the MATLAB Image Processing toolbox.
"Identify red blood cells" is a deceptively simple-sounding task. The first step with any project like this is to figure out what exactly you want to achieve, then start breaking it down into steps of how you will achieve that. Finally, there is the experimental-developmental stage where you try and implement your plan (realise what is wrong with it, then try again).
Cell counting normally uses circularity to identify cells, but that's not possible here because you state that you want to identify sickle cells. The other main characteristics distinguishing RBCs from other cells are color and size. Color is the more absolute of the two, so start with that, then think about size. This is a good tutorial on the process of identifying cells; although it is in Python, the principle is the same.
So we have:
Apply a filter to your image, either isolating the red channel (RGB) or something more complex. Make it monochrome (we don't need color data).
Smooth the image (e.g. with a Gaussian filter) to reduce noise and artefacts.
Find regional maxima, which are (hopefully!) near the centers of the cells.
Label the regional maxima (this should give you the number of cells).
Apply a watershed to find the whole cells and measure their size.
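The steps above could be sketched with SciPy alone, roughly as follows; the random array standing in for a blood-smear photo, the filter sizes, and the inversion used to feed the watershed are all illustrative assumptions, not tuned values.

```python
import numpy as np
from scipy import ndimage

# Random data standing in for an RGB blood-smear photo (illustrative).
rng = np.random.default_rng(1)
rgb = rng.random((64, 64, 3))

red = rgb[:, :, 0]                        # 1. isolate the red channel
smooth = ndimage.gaussian_filter(red, 3)  # 2. suppress noise
# 3. regional maxima: pixels equal to the max of their neighbourhood
peaks = smooth == ndimage.maximum_filter(smooth, size=9)
# 4. label the maxima; num is a (rough) cell count
markers, num = ndimage.label(peaks)
# 5. watershed from the markers (on the inverted image) to recover
#    whole cells, then measure their sizes in pixels
inverted = (255 * (1 - smooth)).astype(np.uint8)
cells = ndimage.watershed_ift(inverted, markers)
sizes = ndimage.sum(np.ones_like(cells), labels=cells,
                    index=np.arange(1, num + 1))
```

On real data you would add thresholding between steps 2 and 3 so that background maxima don't become spurious markers.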
Hopefully that is enough to get you started!
I want to find the position of a point within a grid after the grid's shape has changed.
For example:
In the image below I have a grid with corners (10,10), (20,10), (10,20), (20,20).
There is a point (x) within the grid, at position (17,13).
Now I change the corners of the grid, so its shape changes.
The previous grid shape changes like this:
After the change, the grid corners are (8,8), (18,12), (12,18), (22,19).
What will the position of the point (x) be now?
Can anyone explain a way to find the solution, or an algorithm to find the current position of the point?
Thanks in advance.
A basic idea:
Draw a line from one corner through the point. Record the point on the side it passes through.
Do the same from a neighbouring corner.
For the transformed quadrilateral, draw lines between the same corners and the corresponding points on the sides.
Where the lines cross is where the point belongs.
A few notes:
By definition, a line extends infinitely.
You need to use neighbouring corners (as mentioned). If you use opposing corners and the point lies on the line between them, you won't be able to narrow it down beyond that line. If the point can lie on one of the sides, neighbouring corners have the same problem; in that case you'll need a third corner.
This works because two lines have exactly one crossing point unless they are parallel, and two lines containing the same point are either equal or non-parallel. If we add another corner, the square shape guarantees its line can't be parallel to the other two.
Another special case arises if three corners can end up on the same line; then you'll need all four corners. If all four corners can end up on the same line this won't work at all, but in that case the resulting shape is just a line segment.
You can also use distances, just remember to use ratios instead of actual distances due to the shape distortion.
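Here is one way the construction could be sketched in Python: shoot a line from each of two neighbouring corners through the point, record where it exits the quad (as a ratio along that side), redraw those lines in the deformed quad, and intersect them. The cyclic corner ordering, tolerances, and helper names are my own choices; the coordinates are the ones from the question.

```python
def solve(d1, d2, r):
    """Solve u*d1 - t*d2 = r for (u, t); returns None if parallel."""
    den = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(den) < 1e-12:
        return None
    u = (r[0] * d2[1] - r[1] * d2[0]) / den
    t = (r[0] * d1[1] - r[1] * d1[0]) / den
    return u, t

def exit_point(quad, i, p):
    """Side index and ratio where the ray corner_i -> p leaves the quad."""
    c = quad[i]
    d = (p[0] - c[0], p[1] - c[1])
    for j in range(4):
        if j == i or (j + 1) % 4 == i:
            continue  # skip the two sides touching corner i
        s0, s1 = quad[j], quad[(j + 1) % 4]
        e = (s1[0] - s0[0], s1[1] - s0[1])
        res = solve(d, e, (s0[0] - c[0], s0[1] - c[1]))
        if res and res[0] > 1e-9 and -1e-9 <= res[1] <= 1 + 1e-9:
            return j, res[1]
    raise ValueError("ray does not exit the quad")

def map_point(old, new, p):
    lines = []
    for i in (0, 1):  # two neighbouring corners
        j, t = exit_point(old, i, p)
        s0, s1 = new[j], new[(j + 1) % 4]
        q = (s0[0] + t * (s1[0] - s0[0]), s0[1] + t * (s1[1] - s0[1]))
        lines.append((new[i], q))  # corner and its side point
    (a, qa), (b, qb) = lines
    da = (qa[0] - a[0], qa[1] - a[1])
    db = (qb[0] - b[0], qb[1] - b[1])
    u, _ = solve(da, db, (b[0] - a[0], b[1] - a[1]))
    return (a[0] + u * da[0], a[1] + u * da[1])

old = [(10, 10), (20, 10), (20, 20), (10, 20)]  # cyclic order
new = [(8, 8), (18, 12), (22, 19), (12, 18)]
print(map_point(old, new, (17, 13)))  # ≈ (16.76, 13.24)
```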
Can you help me find any information on how to detect the horizon in an image?
It should not be based on a genetic algorithm or a neural network.
I just found this question interesting, so I searched the internet for you and came up with the following papers/links; the first is perhaps the most interesting, as it provides a concrete algorithm.
Towards Flight Autonomy: Vision-Based Horizon Detection for Micro Air Vehicles (PDF at citeseer)
Attitude Estimation for a Fixed-Wing Aircraft Using Horizon Detection and Optical Flow (PDF)
By following the citations in these papers, you will find more resources on research in this field.
I'm not sure whether this works, but my first approach would be to detect the most prominent line using a Hough transform, with these properties:
the line should extend up to the image boundaries;
the line divides the image into two regions such that in one of them the standard deviation of color is small.
The following procedure will detect the horizon:
Convert the RGB image to grayscale.
Find the edges in the image using a Canny edge detector. Adjust the sigma of the Gaussian filter.
Apply a Hough transform to the edge image.
Select the line segment with the highest value of J (Equation 5 in Towards Flight Autonomy: Vision-Based Horizon Detection for Micro Air Vehicles).
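As an illustration of the Hough step, here is a minimal Hough transform in plain NumPy; the synthetic edge image (a single horizontal edge standing in for Canny output) and the accumulator resolution are illustrative assumptions.

```python
import numpy as np

# Synthetic "edge image": a single horizontal edge at row 40 stands in
# for the output of a Canny detector (illustrative data).
edges = np.zeros((100, 100), dtype=bool)
edges[40, :] = True

thetas = np.deg2rad(np.arange(0, 180))  # candidate line orientations
diag = int(np.hypot(*edges.shape))
rhos = np.arange(-diag, diag)           # signed distances from origin
acc = np.zeros((len(rhos), len(thetas)), dtype=int)

# vote: every edge pixel supports all lines passing through it
ys, xs = np.nonzero(edges)
for x, y in zip(xs, ys):
    r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
    acc[r + diag, np.arange(len(thetas))] += 1

# the most-voted (rho, theta) cell is the candidate horizon line
i, j = np.unravel_index(acc.argmax(), acc.shape)
rho, theta = rhos[i], thetas[j]  # here: the horizontal line y = 40
```

To apply the second property from the answer above, you would re-rank the top few accumulator peaks by the color standard deviation of the regions each line creates, instead of taking the single highest vote.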