I regularly record data at 30 Hz for one hour at a time (over 100,000 frames). I am recording an object that moves around minimally (less than a hundred pixels). Each frame is stored as a .tiff file.
I'm trying to efficiently generate a "stabilization matrix." For example, if the image has shifted two pixels to the left and one pixel up, then I want to generate a matrix with a row as:
[2, 1]
to signify that the image needs to be shifted 2 units along the x-axis and 1 unit along the y-axis. Each row in the matrix would represent the necessary "shift" in this way.
Is this possible? I am open to using any language or platform, and I also have access to a cluster at my university. I've inherited code written in MATLAB, but it takes about 12 hours to run. I'm hoping to find a more efficient solution. Any pointers in the right direction would be greatly appreciated.
Thanks in advance.
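One common approach is phase correlation: register each frame against a reference frame and record the resulting (dx, dy) as one row of the stabilization matrix. Below is a minimal sketch using OpenCV's cv::phaseCorrelate (the filename pattern is an assumption; check the sign convention for your data). Since every frame is registered independently, the loop is also trivially parallel across a cluster.

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    // Reference frame; phaseCorrelate needs single-channel float input.
    cv::Mat ref = cv::imread("frame000000.tiff", cv::IMREAD_GRAYSCALE);
    ref.convertTo(ref, CV_32F);

    for (int i = 1; ; i++)
    {
        char name[64];
        std::snprintf(name, sizeof(name), "frame%06d.tiff", i);
        cv::Mat frame = cv::imread(name, cv::IMREAD_GRAYSCALE);
        if (frame.empty()) break;               // ran out of frames
        frame.convertTo(frame, CV_32F);

        // Sub-pixel (dx, dy) shift of this frame relative to the reference.
        cv::Point2d shift = cv::phaseCorrelate(ref, frame);

        // One row of the stabilization matrix: shift back by the negative.
        std::printf("%.3f %.3f\n", -shift.x, -shift.y);
    }
    return 0;
}

Phase correlation is FFT-based, so each frame should take only milliseconds at typical sizes; an hour of frames should finish in minutes rather than hours.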
I'm working on a visual data logger for my DMM, which writes every measurement to its RS232 interface. I connect a Teensy 3.6 there and collect the data points.
For each point I have the timestamp and the measured value. I will collect 10,000 readings.
I want to display the measured data on a display (800x480) in two ways. First, as a rolling graph that scrolls from right to left and shows the last minute or so. This is working fine.
Second, I want to display all collected measurements in total (max. 10k points). So I have to shrink or compress the data, but I want to preserve the shape of the curve.
To give you an idea of how it should look, please watch Dave's video on EEVblog at YouTube (https://youtu.be/SObqPuUozNo) and skip to 41:20. There you can see how another DMM compresses the incoming data and displays it. At about 1:01:05, 10k measurements are shown in a display area only 400 px wide.
Question is, how is this done?
I've heard about the Douglas-Peucker algorithm, but have no idea whether it is the right approach or how to use it on the Arduino/Teensy platform.
Any help is very welcome, thank you....
I cannot just display all the data points, because I'm using an FT81x as the display controller, and it can only take up to 2000 drawing commands per frame; even that many takes too long.
Anyway, I solved the problem the simple way.
I create bins and calculate the min and max value in each bin, then simply draw a vertical line between these two points. Works fine!
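For reference, a minimal sketch of that min/max binning (PLOT_W is an assumption, and drawLine/scaleY stand in for whatever your FT81x library actually provides):

#include <stdint.h>

#define PLOT_W      400      // pixels available for the full-history graph
#define MAX_SAMPLES 10000

float    samples[MAX_SAMPLES];
uint16_t sampleCount;

void drawFullHistory(void)
{
    if (sampleCount == 0) return;
    for (uint16_t x = 0; x < PLOT_W; x++)
    {
        // Range of samples falling into this pixel's bin.
        uint32_t from = (uint32_t)x * sampleCount / PLOT_W;
        uint32_t to   = (uint32_t)(x + 1) * sampleCount / PLOT_W;
        if (to <= from) to = from + 1;            // never an empty bin

        float lo = samples[from], hi = samples[from];
        for (uint32_t i = from + 1; i < to && i < sampleCount; i++)
        {
            if (samples[i] < lo) lo = samples[i];
            if (samples[i] > hi) hi = samples[i];
        }
        // One vertical line per pixel keeps the command count at PLOT_W.
        drawLine(x, scaleY(lo), x, scaleY(hi));
    }
}

This stays well under the FT81x's 2000-command budget no matter how many samples arrive.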
BTW, I'm the OP :-)
For cases where you have many more samples than pixels on the x-axis, use a vertical-lines graph instead of a LineTo-style graph...
So depending on the number of samples per rendered time frame and the x resolution, you should compute ymin and ymax for each x and render a vertical line ...
something like:
int xs = 800;                                 // graph x resolution
int i = sample_left;
int x0 = 0, x, y;                             // current column and work vars
int y0 = sample[i], y1 = sample[i];           // column min/max so far
for (; i < sample_right; i++)
{
    x = (i - sample_left) * xs / (sample_right - sample_left);
    y = sample[i];                            // here add y scaling and offset
    if (x0 != x) { line(x0, y0, x0, y1); x0 = x; y0 = y; y1 = y; }
    if (y0 > y) y0 = y;
    if (y1 < y) y1 = y;
}
line(x0, y0, x0, y1);                         // draw the last column too
where sample[] holds your stored values, sample_left and sample_right are the range to render, and xs is the graph's x resolution. To speed things up you can pre-compute y0, y1 for each x and render that (recomputing only when the range or the samples change) ... As you can see, you use just xs line commands, which should be fast enough. The x linear interpolation can be done without multiplication or division if you rewrite it in integer DDA style ...
These QAs might interest you:
plotting real time data on a (Qwt) oscilloscope
I don't really understand FFT and sample rates
I am using Qt 4.8.6 to display multiple radar videos.
For now I am getting about 4096 azimuths (a full 360° sweep) per video every 2.5 seconds.
I display my image using a class inherited from QGraphicsObject (see here), using one of the RGB channels for each video.
Per azimuth I get the angle and an array of 8192 range bins, and my image has a size of 1024x1024 pixels. For every pixel (going through every x-coordinate and checking the max and min y-coordinate for every azimuth and pixel coordinate), I now check which range bins are present at that pixel and write the largest value into my image array.
My problems
Calculating each azimuth takes about 1 ms, which is way too slow. (I get two azimuths roughly every 600 microseconds, and later there may be even more video channels.)
I want to zoom and move my image, and so far I have thought of two methods to do that:
Using a full-size image array and zooming/moving the QGraphicsScene directly ("virtually")
That would make the array 16384x16384x4 bytes, which is way too big (I cannot allocate enough memory)
Saving multiple images for different scale factors and offsets; but for that my transformation algorithm (which is already slow) would have to run multiple times, and the new zoom and offset would only show up after the full 2.5 seconds
Can you think of any better methods to do that?
Are there any standard rules for how to check my algorithm for better performance?
I know this is a very specific question, but since my mentor is not at work for the next few days, I will give it a try here.
Thank you!
I'm not sure why you are using a QGraphicsScene for this scenario. Have you considered turning your data into a raster image, and presenting the data as a bitmap?
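A minimal sketch of the raster idea (Qt 4 style; the widget layout and update path are assumptions, not your existing class): write azimuths straight into a 1024x1024 QImage and let paintEvent scale it, so zoom and pan never need a 16384x16384 buffer.

#include <QImage>
#include <QPainter>
#include <QWidget>

class RadarView : public QWidget
{
public:
    explicit RadarView(QWidget *parent = 0)
        : QWidget(parent), m_image(1024, 1024, QImage::Format_RGB32)
    {
        m_image.fill(0);
    }

    // Write one pixel of one azimuth directly into the raster image.
    void setPixel(int x, int y, QRgb value)
    {
        reinterpret_cast<QRgb *>(m_image.scanLine(y))[x] = value;
    }

protected:
    void paintEvent(QPaintEvent *)
    {
        QPainter p(this);
        // Scaling here gives cheap zoom/pan; for zooming, pass a source
        // QRect to drawImage so only the visible region is scaled up.
        p.drawImage(rect(), m_image);
    }

private:
    QImage m_image;
};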
I'm building a photographic film scanner. The electronic hardware is done; now I have to finish the mechanical advance mechanism, and then I'm almost done.
I'm using a line-scan sensor, so it's one pixel wide by 2000 pixels high. The data stream I will be sending to the PC over USB, through an FTDI FIFO bridge, will be just 1-byte pixel values. The scanner will pull through an entire strip of 36 frames, so I will end up scanning the whole strip. To begin with I'm willing to split the frames up manually in Photoshop, but I would like my program to do this for me. I'm using C++ in VS. So basically I need a way for the PC to detect the near-black strips between the images on the film, isolate the images, and save them as individual files.
Could someone give me some advice for this?
That sounds pretty simple compared to the things you've already implemented; you could
calculate an average pixel value per row, and call the resulting signal s(n) (n being the row number).
set a threshold for s(n), setting everything below that threshold to 0 and everything above to 1
Assuming you don't know the exact pixel height of the black bars and the negatives, search for periodicities in s(n). What I describe in the following is total overkill, but that's how I roll:
use FFTW to calculate a discrete Fourier transform of s(n); call it S(f) (f being the frequency, i.e. 1/period).
find argmax(abs(S(f))); that f corresponds to the bar spacing: number of rows / f is the distance between two black bars.
S(f) is complex, and thus has a phase; arctan(imag(S(f_max))/real(S(f_max))) / (2π) * (number of rows / f_max) will give you the offset of the bars (up to sign convention).
To calculate the width of the bars, you could do the same with the second highest peak of abs(S(f)), but it'll probably be easier to just count the average run of zeros around the calculated center positions of the black bars.
To get the exact width of the image strip, only take the pixels in which the image border may lie: r_left(x) would be the signal representing the few pixels in which the actual image might border the filmstrip material (x being the coordinate along that row). Now use a simplistic high-pass filter (e.g. f(x) := r_left(x) - r_left(x-1)) to find the sharpest edge in that region (argmax(abs(f(x)))). Use the average of these edges as the border location.
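If you skip the overkill route, steps 1 and 2 plus a simple run-length scan are already enough to split the strip; a minimal sketch with OpenCV (the 20% threshold and the filenames are assumptions):

#include <opencv2/opencv.hpp>
#include <string>
#include <utility>
#include <vector>

int main()
{
    // The scanned strip as one tall 8-bit grayscale image (filename assumed).
    cv::Mat strip = cv::imread("strip.png", cv::IMREAD_GRAYSCALE);
    if (strip.empty()) return 1;

    // Step 1: s(n) = average pixel value per row.
    cv::Mat s;
    cv::reduce(strip, s, 1 /*collapse columns*/, cv::REDUCE_AVG, CV_32F);

    // Step 2: rows darker than 20% of the global mean count as "bar" rows.
    float thresh = 0.2f * (float)cv::mean(strip)[0];

    // Collect [start, end) row ranges of the bright (image) segments.
    std::vector< std::pair<int, int> > frames;
    int start = -1;
    for (int n = 0; n < s.rows; n++)
    {
        bool bright = s.at<float>(n) > thresh;
        if (bright && start < 0) start = n;
        if (!bright && start >= 0) { frames.push_back(std::make_pair(start, n)); start = -1; }
    }
    if (start >= 0) frames.push_back(std::make_pair(start, s.rows));

    // Save each segment as its own file.
    for (size_t k = 0; k < frames.size(); k++)
    {
        cv::Mat frame = strip.rowRange(frames[k].first, frames[k].second);
        cv::imwrite("frame_" + std::to_string(k) + ".png", frame);
    }
    return 0;
}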
By the way, if you want to write a source block that takes your scanned image as input and outputs a stream of pixel row vectors, using GNU Radio would offer you a nice method of having a flow graph of connected signal processing blocks that does exactly what you want, without you having to care about getting data from A to B.
I forgot to add: use the resulting coordinates with something like OpenCV, or any other library capable of reading images, specifying sub-images by coordinates, and saving them as new images.
So I’m trying to find the rotational angle for stripe lines in images like the attached photo.
The only assumption is that the lines are parallel and their orientation is approximately 90 degrees, give or take [say 5 degrees tolerance].
I have to make sure the stripe lines in the result image are 100% vertical. The quality of the images varies, as do their histogram/greyscale values, so methods based on non-adaptive thresholding have already failed for my cases [I'm not interested in thresholding-based methods unless they can be made adaptive]. Also, there are sometimes random black clusters on top of the stripe lines.
What I have done so far:
1) Of course HoughLines is the first option, but I couldn't make it work for all my images; I did have some partial success following this great article:
http://felix.abecassis.me/2011/09/opencv-detect-skew-angle/.
The main reason for failure, to my understanding, was that I needed to fine-tune the parameters for different images: parameters such as Canny/BW/morphological edge detection (if needed), and parameters like minLineLength/maxLineGap/etc. For sure there's a way to hack at this and make it work, but to me this is a fragile solution!
2) What I'm working on right now is dividing the image into a top slice and a bottom slice, then finding the peaks and valleys of each slice, and then basically finding the angle from the width of the image and the translation of the peaks. I'm currently working out which peak of the top slice belongs to which peak of the bottom slice, since there will be some false-positive peaks in my computation due to the black/white clusters on top of the stripe lines.
Example: Location of peaks for slices:
Top slice = {1, 33, 67, 90, 110}
Bottom slice = {3, 14, 35, 63, 90, 104}
I am actually getting vectors like these when extracting peaks. As you can see, the vector lengths can vary; any idea how I can get a grouping like:
{{1,3},{33,35},{67,63},{90,90},{110,104}}
I’m open to any idea about improving any of these algorithms or a completely new approach. If needed, I can upload more images.
If you can get a list of points for a single line, a linear regression will give you a formula for the straight line that best fits the points. A simple trig operation will convert the line formula to an angle.
You can probably use some line thinning operation to turn the stripes into a list of points.
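A minimal sketch of that fit (plain C++; the point list is assumed to come from your thinning step). Since the stripes are near-vertical, it fits x as a function of y, which keeps the slope small and the regression well-behaved:

#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Least-squares fit x = a + b*y over one line's points; returns the
// line's deviation from vertical in degrees.
double deviationFromVertical(const std::vector<Pt> &pts)
{
    const double PI = 3.14159265358979323846;
    if (pts.size() < 2) return 0.0;
    double sy = 0, sx = 0, syy = 0, syx = 0, n = (double)pts.size();
    for (size_t i = 0; i < pts.size(); i++)
    {
        sy  += pts[i].y;
        sx  += pts[i].x;
        syy += pts[i].y * pts[i].y;
        syx += pts[i].y * pts[i].x;
    }
    double b = (n * syx - sy * sx) / (n * syy - sy * sy); // slope dx/dy
    return std::atan(b) * 180.0 / PI;                     // the "simple trig" step
}

Rotating the image by the negative of that angle should make the stripes vertical.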
You can run an accumulator of spatial derivatives along different angles. If you want half-degree precision over the ±5 degree tolerance (20 candidate angles) and a sample of 5 lines of about 1500 pixels each, you have at most 20*5*1500 = 150,000 iterations. You can safely reduce the sampling rate along the line tenfold, which gives you a sample size of 150 points per line and cuts the count to about 15,000 iterations. Somewhere around that point, the operation of straightening the image ought to become the bottleneck.
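A minimal sketch of one way to read that (an interpretation, not a definitive implementation): for each candidate angle, walk a few near-vertical lines through the image and accumulate the absolute differences between successive samples; a line running parallel to the stripes sees nearly constant values, so the best angle is the one with the smallest accumulated derivative.

#include <opencv2/opencv.hpp>
#include <cmath>

// img: 8-bit grayscale. Returns the stripes' deviation from vertical in degrees.
double stripeAngle(const cv::Mat &img)
{
    double bestAngle = 0.0, bestScore = 1e300;
    for (double a = -5.0; a <= 5.0; a += 0.5)        // half-degree steps, +/-5 deg
    {
        double tanA = std::tan(a * CV_PI / 180.0), score = 0.0;
        for (int k = 1; k <= 5; k++)                 // 5 sample lines
        {
            double x0 = img.cols * k / 6.0;
            double prev = -1.0;
            for (int y = 0; y < img.rows; y += 10)   // tenfold-reduced sampling
            {
                int x = (int)(x0 + tanA * y);
                if (x < 0 || x >= img.cols) break;
                double v = img.at<uchar>(y, x);
                if (prev >= 0.0) score += std::fabs(v - prev);
                prev = v;
            }
        }
        if (score < bestScore) { bestScore = score; bestAngle = a; }
    }
    return bestAngle;
}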
I have the following problem: I'm working with gel electrophoresis images [A][B] which show DNA fragments (they appear as white bands). I want to extract and analyze them (on the right side is a standard of known size and concentration, which can be extrapolated to the other three samples). Each sample is loaded into a lane. One task is to find the lanes (in this case 4), and the other is to extract the position in the picture at which a DNA band is present.
I have some problems with finding the bands. I have already tried several things, e.g. pixel comparison, edge detection, corner detection, template matching, and binarized images, but all of them give insufficient results, especially if the pictures are bad (it might be a bad run, with some smearing [C]) or if the bands are too close to each other.
Since I'm not an image expert, could someone drop some keywords for what is usually used in such cases? Actually, I'm not even sure whether the problem is one of image segmentation or pattern recognition?!
Any hints would be highly appreciated (also books for beginners).
Thanks in advance!
[A] http://en.wikipedia.org/wiki/Gel_electrophoresis
[B]
[C]
In this case, profile extraction will probably do the trick: take a vertical slice of the image across a lane (assuming you have a rough idea of the position), and average the pixel values on every row of the slice. This will give you a 1D signal where the bands appear as distinct peaks of varying heights.
You can detect the peak locations by looking for local maxima (not so robust here), or better by finding sufficiently long increasing and decreasing signal value sequences.
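A minimal sketch of the profile idea with OpenCV (laneX and laneWidth stand in for the rough lane position you already know; the simple local-maximum test here is the less robust of the two options):

#include <opencv2/opencv.hpp>
#include <vector>

// gel: 8-bit grayscale image; returns the row positions of detected bands.
std::vector<int> bandRows(const cv::Mat &gel, int laneX, int laneWidth)
{
    // Vertical slice across the lane, averaged along each row -> 1D profile.
    cv::Mat lane = gel.colRange(laneX, laneX + laneWidth);
    cv::Mat profile;
    cv::reduce(lane, profile, 1 /*collapse columns*/, cv::REDUCE_AVG, CV_32F);

    // Bands: rows brighter than both neighbours and above the mean level.
    float mean = (float)cv::mean(profile)[0];
    std::vector<int> peaks;
    for (int r = 1; r + 1 < profile.rows; r++)
    {
        float v = profile.at<float>(r);
        if (v > mean && v > profile.at<float>(r - 1) && v >= profile.at<float>(r + 1))
            peaks.push_back(r);
    }
    return peaks;
}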
I would rather call this a segmentation problem.
Final hint: the lanes might also be located by analysing the profile obtained by averaging over the columns.