I am using Qt 4.8.6 to display multiple radar videos.
At the moment I receive about 4096 azimuths (a full 360°) per 2.5 seconds per video.
I display my image using a class derived from QGraphicsObject (see here), using one of the RGB channels for each video.
Per azimuth I get the angle and an array of 8192 range bins, and my image is 1024x1024 pixels. For every pixel (I go through every x-coordinate and, for each azimuth, determine the minimum and maximum y-coordinate it covers there), I check which range bins are present at that pixel and write the largest value into my image array.
My problems
Calculating a single azimuth takes about 1 ms, which is far too slow. (I receive two azimuths roughly every 600 microseconds, and later there may be even more video channels.)
I want to zoom and pan my image, and so far I have thought of two ways to do that:
Use an image array at full size and zoom/pan the QGraphicsScene directly ("virtually").
That would make the array 16384x16384x4 bytes (1 GiB), which is far too big (I cannot allocate that much memory).
Save multiple images for different scale factors and offsets, but then my transformation algorithm (which is already slow) would have to run several times, and a new zoom or offset would only show up after the full 2.5 seconds.
Can you think of any better methods to do that?
Are there any standard techniques for checking my algorithm for better performance?
I know this is a very specific question, but since my mentor is away for the next few days, I will give it a try here.
Thank you!
I'm not sure why you are using a QGraphicsScene for this scenario. Have you considered converting your data into a raster image and presenting it as a bitmap?
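For example, something along these lines (a rough sketch only: the lookup-table idea, the function names and the [azimuth][rangebin] buffer layout are assumptions on my part, not taken from your code):

```cpp
// Rough sketch: precompute which (azimuth, range bin) feeds each output pixel,
// then per update only copy values into a QImage and show it as a bitmap
// (e.g. via a QGraphicsPixmapItem). Zoom/pan is done with the view/item
// transform, not by resampling the radar data.
#include <QImage>
#include <QVector>
#include <QtMath>
#include <cmath>

static const int IMG_SIZE      = 1024;
static const int NUM_AZIMUTHS  = 4096;
static const int NUM_RANGEBINS = 8192;

struct PolarIndex { int azimuth; int rangebin; };

// Built once at startup. Assumes azimuth 0 points along -x; adjust to your geometry.
QVector<PolarIndex> buildLookup()
{
    QVector<PolarIndex> lut(IMG_SIZE * IMG_SIZE);
    const double cx = IMG_SIZE / 2.0, cy = IMG_SIZE / 2.0;
    const double binsPerPixel = NUM_RANGEBINS / (IMG_SIZE / 2.0);
    for (int y = 0; y < IMG_SIZE; ++y) {
        for (int x = 0; x < IMG_SIZE; ++x) {
            const double dx = x - cx, dy = y - cy;
            const double r  = std::sqrt(dx * dx + dy * dy);
            const double a  = std::atan2(dy, dx);          // -pi..pi
            PolarIndex p;
            p.azimuth  = int((a + M_PI) / (2.0 * M_PI) * NUM_AZIMUTHS) % NUM_AZIMUTHS;
            p.rangebin = qMin(int(r * binsPerPixel), NUM_RANGEBINS - 1);
            lut[y * IMG_SIZE + x] = p;
        }
    }
    return lut;
}

// Per update. img must be a 1024x1024 QImage::Format_RGB32 (or ARGB32) image.
// For simplicity this rewrites every pixel; in practice you would group the
// lookup entries per azimuth and rewrite only the pixels of azimuths that changed.
void updateImage(QImage &img, const QVector<PolarIndex> &lut,
                 const QVector<QVector<quint8> > &video /* [azimuth][rangebin] */)
{
    for (int y = 0; y < IMG_SIZE; ++y) {
        QRgb *line = reinterpret_cast<QRgb *>(img.scanLine(y));
        for (int x = 0; x < IMG_SIZE; ++x) {
            const PolarIndex &p = lut[y * IMG_SIZE + x];
            line[x] = qRgb(video[p.azimuth][p.rangebin], 0, 0);   // one channel per video
        }
    }
}
```

With this, displaying the QImage is cheap, and zooming/panning via QGraphicsView::scale()/translate() only ever touches the 1024x1024 image, never a 16384x16384 array.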
Just a straightforward question. I'm trying to make the best possible choice here, and there is too much information for a "semi-beginner" like me.
Well, at this point I'm experimenting with screen-size qualifiers for my layout (activity_main.xml for normal, large, small) and with different densities (xhdpi, xxhdpi, mhdpi), and, if I may say so myself, it is a mess. Do I have to create every possible option to support all screen sizes and densities? Or am I doing something really wrong here? What is the best approach for this?
My layouts are now named like activity_main (normal_land_xxhdpi), and I have serious doubts about that.
I'm using the latest version of Android Studio, of course. My app is a single activity with buttons, TextViews and a few other views. It does not have any fragments or intents whatsoever, so I think this should be an easy task, but it isn't for me.
Hope you guys can help. I don't think I need to put any code here, but if needed, I can add it.
If you want to make a responsive UI for every device, you need to learn about a few things first:
-Difference between PX, DP:
https://developer.android.com/training/multiscreen/screendensities
There you can see that dp is a density-independent unit that Android uses to work out how many physical pixels something, say a line, should span, so that proportions are kept across screens with different sizes and densities.
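For example, the conversion is px = dp * (dpi / 160), where 160 dpi (mdpi) is the baseline density. A 48 dp button is therefore 48 px on an mdpi screen, 96 px on xhdpi (320 dpi) and 144 px on xxhdpi (480 dpi), yet it occupies the same physical size on all three.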
-Resolution, Density and Ratio:
The resolution is how many pixels a screen has in width and height. Those pixels can be physically smaller or bigger, so for instance, if screen A is 10x10 px and its pixels are half the size of the pixels of screen B, which is also 10x10 px, then A is physically half the size of B even though both are 10x10 px.
That is why density exists: it is how many pixels your screen packs into every inch, so it measures the sharpness of a screen, where more pixels per inch (ppi) is better.
The aspect ratio is the proportion between width and height. For example, a screen of 1000x2000 px has a 1:2 ratio, and a Full HD screen of 1920x1080 is 16:9 (16 pixels of width for every 9 pixels of height). A 1:1 ratio is a square screen.
-Standard device resolutions
You can find the most common measurements at:
https://material.io/resources/devices/
When designing a UI, you work with the dp measurements. You will notice that even when the pixel resolutions of two screens differ, their dp sizes can be the same because they have different densities.
Now, the right way is to go with ConstraintLayout, using dp measurements to place your views on screen; with correct constraints the content will adapt to other screen sizes.
Anyway, you will still need to make additional XML layouts for some cases:
-Different orientation
-Different ratio
-Different DP resolution (not px)
For every activity, you need to provide both a portrait and a landscape design. If another device has a different ratio, you may need to adjust the height or width because the proportions of the screens aren't the same. Finally, even if the ratio is the same, the dp resolution can differ: maybe you designed an activity for 640x360 dp and another device is 853x480 dp, which means you have more vertical space there.
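For example, you could keep res/layout/activity_main.xml as the default, add res/layout-land/activity_main.xml for landscape, and res/layout-sw600dp/activity_main.xml for devices whose smallest width is at least 600 dp (typical tablets); Android then picks the best match automatically. (The file names here are just an illustration.)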
You can read more here:
https://developer.android.com/training/multiscreen/screensizes
And learn how to use constraintLayout correctly:
https://developer.android.com/training/constraint-layout?hl=es-419
Note:
It may seem like a lot of work for every activity, but you make the first design once and then just copy it to another XML file with the appropriate qualifiers and tweak the dp values to adjust the views (instead of starting from scratch), which is much faster.
I remember a story about someone filtering images with a spam filter that he fed with some training data.
I have now reached the point where I need exactly something like this.
I have a lot of different types of images (mainly of people, e.g. selfies, group pictures, portraits, ...), but I only want a certain type of them (e.g. only male subjects).
With the right algorithm and training data I think it's possible to get to the point where I can pass an image in and get back true or false depending on whether it matches my type.
I had a look at a few face/gender detection APIs, but none of them worked for me, which is why I want to try the spam-filter approach; it seems like a fun idea.
Here's what I need:
a trainable spam-filter algorithm/code sample/API
has to work offline
preferably for C# or Java
I already spent a few hours trying different things and googling; now I'm here and would like to get your opinion on this problem and on the solution you think is appropriate.
Buddha
There is a simple image comparison algorithm that you can read about here: compareImages php class.
Basically the way it works is this:
it takes an image (a cropped image works best), scales it down to 8x8 pixels, converts it to greyscale, and then calculates the mean (average) value of the pixels.
Then it goes over all 64 pixels of the scaled image and, for every pixel whose value is >= the mean, it writes a "1", and for every pixel whose value is < the mean, it writes a "0", resulting in a 64-bit "signature" of 0s and 1s.
This signature is what identifies the image; you can then save it in some kind of database as your "learned" filter.
Then, if an email arrives with some images, you can just crop them, scan them, produce a signature, and see if it matches any known signature in your database.
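In code, those steps boil down to something like this (written here in C++ on a raw greyscale buffer to stay library-independent; the same logic ports directly to C# or Java, and the Hamming-distance helper at the end is my addition for tolerating near-matches instead of only exact ones):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of the 8x8 average-hash described above, on a raw 8-bit greyscale
// buffer (row-major, width x height, both assumed >= 8).
uint64_t averageHash(const std::vector<uint8_t> &grey, size_t width, size_t height)
{
    // 1. Scale down to 8x8 with a simple box filter (average each cell).
    double cell[64] = {0.0};
    for (size_t y = 0; y < 8; ++y) {
        for (size_t x = 0; x < 8; ++x) {
            const size_t x0 = x * width / 8,  x1 = (x + 1) * width / 8;
            const size_t y0 = y * height / 8, y1 = (y + 1) * height / 8;
            double sum = 0.0;
            for (size_t sy = y0; sy < y1; ++sy)
                for (size_t sx = x0; sx < x1; ++sx)
                    sum += grey[sy * width + sx];
            cell[y * 8 + x] = sum / double((x1 - x0) * (y1 - y0));
        }
    }

    // 2. Mean value of the 64 cells.
    double mean = 0.0;
    for (double v : cell) mean += v;
    mean /= 64.0;

    // 3. One bit per cell: 1 if the cell is >= the mean, otherwise 0.
    uint64_t signature = 0;
    for (int i = 0; i < 64; ++i)
        if (cell[i] >= mean)
            signature |= uint64_t(1) << i;
    return signature;
}

// Number of differing bits between two signatures; 0 means identical,
// a small value means "very probably the same picture".
int hammingDistance(uint64_t a, uint64_t b)
{
    uint64_t x = a ^ b;
    int count = 0;
    while (x) { x &= x - 1; ++count; }
    return count;
}
```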
The good things about this algorithm are:
It is very fast and scalable (scaling an image down to 8x8 is fast, and scanning the pixels as described is fast too).
Because it converts the image to greyscale and scales it down, it can still recognise the same image even if its colours have shifted or its size has changed.
Because the signatures are only 64 bits, they don't take up a lot of space in your database either.
Hope this helps.
My app uses a texture atlas and addresses parts of it to display items using glTexCoordPointer.
It works well with power-of-two textures, but I wanted to use NPOT to reduce the amount of memory used.
The picture itself loads fine with linear filtering and clamp-to-edge wrapping (the displayed content does come from the picture, alpha included), but the display is distorted.
The coordinates are not the correct ones, and the "shape" looks more like a trapezoid than a rectangle.
I guessed I had to play with glEnable(), passing GL_TEXTURE_2D in the POT case and GL_APPLE_texture_2D_limited_npot in the NPOT case, but I cannot find a way to do so.
Also, I do not have GL_TEXTURE_RECTANGLE_ARB; I don't know if that is an issue...
Has anyone had the same kind of problem?
Since OpenGL 2 (i.e. for about 10 years now) there have been no constraints on the size of a regular texture. You can use whatever image size you want; it will just work.
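In particular, there is no separate "NPOT mode" to enable: the texture is still a plain GL_TEXTURE_2D and the coordinates stay normalized to 0..1 over the whole atlas (only GL_TEXTURE_RECTANGLE, which you don't have, uses texel coordinates). Under OpenGL ES 1.x with the limited-NPOT extension the only extra restrictions are "no mipmaps" and clamp-to-edge wrapping. A sketch of the setup, where textureId, x, y, w, h, atlasW and atlasH are placeholders for your own values:

```cpp
// Placeholders: textureId is the atlas texture object, (x, y, w, h) the
// sub-image inside it, atlasW/atlasH the real (NPOT) atlas size in pixels.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);   // no mipmaps
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Texture coordinates are fractions of the *actual* atlas size. A common source
// of wrong coordinates with NPOT atlases is dividing by a padded power-of-two
// size instead of the real one.
const GLfloat u0 = x / GLfloat(atlasW), u1 = (x + w) / GLfloat(atlasW);
const GLfloat v0 = y / GLfloat(atlasH), v1 = (y + h) / GLfloat(atlasH);
const GLfloat texCoords[] = { u0, v0,  u1, v0,  u0, v1,  u1, v1 };
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
```

If the content itself looks sheared rather than just mis-cropped, also check glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before uploading, since NPOT widths often break the default 4-byte row alignment.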
I have a large array of points, which updates dynamically. For the most part, only certain (relatively small) parts of the array get updated. The goal of my program is to build and display a picture using these points.
If I built a picture directly from the points, it would be 8192x8192 pixels in size. I believe an optimization would be to reduce the array in size. My application has two screen areas (one is a magnified/zoomed-in view of the other). Additionally, I will need to pan this picture in either of the screen areas.
My approach for optimization is as follows.
Take the source array of points and reduce it by a scaling factor for the first screen area.
Do the same for the second area, but with a larger scaling factor.
Render these two arrays into two FBOs.
Use the FBOs as textures (to be able to pan the picture).
When updating the picture, re-render only the changed area.
Can you suggest ways to speed this up? My current implementation runs extremely slowly.
You will hardly be able to optimize this a lot if you don't have the hardware to run it at an adequate rate. Even if you render in different threads to FBOs and then compose the result, your bottleneck is likely to remain. 67 million data points is nothing to sneeze at, even for modern GPUs.
Try not to update unnecessarily: update only what changes, render only what is both updated and visible, and try to minimize the size of your components, e.g. use a shorter data type if possible.
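As a concrete illustration of "render only what's updated": with an FBO you can restrict the redraw to the dirty rectangle via the scissor test and leave the rest of the texture untouched. A sketch (fbo, fboWidth, fboHeight, the dirty rectangle and drawPointsInRect() are placeholders for your own code):

```cpp
// Placeholders: fbo, fboWidth/fboHeight describe the offscreen target,
// (dirtyX, dirtyY, dirtyW, dirtyH) is the changed region of the picture,
// drawPointsInRect() is whatever currently renders your points.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, fboWidth, fboHeight);

glEnable(GL_SCISSOR_TEST);
glScissor(dirtyX, dirtyY, dirtyW, dirtyH);     // rasterization is clipped to this box
glClear(GL_COLOR_BUFFER_BIT);                  // the clear is clipped too
drawPointsInRect(dirtyX, dirtyY, dirtyW, dirtyH);
glDisable(GL_SCISSOR_TEST);

glBindFramebuffer(GL_FRAMEBUFFER, 0);
```

The FBO's colour texture is then drawn as a screen-aligned quad in each of the two screen areas; panning and the magnified view become pure texture-coordinate/transform changes, so they never require re-rendering the points.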
I feel like I have a very typical image-comparison problem, but my searches are not turning up answers.
I want to transmit still images of a desktop every X seconds. Currently we send a whole new image if the old and new ones differ by even one pixel. Very often only something very minor changes, like the clock or an icon, and it would be great if I could send just the changed portion to the server and have it update the image there (far less bandwidth).
The plan I envision is to get a rectangle of an area that has changed. For instance, if the clock changed, screen capture the smallest rectangle that encompasses the changes, and send it to the server along with its (x, y) coordinate. The server will then update the old image by overlaying the rectangle at the coordinate specified.
Is there any algorithm or library that accomplishes this? It doesn't have to be perfect; let's say I'll always send a single rectangle that encompasses all the changes (even if several smaller rectangles would be more efficient).
My other idea was to compute a diff between the new and old images, saved as a series of transformations. Then I would just send that series of transformations to the server, which would apply them to the old image to get the new one. Not sure if this is even possible; just a thought.
Any ideas? Libraries I can use?
Compare each pixel of the previous frame with the corresponding pixel of the next frame, and keep track of which pixels have changed?
Since you are only looking for a single box to encompass all the changes, you actually only need to keep track of the min-x, min-y (not necessarily from the same pixel), max-x, and max-y. Those four values will give you the edges of your rectangle.
Note that this job (comparing the two frames) should really be off-loaded to the GPU, which could do this significantly faster than the CPU.
Note also that what you are trying to do is essentially a home-grown lossless streaming video compression algorithm. Using one from an existing library would not only be much easier, but also probably much more performant.
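For reference, a minimal sketch of that bounding-box scan (in C++ on raw 32-bit pixel buffers of equal size; the struct and function names are just for illustration):

```cpp
#include <cstdint>
#include <vector>

// Compare corresponding pixels of the previous and current frame and track the
// min/max x and y of every pixel that changed.
struct DirtyRect { int x, y, width, height; bool empty; };

DirtyRect changedRect(const std::vector<uint32_t> &prev,
                      const std::vector<uint32_t> &curr,
                      int width, int height)
{
    int minX = width, minY = height, maxX = -1, maxY = -1;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (prev[y * width + x] != curr[y * width + x]) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    if (maxX < 0) {                               // no pixel changed
        DirtyRect none = { 0, 0, 0, 0, true };
        return none;
    }
    DirtyRect r = { minX, minY, maxX - minX + 1, maxY - minY + 1, false };
    return r;
}
// Send the current frame's pixels inside the returned rectangle together with
// its (x, y); the server overlays them onto its copy of the previous frame.
```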
This is from an algorithmic point of view; I'm not sure whether it is easier to implement.
Basically, XOR the two images and compress the result with any entropy-coding scheme (Huffman coding, for example). Since the two frames are nearly identical, the XOR is almost all zeros, which compresses extremely well.
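A sketch of that idea in C++, using zlib's compress() (DEFLATE, i.e. LZ77 plus Huffman coding) on two equally sized raw frame buffers; error handling is omitted:

```cpp
#include <cstdint>
#include <vector>
#include <zlib.h>   // link with -lz

// XOR the two frames byte-by-byte, then DEFLATE the (mostly zero) delta.
std::vector<uint8_t> xorDelta(const std::vector<uint8_t> &prev,
                              const std::vector<uint8_t> &curr)
{
    std::vector<uint8_t> delta(curr.size());
    for (size_t i = 0; i < curr.size(); ++i)
        delta[i] = prev[i] ^ curr[i];

    uLongf compressedSize = compressBound(delta.size());
    std::vector<uint8_t> compressed(compressedSize);
    compress(compressed.data(), &compressedSize, delta.data(), delta.size());
    compressed.resize(compressedSize);
    return compressed;   // send this; the receiver inflates it and XORs it back
}
```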
I know I am very late responding, but I only found this question today.
I have done some work on image differencing, but the code was written in Java. Have a look at the link below; it may help:
How to find rectangle of difference between two images
The code finds the differences and keeps the rectangles in a LinkedList. You can use the LinkedList of rectangles to patch the differences onto the base image.
Cheers!