Automatically detect an image collage

I'm trying to automatically detect whether an image is a collage vs. a single photograph. I'm not too concerned with edge cases. What I'm trying to solve is rectangular collages like the one below. I've tried edge detection (Canny) + vertical and horizontal Sobel filtering + line detection (Hough transform) to try to identify perpendicular lines, but I'm getting too many false positives. I'm not very good at image processing, so any input would be welcome. Thanks!

This is a challenge, because images "have the right" to contain verticals and horizontals, and there can be sudden changes in the background, due to occlusion for instance.
But one clue can help you: the borders between two pictures are perfectly straight, "unnaturally" straight, and they are long.
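To make that clue concrete, here is a minimal sketch of the idea, assuming OpenCV in Python: detect edges, then keep only Hough lines that are axis-aligned and span most of the image, which is what the seams of a rectangular collage look like. The file name and all thresholds are placeholders to tune on your own data.

```python
# Minimal sketch: look for "unnaturally" straight, axis-aligned lines that span
# most of the image, i.e. the seams of a rectangular collage.
# Assumes OpenCV and numpy; "collage.jpg" and the thresholds are placeholders.
import cv2
import numpy as np

img = cv2.imread("collage.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
h, w = gray.shape

edges = cv2.Canny(gray, 50, 150)

# Only consider lines that are long relative to the image size.
min_len = int(0.8 * min(h, w))
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                        minLineLength=min_len, maxLineGap=5)

seam_candidates = 0
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        dx, dy = abs(x2 - x1), abs(y2 - y1)
        # Keep only near-vertical or near-horizontal lines that cross
        # most of the image: collage seams are axis-aligned and long.
        if dx < 3 and dy >= 0.8 * h:
            seam_candidates += 1
        elif dy < 3 and dx >= 0.8 * w:
            seam_candidates += 1

print("probably a collage" if seam_candidates >= 1 else "probably a single photo")
```

Regular photos do contain verticals and horizontals, but very few of them run edge to edge at exactly 0 or 90 degrees, which is why the length and orientation filters cut down the false positives.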

Related

MATLAB image processing - edge detection algorithm

Hi, the first picture represents the damage and the delamination that follows.
What I want to do is remove the intact area and visualize only the damage area that is marked by black curves (I want everything to be white, or blank, except the damage area).
I tried a thresholding method, but it doesn't seem to be effective.
I found out that histogram equalization and the Laplacian of Gaussian filter are useful for edge detection in image processing.
Are there any other image processing tools to get what I want?
Would histogram equalization or the Laplacian of Gaussian filter be good enough?
Any tips are welcome!
Thanks in advance, guys.
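The question is tagged MATLAB, but the steps translate directly; here is a minimal Python/OpenCV sketch of one way to isolate the dark damage curves: smooth, take the Laplacian (i.e. a Laplacian of Gaussian), combine it with a darkness threshold, and close small gaps. The file name, sigma and thresholds are placeholders.

```python
# Minimal sketch: suppress the intact area and keep only the dark damage curves.
# Assumes OpenCV/numpy; the file name, sigma and thresholds are placeholders.
import cv2
import numpy as np

img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)

# Laplacian of Gaussian: smooth first, then take the Laplacian. The Gaussian
# suppresses fine texture in the intact area; the Laplacian highlights the curves.
blur = cv2.GaussianBlur(img, (0, 0), sigmaX=2.0)
log = cv2.Laplacian(blur, cv2.CV_64F)

# Keep pixels that are both dark in the original and strong in the LoG response.
damage = (img < 80) & (np.abs(log) > 4.0)

# Close small gaps in the curves, then write out: white everywhere except the damage.
kernel = np.ones((3, 3), np.uint8)
mask = cv2.morphologyEx(damage.astype(np.uint8) * 255, cv2.MORPH_CLOSE, kernel)
out = np.where(mask > 0, 0, 255).astype(np.uint8)
cv2.imwrite("damage_only.png", out)
```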

rendering thousands of unique text labels using three.js

I have a THREE.PointCloud that contains a large number of points. In the screenshot above, each point is mapped to a small purplish circle.
I'm trying to figure out a way to display text labels beside each point.
My first attempt was to dynamically create text via the canvas and display the textures inside THREE.Sprite objects. This achieved the look I was going for, but the performance hit was significant. I quickly learned this wouldn't scale past just a few hundred points.
I'm thinking there may be a way to do this with shaders, but I cannot figure out the approach. The method used in Animating a Million Letters used a texture of glyphs and mapped the letters to the glyphs.
I'm thinking I could do the same by creating a THREE.Geometry object and pushing the vertices and faces for each letter of the labels. One requirement is that I want the labels to billboard so that they always face the camera. Billboarding in the vertex shader seems to be pretty straightforward.
My feeling is that there are already some examples out there that combine all of these ideas into a single working example. Any suggestions on how to do large-scale text labels would be greatly appreciated. Thanks!
I figured it out. You have to use a glyph sheet and render each character as a quad using two faces.
The process is outlined pretty well in Animating a Million Letters.
I can get away with about a million on-screen characters before performance starts taking a hit.
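The glyph-sheet bookkeeping itself is language-agnostic. Below is a minimal Python sketch of it, assuming a hypothetical 16x16-cell ASCII atlas: for each character of a label it emits one quad (two triangles) plus the UV rectangle that selects the glyph. In three.js the same arrays would go into buffer attributes, and the billboarding would be done per label in the vertex shader.

```python
# Minimal sketch of the glyph-sheet bookkeeping: for each character of a label,
# emit one quad (two triangles) plus the UV rectangle that selects its glyph in
# a hypothetical 16x16-cell ASCII atlas. The atlas layout, cell size and
# character dimensions are assumptions for illustration.
GRID = 16                 # atlas is GRID x GRID cells, indexed by character code
CELL = 1.0 / GRID         # size of one cell in UV space

def label_quads(text, origin, char_w=0.1, char_h=0.15):
    """Return (positions, uvs) lists for one label anchored at `origin`."""
    ox, oy, oz = origin
    positions, uvs = [], []
    for i, ch in enumerate(text):
        code = ord(ch)
        col, row = code % GRID, code // GRID
        u0, v0 = col * CELL, 1.0 - (row + 1) * CELL   # bottom-left of the glyph
        u1, v1 = u0 + CELL, v0 + CELL                 # top-right of the glyph
        x0, x1 = ox + i * char_w, ox + (i + 1) * char_w
        y0, y1 = oy, oy + char_h
        # two triangles per character quad
        positions += [(x0, y0, oz), (x1, y0, oz), (x1, y1, oz),
                      (x0, y0, oz), (x1, y1, oz), (x0, y1, oz)]
        uvs += [(u0, v0), (u1, v0), (u1, v1),
                (u0, v0), (u1, v1), (u0, v1)]
    return positions, uvs

pos, uv = label_quads("label 42", origin=(0.0, 0.0, 0.0))
print(len(pos) // 6, "characters ->", len(pos), "vertices")
```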

Blob detection using connected components labelling algorithm

I'm trying to write an algorithm to detect blobs using connected component labeling on an image. I'm having difficulty with how to merge different labels when they are connected diagonally. Doing it for horizontally and vertically connected pixels seems easy, but I can't figure out a way to handle pixels that are connected diagonally, because if a label changes, the image then needs to be relabeled with respect to that change for each changed pixel. I'm confused. Can you explain how to handle this? I may be totally wrong about what I said (I have achieved reasonable results doing only the horizontal and vertical connected components), but it isn't the correct way. Please advise on how to approach connected component labeling accurately. I'm only using arrays of the image's dimensions for comparing and labelling.
Well, you are already able to do horizontal and vertical; just add the diagonal directions as well, so that you are looking at neighboring pixels in all eight directions as opposed to just the vertical and horizontal ones.
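As a concrete version of "look in all directions", here is a minimal Python sketch of connected component labeling with 8-connectivity, using a flood fill per blob; the tiny test image is a placeholder. (A two-pass algorithm with union-find works too, and scipy.ndimage.label with a 3x3 structuring element gives the same result out of the box.)

```python
# Minimal sketch: connected component labeling with 8-connectivity (the four
# axis-aligned neighbors plus the four diagonals) using a flood fill per blob.
# The small test image is a placeholder.
import numpy as np
from collections import deque

# all 8 neighbor offsets: horizontal, vertical and diagonal
NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1),
             ( 0, -1),          ( 0, 1),
             ( 1, -1), ( 1, 0), ( 1, 1)]

def label_blobs(binary):
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                next_label += 1
                labels[y, x] = next_label
                queue = deque([(y, x)])
                while queue:            # flood fill this blob
                    cy, cx = queue.popleft()
                    for dy, dx in NEIGHBORS:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
    return labels, next_label

img = np.array([[1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 0]], dtype=bool)
labels, count = label_blobs(img)
print(count)   # 2 blobs: each diagonal chain merges under 8-connectivity
```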

how to limit an image to its real shape

How can I use an image so that whenever I do collision detection in XNA, the collision happens only for the area of the shape, not the area around it?
For example, when I use the picture below, I want collision detection to happen only when the arrow shape itself is touched.
Currently, the collision detection happens in the area shown in this picture.
How can I limit it to the area of the shape only?
What you can also do is create two rectangles. That makes the false-positive area (the area where the rectangle is but the image isn't) a bit smaller. But if you need this to be pixel-exact, you have to use the resource-expensive per-pixel collision.
You shouldn't try restricting the image shape, because regardless of your efforts you will still have a rectangle. What you need to do is detect pixel collisions. It is a fairly extensive topic; you can read more about a Windows Phone-specific XNA implementation here.
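For reference, here is a language-agnostic sketch of the per-pixel test both answers mention, written in Python with numpy: intersect the two bounding rectangles first, and only then check whether any pixel in the overlap is non-transparent in both sprites. The alpha masks and positions are placeholders; in XNA you would fetch the color data once with Texture2D.GetData and run the same loop in C#.

```python
# Minimal sketch of per-pixel collision: only if the bounding rectangles overlap,
# check whether any pixel in the overlap is opaque in both sprites.
# The alpha masks and positions are placeholders.
import numpy as np

def pixel_collision(alpha_a, pos_a, alpha_b, pos_b):
    """alpha_* are 2D arrays of alpha values, pos_* are (x, y) top-left corners."""
    ax, ay = pos_a
    bx, by = pos_b
    ah, aw = alpha_a.shape
    bh, bw = alpha_b.shape

    # intersection of the two bounding rectangles
    left, right = max(ax, bx), min(ax + aw, bx + bw)
    top, bottom = max(ay, by), min(ay + ah, by + bh)
    if left >= right or top >= bottom:
        return False                     # rectangles don't even overlap

    # slice the overlapping region out of both alpha masks
    a_patch = alpha_a[top - ay:bottom - ay, left - ax:right - ax]
    b_patch = alpha_b[top - by:bottom - by, left - bx:right - bx]

    # collision only where both sprites have a visible pixel
    return bool(np.any((a_patch > 0) & (b_patch > 0)))

# tiny example: a diagonal "arrow" stroke vs. a solid block
arrow = np.eye(4, dtype=np.uint8) * 255
block = np.full((4, 4), 255, dtype=np.uint8)
print(pixel_collision(arrow, (0, 0), block, (3, 0)))   # True: the opaque corner overlaps
```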

Generating fast color rectangles

I am designing a more powerful color picker for Qt and looking for some advice. How would one go about generating fast, real-time color rectangles such as the ones found in Photoshop (for HSB and RGB)? I was originally thinking of using QImage and scanLine() to calculate all the pixels individually, but this would probably be too slow.
I was thinking it would be better to write an OpenGL shader. As I recall, you can assign colors to vertices and it will interpolate the colors between them for you. I just have no idea how this would be done in Qt, or whether it is even worth the effort.
I am using QGraphicsView to display the rectangle. Any advice would be appreciated.
OK, so looking into QGradient a bit more: could you not use multiple QGradients to create the effect you need?
For the last of the 3 examples, you could create a single gradient with multiple stops for the colours themselves, then overlay this with a QGradient of black (alpha 0) to black (alpha 255) with appropriate stops to get the gradient to come in at the right point.
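A minimal sketch of that layered-gradient idea, assuming PyQt5 purely for illustration (the question is C++ Qt, but the classes and calls are the same): a horizontal gradient with several colour stops, overlaid with a vertical black gradient running from alpha 0 to alpha 255, painted into a QImage that you can then show in the QGraphicsView.

```python
# Minimal sketch of the layered-gradient idea, assuming PyQt5: a horizontal
# gradient with several colour stops, overlaid with a vertical black gradient
# from fully transparent to fully opaque. The size and colours are placeholders.
from PyQt5.QtGui import QImage, QPainter, QLinearGradient, QColor, QBrush

W, H = 360, 200
img = QImage(W, H, QImage.Format_ARGB32)

hue = QLinearGradient(0, 0, W, 0)            # left-to-right colour sweep
for i, name in enumerate(["red", "yellow", "green", "cyan", "blue", "magenta", "red"]):
    hue.setColorAt(i / 6.0, QColor(name))

shade = QLinearGradient(0, 0, 0, H)          # top-to-bottom darkening overlay
shade.setColorAt(0.0, QColor(0, 0, 0, 0))    # transparent black at the top
shade.setColorAt(1.0, QColor(0, 0, 0, 255))  # opaque black at the bottom

painter = QPainter(img)
painter.fillRect(0, 0, W, H, QBrush(hue))    # base colours
painter.fillRect(0, 0, W, H, QBrush(shade))  # shading layer on top
painter.end()

img.save("picker.png")   # e.g. wrap in a QPixmap and add it to the QGraphicsScene
```

Painting into a QImage like this only happens when the picker is resized or its stops change, which keeps it fast enough that the OpenGL route may not be needed.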
