The MSDN documentation is (somewhat) clear about the following two facts about GDI Pens:
A Cosmetic pen (created via CreatePen or ExtCreatePen w/ PS_COSMETIC) must be 1 unit wide (well, <= 1, but let's not go there).
A Geometric pen (ExtCreatePen w/ PS_GEOMETRIC) must be solid (PS_SOLID only, no PS_DASH, etc.). It can, however, draw fatter lines. This is clearly documented in the link above as a Windows 9x-only restriction (I'm dumb). In my defense, (bad) comments and (broken) logic in my code led me to believe otherwise, and some other googled articles must have been written considering only Windows 9x.
Why can I violate these rules and have GDI happily draw with these pens?
I can create fat (width = 10, for example) cosmetic pens and dashed Geometric pens. Heck, I can create a fat, dashed geometric pen!
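For concreteness, a minimal sketch of the kind of pen creation I'm talking about (not my exact code):

#include <windows.h>

HPEN CreateFatDashedGeometricPen()
{
    LOGBRUSH lb = {};
    lb.lbStyle = BS_SOLID;
    lb.lbColor = RGB(255, 0, 0);

    // PS_GEOMETRIC | PS_DASH with a width of 10 -- exactly the combination
    // the documentation appears to disallow, yet it draws fine for me.
    return ExtCreatePen(PS_GEOMETRIC | PS_DASH | PS_ENDCAP_FLAT,
                        10, &lb, 0, NULL);
}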
These pens usually seem to work fine. The only problem I've seen is in Polyline when I pass very large arrays of points - it renders the lines very slowly. However, Polyline acts strangely with large arrays in general - it just acts differently with the bad pens. (My other Polyline problems may be another question...)
Is it ever safe to use wide Cosmetic pens or wide Geometric with patterns?
In general you should adhere to the documented API, otherwise you risk relying on OS version specific behaviour.
The ExtCreatePen restrictions you describe (e.g., no PS_DASH with PS_GEOMETRIC) only apply to Win9x, not WinNT, so on NT/2000/XP your "fat, dashed geometric pen" shouldn't be a problem. Also note that Polyline has some limitations on Win9x.
If you want dashed lines, I'd suggest using PS_USERSTYLE so that you control the lengths of the dashes and gaps, rather than relying on whatever default PS_DASH gives you.
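A minimal sketch of what I mean (the dash/gap lengths are just example values):

#include <windows.h>

HPEN CreateUserStyleDashPen(int width)
{
    LOGBRUSH lb = {};
    lb.lbStyle = BS_SOLID;
    lb.lbColor = RGB(0, 0, 0);

    // The style array gives dash and gap lengths explicitly, in style units,
    // instead of relying on whatever PS_DASH happens to produce.
    DWORD dashes[] = { 12, 4 };
    return ExtCreatePen(PS_GEOMETRIC | PS_USERSTYLE, width, &lb,
                        sizeof(dashes) / sizeof(dashes[0]), dashes);
}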
I'm looking for an algorithm (or algebra) for 2D box alignment. The alignment of (recursive) boxes is implemented in typesetting systems like TeX and Lout, but also in web browsers. For example, boxes can be aligned horizontally or vertically, spread over available space, and recursively composed. Are there papers or implementations that implement such systems outside of typesetting? Is there a consensus on how such systems are best described?
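To illustrate the kind of algebra I mean, here is a toy sketch (my own, not taken from any of the systems mentioned) in which a box has a size and horizontal/vertical composition produces a new box that positions its children:

#include <algorithm>
#include <memory>
#include <utility>
#include <vector>

struct Box {
    double w = 0, h = 0;                                 // natural size
    std::vector<std::shared_ptr<Box>> children;
    std::vector<std::pair<double, double>> offsets;      // child positions, relative to this box
};

// Place boxes side by side, top-aligned; the result is wide enough for all of them.
std::shared_ptr<Box> HBox(const std::vector<std::shared_ptr<Box>>& items)
{
    auto out = std::make_shared<Box>();
    double x = 0;
    for (const auto& b : items) {
        out->children.push_back(b);
        out->offsets.push_back({x, 0});
        x += b->w;
        out->h = std::max(out->h, b->h);
    }
    out->w = x;
    return out;
}

// Stack boxes vertically, left-aligned; spreading over available space would add
// stretchable glue between the children, as TeX does.
std::shared_ptr<Box> VBox(const std::vector<std::shared_ptr<Box>>& items)
{
    auto out = std::make_shared<Box>();
    double y = 0;
    for (const auto& b : items) {
        out->children.push_back(b);
        out->offsets.push_back({0, y});
        y += b->h;
        out->w = std::max(out->w, b->w);
    }
    out->h = y;
    return out;
}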
Edit: The Haskell Diagrams package looks like a pretty thorough approach to composing graphical objects. It can deal with any shape, not just boxes. Its weak spot seems to be dealing with text, which is very important for typesetting, web browsers, and GUIs.
Edit: Coffins in LaTeX3 look relevant.
I just read Scott Hanselman's post on Guided View Technology in comics, and I thought this would be awesome if implemented in other avenues (specifically in manga).
I mean, reading right to left can in itself be a little weird to start with, and this would lower the barrier to entry for new readers.
I was wondering if there is an open source project out there in the wild, or if not, a way to get started with something like this, as I am not an image processing guru. In particular, I really just need to figure out which lines are panels and where to slice into smaller pictures. Because comics all have their own preferences as far as line thickness goes, I'm not sure there is a simple way to do this that works across many different border thicknesses and styles. Language doesn't matter so much here; I'm really asking about concepts and patterns of attack.
You can start by looking at the Duda-Hart implementation of the Hough transform for lines.
http://en.wikipedia.org/wiki/Hough_transform
The Hough algorithm will yield equations for straight lines. From that you can find intersections, identify rectangles, etc.
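A rough OpenCV sketch of the idea (my own illustration, not part of the original answer; the thresholds and file names are made up):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat page = cv::imread("comic_page.png", cv::IMREAD_GRAYSCALE);

    // Panel borders are usually dark lines on a light page; start from an edge map.
    cv::Mat edges;
    cv::Canny(page, edges, 50, 150);

    // Detect long, mostly straight segments; tune the thresholds per scan resolution.
    std::vector<cv::Vec4i> segments;
    cv::HoughLinesP(edges, segments, 1, CV_PI / 180, 100,
                    /*minLineLength=*/page.cols / 4, /*maxLineGap=*/10);

    // Draw the candidate borders; intersections of these segments suggest panel corners.
    for (const cv::Vec4i& s : segments)
        cv::line(page, {s[0], s[1]}, {s[2], s[3]}, cv::Scalar(0), 2);

    cv::imwrite("panel_candidates.png", page);
    return 0;
}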
You can also use a kernel-based corner detection to find T-, L-, and X-intersections.
http://en.wikipedia.org/wiki/Corner_detection
One difficulty is that some panels in comics won't have "hard" edges, or may have edges that are squiggly, circular/elliptical, French curvy, etc. You can find particular algorithms for particular problems, but it would be hard to generalize these algorithms in a set of rules and programmatic logic that will work for all (or even most) samples. It seems that a hallmark of a good comic could be considered to be the elegant and sometimes surprising panelization, "surprising" being a synonym for unpredictable. Although there are many methods to "segment" an image into different regions, this is still an active area of research.
But if you start with Hough lines you'll have a good start and learn a lot about image processing.
I'm trying to find an efficient way of acceptable complexity to:
detect an object in an image so I can isolate it from its surroundings
segment that object into its sub-parts and label them so I can then fetch them at will
It's been 3 weeks since I entered the image processing world, and I've read about so many algorithms (SIFT, snakes, more snakes, Fourier-related, etc.) and heuristics that I don't know where to start or which one is "best" for what I'm trying to achieve. Bearing in mind that the image dataset of interest is a pretty large one, I don't even know whether I should use an algorithm implemented in OpenCV or implement one of my own.
To summarize:
Which methodology should I focus on? Why?
Should I use OpenCV for that kind of stuff or is there some other 'better' alternative?
Thank you in advance.
EDIT -- More info regarding the datasets
Each dataset consists of 80K images of products sharing the same:
concept, e.g. t-shirts, watches, shoes
size
orientation (90% of them)
background (95% of them)
All the pictures in each dataset look almost identical apart from the product itself, apparently. To make things a little more clear, let's consider only the 'watch dataset':
All the pictures in the set look almost exactly like this:
(again, apart from the watch itself). I want to extract the strap and the dial. The thing is that there are lots of different watch styles and therefore shapes. From what I've read so far, I think I need a template algorithm that allows bending and stretching so as to be able to match straps and dials of different styles.
Instead of creating three distinct templates (upper part of strap, lower part of strap, dial), it would be reasonable to create only one and segment it into 3 parts. That way, I would be confident enough that each part was detected in the intended position relative to the others, e.g. the dial would not be detected below the lower part of the strap.
From all the algorithms/methodologies I've encountered, active shape/appearance models seem to be the most promising ones. Unfortunately, I haven't managed to find a decent implementation, and I'm not confident enough that that's the best approach to go ahead and write one myself.
If anyone could point out what I should be really looking for (algorithm/heuristic/library/etc.), I would be more than grateful. If again you think my description was a bit vague, feel free to ask for a more detailed one.
From what you've said, here are a few things that pop up at first glance:
The simplest thing to do is to binarize the image and do connected components using OpenCV or the cvBlob library. For simple images with a non-complex background this usually yields the objects.
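A minimal sketch of that first suggestion with OpenCV (the file name and area threshold are placeholders):

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("watch.png", cv::IMREAD_GRAYSCALE);

    // Otsu picks a global threshold automatically; works best on a plain background.
    cv::Mat binary;
    cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

    // Label connected foreground blobs and keep their statistics (bounding box, area).
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(binary, labels, stats, centroids);

    for (int i = 1; i < n; ++i)                          // label 0 is the background
        if (stats.at<int>(i, cv::CC_STAT_AREA) > 500)    // ignore small noise blobs
            std::cout << "blob " << i << ", area "
                      << stats.at<int>(i, cv::CC_STAT_AREA) << "\n";
    return 0;
}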
However, looking at your sample image, texture-based segmentation techniques may work better - the watch dial, the straps, and the background vary widely in texture/roughness, and this could be an ideal way to separate them.
The roughness of a region can easily be found with the Eigen transform (explained a bit on SO; check the link to the research paper provided there), and the Mean Shift filter can then be applied to the output of the Eigen transform. This gives regions clearly separated according to texture. Both the pyramidal Mean Shift and finding eigenvalues by SVD are implemented in OpenCV, so unless you can optimize your own code it's better (and easier) to use the built-in functions (where present) as far as speed and efficiency are concerned.
I think I would turn the problem around. Instead of hunting for the dial, I would use a set of robust features from the watch to 'stitch' the target image onto a template. The first watch has a set of squares in the dial that are white, the second watch has a number of white circles. Per type of watch, I would:
Segment out the squares or circles in the dial. Segmentation steps can be tricky as they are usually both scale and light dependent
Estimate the centers or corners of the above found feature areas. These are the new feature points.
Use the Hungarian algorithm to match features between the template watch and the target watch. Alternatively, one can take the surroundings of each feature point in the original image and match these using cross correlation
Use matching features between the template and the target to estimate scaling, rotation and translation
Stitch the image
As the image is now in a known frame, one can extract the regions simply via preset coordinates.
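A hedged sketch of steps 4-6 (OpenCV 3.2+; the matched point coordinates and the crop rectangle are made up for illustration):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat target = cv::imread("target_watch.png");

    // Matched feature points: dial markings in the target vs. the template.
    std::vector<cv::Point2f> targetPts   = {{120, 80}, {300, 85}, {210, 400}};
    std::vector<cv::Point2f> templatePts = {{100, 60}, {280, 60}, {190, 380}};

    // Scaling + rotation + translation (4 degrees of freedom), robust to bad matches.
    cv::Mat M = cv::estimateAffinePartial2D(targetPts, templatePts,
                                            cv::noArray(), cv::RANSAC);

    // "Stitch" the target into the template's coordinate frame.
    cv::Mat aligned;
    cv::warpAffine(target, aligned, M, target.size());

    // Now the strap and dial can be cropped via preset rectangles.
    cv::Mat dial = aligned(cv::Rect(80, 40, 240, 240)).clone();
    return 0;
}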
Can anyone clarify if the GDI StretchBlt function for the workstation Win32 API performs bilinear interpolation for scaling to both larger and smaller images for 24/32-bit color images? And if not, is there a GDI (not GDI+) function that does this?
The SetStretchBltMode function has a HALFTONE setting, which is documented as follows:
HALFTONE
Maps pixels from the source rectangle into blocks of pixels in the destination rectangle. The average color over the destination block of pixels approximates the color of the source pixels.
I've seen references (see follow-up to first answer) that this performs bilinear interpolation when scaling down an image, but no clear answer of what happens when scaling up.
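For reference, the call pattern in question looks roughly like this (my own sketch; MSDN does say the brush origin must be set after switching to HALFTONE):

#include <windows.h>

void StretchWithHalftone(HDC dest, HDC src, int dw, int dh, int sw, int sh)
{
    int oldMode = SetStretchBltMode(dest, HALFTONE);
    SetBrushOrgEx(dest, 0, 0, NULL);        // required after HALFTONE, per MSDN

    StretchBlt(dest, 0, 0, dw, dh,
               src,  0, 0, sw, sh, SRCCOPY);

    SetStretchBltMode(dest, oldMode);
}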
I have noticed that the Windows Mobile CE SDK does support a BILINEAR flag, which is documented as exactly the opposite of the HALFTONE comments (it only works for scaling up).
Note that for the scope of this question, I'm not interested in pursuing GDI+ (which has numerous interpolation options), OpenGL, DirectX, etc. as alternatives, so please don't bother with follow-ups regarding these other APIs or alternate image libraries.
What I'm really hoping to find is some definitive MS/MSDN or other high-quality documentation that clearly documents this behavior of the Win32 (desktop) GDI behavior.
Meanwhile, I'll try some experiments comparing GDI vs. Direct2D (which does have an explicit flag to control this) and post my findings.
Thanks!
I've been looking into this same problem for the past couple of weeks.
As far as I can tell, there does not exist any definitive documentation on this behaviour from Microsoft.
However, I've run some tests myself, to try and establish the degree to which StretchBlt can be trusted to perform consistently with respect to up- and down-scaling images in halftone mode.
My findings are:
1) StretchBlt does produce adequate quality up- and down-scaled images. It might be a touch below Photoshop quality, but probably OK for most practical purposes.
2) It seems to depend upon hardware acceleration, whenever it's available. I haven't been able to confirm this, but I have a slight fear that this may lead to different outputs on different types of hardware. However, on the 5 or 6 different systems I've tried it on, old and new, the performance has been consistent and fast.
3) If you use the call on a 16-bit color device, or lower, StretchBlt will automatically dither your image. If you run it on a 24-bit color device, it will not dither.
4) If you use it to scale small images (smaller than 150x150px), it will randomly fall back to nearest neighbour interpolation. This can be remedied in your own software, by padding the bitmap before scaling, doing StretchBlt on it, and then removing the padding afterwards. Kind of a hack, but it works.
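A sketch of the workaround in point 4 (the 150px threshold is just the observation above, and edge handling is ignored for brevity):

#include <windows.h>

// srcDC holds a small bitmap (sw x sh); scale it to dw x dh into destDC at (0,0).
void StretchSmallImage(HDC destDC, HDC srcDC, int sw, int sh, int dw, int dh)
{
    const int pad = 150;                               // observed fallback threshold
    int padX = MulDiv(pad, dw, sw);                    // padding size after scaling
    int padY = MulDiv(pad, dh, sh);

    // 1) Copy the source into a larger, padded bitmap.
    HDC padDC = CreateCompatibleDC(srcDC);
    HBITMAP padBmp = CreateCompatibleBitmap(srcDC, sw + 2 * pad, sh + 2 * pad);
    HGDIOBJ oldPad = SelectObject(padDC, padBmp);
    BitBlt(padDC, pad, pad, sw, sh, srcDC, 0, 0, SRCCOPY);

    // 2) StretchBlt the padded bitmap into an intermediate of the scaled, padded size.
    HDC bigDC = CreateCompatibleDC(srcDC);
    HBITMAP bigBmp = CreateCompatibleBitmap(srcDC, dw + 2 * padX, dh + 2 * padY);
    HGDIOBJ oldBig = SelectObject(bigDC, bigBmp);
    SetStretchBltMode(bigDC, HALFTONE);
    SetBrushOrgEx(bigDC, 0, 0, NULL);
    StretchBlt(bigDC, 0, 0, dw + 2 * padX, dh + 2 * padY,
               padDC, 0, 0, sw + 2 * pad, sh + 2 * pad, SRCCOPY);

    // 3) Remove the padding again by copying only the center region out.
    BitBlt(destDC, 0, 0, dw, dh, bigDC, padX, padY, SRCCOPY);

    SelectObject(padDC, oldPad);  DeleteObject(padBmp);  DeleteDC(padDC);
    SelectObject(bigDC, oldBig);  DeleteObject(bigBmp);  DeleteDC(bigDC);
}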
HALFTONE mode performs a very blocky halftone dithering on the image, based on varying the conversion thresholds over a defined square. I have never seen a situation where it would be considered the best choice.
COLORONCOLOR is the best mode for color images, but as you've seen it doesn't give great results.
GDI does not support a bilinear mode (except in Windows Mobile CE as you discovered). The naive implementation of bilinear does not do very well when shrinking an image, as it simply tries to interpolate between two adjacent input pixels without trying to draw from a larger area.
In a multi-touch environment, how does gesture recognition work? What mathematical methods or algorithms are utilized to recognize or reject data for possible gestures?
I've created some retro-reflective gloves and an IR LED array, coupled with a Wii remote. The Wii remote does internal blob detection and tracks 4 points of IR light and transmits this information to my computer via a bluetooth dongle.
This is based off Johnny Chung Lee's Wii Research. My precise setup is exactly like that of the graduate students from the Netherlands displayed here. I can easily track 4 points' positions in 2D space, and I've written my basic software to receive and visualize these points.
The Netherlands students have gotten a lot of functionality out of their basic pinch-click recognition. I'd like to take it a step further if I could, and implement some other gestures.
How is gesture recognition usually implemented? Beyond anything trivial, how could I write software to recognize and identify a variety of gestures: various swipes, circular movements, letter tracing, etc.?
Gesture recognition, as I've seen it anyway, is usually implemented using machine learning techniques similar to image recognition software. Here's a cool project on CodeProject about doing mouse gesture recognition in C#. I'm sure the concepts are quite similar, since you can likely reduce the problem down to 2D space. If you get something working with this, I'd love to see it. Great project idea!
One way to look at it is as a compression / recognition problem. Basically, you want to take a whole bunch of data, throw out most of it, and categorize the remainder. If I were doing this (from scratch) I'd probably proceed as follows:
work with a rolling history window
take the center of gravity of the four points in the start frame, save it, and subtract it out of all the positions in all frames.
factor each frame into two components: the shape of the constellation and the movement of its CofG relative to the last frame's.
save the absolute CofG for the last frame too
the series of CofG changes gives you swipes, waves, etc.
the series of constellation morphing gives you pinches, etc.
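A rough sketch (mine, not the answerer's code) of the per-frame decomposition described above:

#include <vector>

struct Pt { float x, y; };

Pt CenterOfGravity(const std::vector<Pt>& pts)
{
    Pt c{0, 0};
    for (const Pt& p : pts) { c.x += p.x; c.y += p.y; }
    c.x /= pts.size(); c.y /= pts.size();
    return c;
}

struct FrameFeatures
{
    std::vector<Pt> shape;   // points relative to this frame's CofG (the constellation)
    Pt cogDelta;             // CofG movement since the previous frame (swipes, waves)
};

FrameFeatures Decompose(const std::vector<Pt>& frame, const Pt& prevCog)
{
    Pt cog = CenterOfGravity(frame);
    FrameFeatures f;
    for (const Pt& p : frame)
        f.shape.push_back({p.x - cog.x, p.y - cog.y});
    f.cogDelta = {cog.x - prevCog.x, cog.y - prevCog.y};
    return f;
}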
After seeing your photo (two points on each hand, not four points on one, doh!) I'd modify the above as follows:
Do the CofG calculation on pairs, with the caveats that:
If there are four points visible, pairs are chosen to minimize the product of the intrapair distances
If there are three points visible, the closest two are one pair, the other one is the other
Use prior / following frames to override when needed
Instead of a constellation, you've got a nested structure of distance / orientation pairs (i.e., one D/O between the hands, and one more for each hand).
Pass the full reduced data to recognizers for each gesture, and let them sort out what they care about.
If you want to get cute, do a little DSL to recognize the patterns, and write things like:
fire when
in frame.final: rectangle(points)
and
over frames.final(5): points.all (p => p.jerk)
or
fire when
over frames.final(3): hands.all (h => h.click)
Here's a video of what has been done with this sort of technology, if anyone is interested:
Pattie Maes demos the Sixth Sense - TED 2009
Most of the simple gesture-recognition tools I've looked at use a vector-based template to recognize gestures. For example, you can define right-swipe as "0", a checkmark as "-45, 45, 45", a clockwise circle as "0, -45, -90, -135, 180, 135, 90, 45, 0", and so on.
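A small sketch of that idea (my own illustration; the tolerance is arbitrary): quantize the stroke into per-segment directions and compare them against the stored templates.

#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { float x, y; };

// Direction of each segment in degrees, atan2 convention (0 = +x axis).
std::vector<float> StrokeDirections(const std::vector<Pt>& stroke)
{
    std::vector<float> dirs;
    for (std::size_t i = 1; i < stroke.size(); ++i)
        dirs.push_back(std::atan2(stroke[i].y - stroke[i - 1].y,
                                  stroke[i].x - stroke[i - 1].x) * 180.0f / 3.14159265f);
    return dirs;
}

// Crude match: same number of segments and every direction within a tolerance.
// A real matcher would also handle the -180/180 wrap-around and resample the stroke.
bool Matches(const std::vector<float>& dirs, const std::vector<float>& tmpl,
             float tolDegrees = 30.0f)
{
    if (dirs.size() != tmpl.size()) return false;
    for (std::size_t i = 0; i < dirs.size(); ++i)
        if (std::fabs(dirs[i] - tmpl[i]) > tolDegrees) return false;
    return true;
}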
Err.. I've been working on gesture recognition for the past year or so now, but I don't want to say too much because I'm trying to patent my technology :) But... we've had some luck with adaptive boosting, although what you're doing looks fundamentally different. You only have 4 points of data to process, so I don't think you really need to "reduce" anything.
What I would investigate is how programs like Flash turn a freehand-drawn circle into an actual circle. It seems like you could track the points for a duration of about a second, then "smooth" the path in some fashion, and then you could probably get away with hardcoding your gestures (if you make them simple enough). Otherwise, yes, you're going to want to use a learning algorithm. Neural nets might work... I don't know. Just tossing out ideas :) Maybe look at how OCR is done too... or even Hough transforms. It looks to me like this is a problem of recognizing shapes more than it is of recognizing gestures.
I'm not very well versed in this type of mathematics, but I have read somewhere that people sometimes use Markov Chains or Hidden Markov Models to do Gesture Recognition.
Perhaps someone with a little more background in this side of Computer Science can illuminate it further and provide some more details.