SVG lines disappearing on scaling down - Firefox

I am using SVG line elements with a stroke-width of 1.5 to connect other elements. This all looks fine, but if the image is dynamically scaled down to show more of the diagram then some of the lines eventually disappear in Firefox but not in Chrome. Is this a problem with Firefox's SVG support, or some difference in default settings that I can change?
Rather than present any code, I can point to an example in a blog post at: https://parallax-viewpoint.blogspot.com/2018/12/further-travels-of-walking-boots.html. Scroll to the bottom of the post and there's an embedded SVG "family tree" image that uses a JavaScript pan-zoom library to dynamically scale the diagram.
At the default zoom level in Firefox, not all lines show -- probably depending on whether or not they fall on a pixel boundary. As the image is gradually scaled up, you will notice that the lines eventually all show. Contrast this with Chrome, where the lines always appear to show.
New information from Firefox support: there is a pronounced "redraw" fluctuation for all transformations (+ or -) from the initial position. The lines are markedly thinner after the redraw, and this suggests some sort of rounding error is occurring.

The suggestion of using double lines (one with non-scaling-stroke and one without) runs into a few issues. If the line elements have a stroke-opacity of less than 1 then the superposition shows through. Also, with SVG 1.1, it does not appear to be possible to encapsulate the drawing of two lines in a single group/symbol if the line lengths are related but not equal. Unequal lengths are necessary to prevent corner gaps where orthogonal lines don't reach fully into the corners. Using calc() to adjust one length relative to a 100% length doesn't work with line elements because they use pairs of coordinates rather than lengths. I tried using a rect in place of a line, but Firefox did not support CSS control over its width/height, so calc() couldn't be used there either.
The solution was simply to use opaque lines (stroke-opacity=1.0), suggesting that the disappearance of the lines was due to Firefox's handling of stroke-opacity < 1.0. It may be the case that lines will eventually disappear at minute scales, but the reported problem was specifically that Firefox lost them much earlier than it should, and compared the situation with Chrome (which worked OK).

Related

DirectWrite rendering issues and metric scaling inaccuracy

I have DirectWrite set up to render single glyphs and then shape them programmatically based on the glyph run and font metrics. (Due to my use case, I can't cache every full string as an OpenGL texture, otherwise it's essentially a memory leak. So we store each glyph in its own texture and lay the text out glyph by glyph.)
However, I have two issues:
Inconsistent rendering results.
Scaling metrics leads to inconsistent distances between glyphs.
These results are transferred to a bitmap using Direct2D and a WIC bitmap render target (CreateWicBitmapRenderTarget).
Let's look at an example, font size 12 with Segoe UI.
The 1st line is the full string rendered with DrawTextLayout using D2D1_DRAW_TEXT_OPTIONS_ENABLE_COLOR_FONT. The 2nd line is drawn glyph by glyph with DrawGlyphRun using DWRITE_MEASURING_MODE_NATURAL. The 3rd line is rendered with paint.net, just for reference.
This leads to the second issue, the distance between each letter can be off. I am not sure if this is a symptom of the previous issue. You can see the distance between s and P is now 2 pixels when drawn separately. Because i is no longer 3 pixels wide, it visually looks too close to c now when zoomed out. p and e look too close.
I have checked the metrics, and I am receiving the right metrics from the font from shaping. The advances DirectWrite reports for the string above are: [1088.0, 1204.0, 1071.0, 946.0, 496.0, 1071.0, 869.0]. I am comparing the output with HarfBuzz: [S=0+1088|p=1+1204|e=2+1071|c=3+946|i=4+496|e=5+1071|s=6+869], which matches.
To convert to DIP I am using this formula for the ratio multiplier: (size * dpi) / 72 / metrics.designUnitsPerEm
So with a default DPI of 96 and default size of 12 we get the following ratio: 0.0078125.
Let's look at S, which is 1088 design units. The advance should be 1088 * 0.0078125 = 8.5. Since we can't draw at half a pixel, which way do we go? I tried every value from the lsb, to the advance, to the render offset, in every combination of flooring, ceiling, rounding, and converting to int. Whichever way I choose, even if it fixes one situation, when I test with another font or another size it will be one or two pixels too close in another string. I just can't seem to find a proper balance that is universal.
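I can't say whether this is what DirectWrite itself does, but one common way to avoid the "which way do we round?" trap is to never round the running pen position, only the position actually handed to the draw call. A rough Python sketch (function names are mine, not pyglet's):
import math

def design_units_to_dips(units, font_size, dpi, units_per_em):
    # Same ratio as above: (size * dpi) / 72 / designUnitsPerEm
    return units * (font_size * dpi) / 72.0 / units_per_em

def layout_x_positions(advances_in_design_units, font_size, units_per_em, dpi=96.0):
    # Keep the pen position in floating point so rounding error cannot
    # accumulate from glyph to glyph; snap to a pixel only when drawing.
    pen_x = 0.0
    positions = []
    for adv in advances_in_design_units:
        positions.append(int(math.floor(pen_x + 0.5)))
        pen_x += design_units_to_dips(adv, font_size, dpi, units_per_em)
    return positions

# Advances from the question (Segoe UI at size 12; 2048 design units per em,
# which is what the 0.0078125 ratio implies):
print(layout_x_positions([1088, 1204, 1071, 946, 496, 1071, 869], 12, 2048))
The individual gaps still alternate between, say, 8 and 9 pixels, but the total width stays correct and no glyph drifts progressively closer to its neighbour.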
I am not really sure where to go from here. Any suggestions are appreciated. Here is the code: https://github.com/pyglet/pyglet/blob/master/pyglet/font/directwrite.py#L1736
EDIT: After a suggestion to call DrawGlyphRun with the full run, the output does appear exactly like the DrawTextLayout output. So DrawGlyphRun can produce the same appearance. Here's where it gets interesting:
Line1: DrawTextLayout
Line2: Single glyphs drawn by DrawGlyphRun
Line3: All glyphs drawn using DrawGlyphRun
You can see something interesting. If I render each 'c' by itself (right side), you can see that it has 4 pixels on the left of the character. But in the strings those pixels look like they're missing. Actually, taking a deeper look with a color picker, the color is indeed there, but it's much darker. So somehow each glyph is affecting the blend of the pixels around it. I am not really sure how it's doing this.
EDIT2: Talking this over with someone else, I think we narrowed it down to anti-aliasing. Applying the anti-aliasing to the whole string vs. each character produces a different result. With D2D1_TEXT_ANTIALIAS_MODE_ALIASED set, each character now looks exactly the same whether drawn individually or as part of the string.

MacOS CALayer Character Spacing

I have a problem which I hope you can help me solve.
I'm creating a program using Xamarin.Mac (C# for Mac) and I need to draw a DNA sequence (ATGC and so on). However, I need to know the exact position of each character so I can draw several other objects which should be aligned with the characters in DNA sequence.
Screenshot of the Windows version of my app which illustrates the behavior I'm looking for:
Currently I'm looking at using the CALayer drawing method, which appears to be fast enough to render 12 lines of 70 characters in less than 50 ms. Rendering 1000 CATextLayers with one (A/T/G/C) character each is not fast enough, so (I think) I need to render them as whole lines with specific spacing. This means that I need to have exactly 10 (for example) pixels between the center of each character.
However, I cannot find a way to do this.
The NSAttributedString kerning seems to be added to an unknown existing tracking (or spacing) of the font, so it may be usable with monospace fonts but still results in an unknown actual spacing.
I CAN get around the issue by trial and error until the letter spacing appears to match the desired spacing, but I'm not very confident in the robustness of this method across different devices (screen resolutions). It also requires that I use a monospace font, which is okay, but not optimal.
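For what it's worth, with a monospace font the kerning value for an exact spacing can also be computed rather than found by trial and error: measure the font's natural per-character advance once (e.g. from the width of a long test string) and set the kern attribute to the difference. A rough sketch of the arithmetic only (plain Python, not Xamarin.Mac API calls; the numbers are made up):
def required_kern(desired_spacing, measured_width, char_count):
    # The kern attribute is added on top of each glyph's natural advance,
    # so the value needed is simply the difference.
    natural_advance = measured_width / char_count
    return desired_spacing - natural_advance

# e.g. a 100-character test string measures 840.0 points wide and we want
# exactly 10.0 points between character centers:
print(required_kern(10.0, 840.0, 100))   # -> 1.6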
Is it possible to get a specific character spacing using a single CATextLayer, and what are my options if not? Is it possible to have 1000 characters drawn individually without a huge performance impact?
Thank you.

Rectangle detection in image

I'd like to program the detection of a rectangular sheet of paper whose sides don't absolutely need to be perfectly straight, as I may take the picture of it "in the air", which means the individual sides of the paper might be distorted a bit.
The app CamScanner (iOS and Android) does this very well, and I'm wondering how it might be implemented. My first thought was:
smoothing / noise reduction
Edge detection (canny etc) OR thresholding (global / adaptive)
Hough Transformation
Detecting lines (only vertically / horizontally allowed)
Calculate the intersection points of the 4 detected lines
But this gives me a lot of problems with different types of images.
I'm wondering if there's a better approach that directly detects a rectangle-like shape in an image, and if so, whether CamScanner implements it that way as well.
Here are some images taken in CamScanner.
These are detected quite nicely, even though in a) one side is distorted (the corner is still shown in the overlay but doesn't really fit the corner of the white paper) and in b) the background is pretty close to the actual paper colour, yet it still gets recognized correctly:
It even gets the rotated pictures correctly:
And when I introduce some test errors, it fails but at least detects some of the contour, and always tries to fit a rectangle:
And here it fails completely:
I suppose that in the last three examples, if it used a Hough transform, it could have detected at least two of the four sides of the rectangle.
Any ideas and tips?
Thanks a lot in advance
The OpenCV framework may help with your problem. You can also look at this document for the Android platform.
The full source code is available on Github.
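To sketch what the OpenCV route could look like in Python (the usual blur / Canny / contour / approxPolyDP recipe, written against OpenCV 4.x; the thresholds are only starting points and this is certainly not CamScanner's exact algorithm):
import cv2

def find_paper_quad(image_bgr):
    # Return the 4 corner points of the largest roughly-rectangular contour,
    # or None if nothing convincing is found.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)            # smoothing / noise reduction
    edges = cv2.Canny(gray, 50, 150)                    # edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    best, best_area = None, 0
    for c in contours:
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)  # tolerates slightly bent sides
        if len(approx) == 4 and cv2.isContourConvex(approx):
            area = cv2.contourArea(approx)
            if area > best_area:
                best, best_area = approx, area
    return best

quad = find_paper_quad(cv2.imread("scan.jpg"))           # path is a placeholder
The approxPolyDP tolerance is what lets a slightly curved or distorted paper edge still collapse to a 4-point polygon, which matches the "not perfectly straight sides" requirement better than insisting on clean Hough lines.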

In Win7 some fonts don't work like they did in Win2K/XP

My question is about how font handling needs to be changed in order to work correctly under Windows 7. I'm sure that I've made an assumption about something that was valid before, but is no longer valid. But I don't even know where to begin looking! I'm praying someone can help! Here are the details as I understand them (I've also posted this question on a Microsoft Windows Developers forum, but they're not answering):
Yes, I'm behind the times (heck, I still write WIN32 code in plain C!) I have a 10 yr old DLL I wrote that mimics an even older DOS screen I/O library within the client area of a window. Needless to say, it only allows the use of fixed-width fonts. When some of the programs using the DLL have been moved to Windows 7, there is a strange flickering that appears when a fixed-width TRUE TYPE font is used (bitmap fonts still work perfectly.) We've tracked the problem down to the fact that a single character written with ExtTextOut is wider than it should be. I've checked the measurements three different ways (by using GetTextExtentPoint32 on a 132 character string and dividing by 132, by calling GetTextMetrics and even by using GetCharABCWidths for all 256 characters) and they all agree on the font width. But ExtTextOut is rendering the background rectangle one or two pixels wider than the font width. Either that, or it is beginning the background rendering a pixel or two to the left of the position given in the parameters [I call it like this: ExtTextOut( hdc, r.left, r.top, ETO_OPAQUE, &r, &ch, 1, NULL ).] And remember, this EXACT code worked perfectly under Windows 2000, Windows XP and, with bitmap fonts, on Windows 7 -- but it no longer works correctly with fixed-width true type fonts under Windows 7.
For anyone who isn't grasping what I need to do: try to imagine writing one character per square on a piece of graph paper. Every square uses the same font, but may have a different foreground and/or background color. I use TA_TOP|TA_LEFT text alignment, because it is the simplest and any consistently applied alignment should work for a fixed-width font.
What I'm seeing is that ExtTextOut is emitting a larger background rectangle than I've specified in the RECT * parameter. Since the rectangle I'm providing is created from the reported size of the font, this should NEVER happen -- and it never happened on Windows XP and earlier, and doesn't happen with bitmap (i.e. .FON) fonts under Windows 7, either. But it ALWAYS happens with fixed-width TrueType fonts under Windows 7. This is with the EXACT SAME EXECUTABLE running on Windows 2000, Windows XP and Windows 7 (32 & 64.) While I would love to simply say Windows 7 has a bug, I'm more inclined to believe that some fundamental assumption that I've made about font handling under Windows is no longer true (after 20 years of writing software for Windows.)
But I have no idea how or where to discover what that might be! Please, PLEASE help me!
--- amendment ---
For anyone interested, I've managed to work around what I am considering a bug -- until I find documentation to the contrary. My workaround consists of two changes to my library:
Use the size returned from GetTextExtentPoint32() of an 'X' instead of data from TEXTMETRICS.
Include the ETO_CLIPPING flag in all ExtTextOut() calls.
Previously, I was using tmHeight+tmExternalLeading for the number of pixels between the tops of consecutive rows of text, as is documented. I discovered that the size.cy value coming back from the GetTextExtentPoint32() wasn't the same and seemed more accurate. The worst example I found was the OCRB true type font. Here's what I saw in the debugger for the OCRB font I'd created (using the system font selection dialog):
ocrbtm.tmHeight = 11
ocrbtm.tmExternalLeading = 7
ocrbsize.cy = 11
So, for some reason that I've yet to discover, Windows is ignoring the external leading value defined for the OCRB font. Using the size value instead of the TM results in nice, neat, close packed text, which is just what I wanted.
The ETO_CLIPPING flag should not be necessary for me because I am setting the rectangle to exactly the dimensions of a single character and using ETO_OPAQUE to fill in the background (and overwrite the previous cell content.) But without the clipping flag, a single character is wider than either the size, text metric, or ABC width would indicate -- at least, that is true based on all of the documentation I've found so far.
I believe that HEIGHT issue has existed for a long time, but the rest was unnecessary until we ran our software under Windows 7. I'm appending this to my question to see if anyone can explain what I obviously don't understand.
-- amendment 2 --
1: All documentation I can find says that tmHeight+tmExternalLeading should produce single spaced lines of text. Period. But that is not always true and I cannot find documentation indicating how Windows determines the different values that are sometimes returned by GetTextExtentPoint32().
2: Under Win7 (maybe Vista), ExtTextOut started filling in a little more background than it should (adding a couple of extra pixels to the right), but only when a TrueType font is selected. It does this even if the rectangle is double the expected size of the character (in BOTH dimensions.) DPI/scaling might be a factor, but since my system is set to 100%, it would seem that Windows is having trouble with a 1:1 scaling factor, and that would seem to be a bug. The fact that it only affects TrueType and not bitmap (.FON) fonts also seems to rule out scaling (unless there is a bug in the scaling system), since Windows should attempt to scale all of the text, not just some of it. Also, there's a greyed (but checked) setting "Use Windows XP style DPI scaling" in the "Custom DPI Setting" dialog. Lastly, this entire issue may be a result of my running under the Windows Classic theme instead of one of the Aero or other Win7 native themes.
-- amendment 3 --
Simply calling SetProcessDPIAware() has no effect on the issue I'm having. Since my problem exists at the 100% DPI setting (scale 1:1), if my problem is DPI-related, then I must have discovered a bug in the DPI virtualization because this is how Microsoft describes the feature:
This feature works by providing "virtualized" system metrics and UI elements to the application, as if it were running at 96 DPI. The application then renders to a 96-DPI off-screen surface, and the Desktop Windows Manager scales the resulting application window to match the DPI setting.
All of my settings show that I'm at 100% scaling, and looking in the custom settings box clearly shows that means 96 DPI. So, if the DPI virtualization from 96 DPI to 96 DPI is not working for my fixed-width true type fonts, then Windows has a problem, right? Or is there some function I need to call (or stop calling?) in order allow the DPI virtualizer to work correctly?
I'm still not convinced that the supposed scaling issue actually has as much to do with the font SIZE as I originally thought. That's because the problem is manifesting in the background rectangle being filled by ExtTextOut() instead of the text character being emitted. The background rectangle gets enlarged a bit when the font is true type. I've also now verified that this problem occurs whether using the Windows Classic theme or the standard Windows Aero theme. Now to build a simplified example so others can experiment with it.
-- amendment 4 --
I've created a minimal demo program that shows what I'm seeing (and what I'm doing.) The Visual Studio 2010 project/source may be downloaded from http://www.svalli.com/files/fwtt.7z -- I intentionally didn't include executables because I don't want to risk spreading malware. The program has you choose a fixed-width font and then writes two 5x5 character grids to the client area, one created using the GetTextExtentPoint32 size and one using the TEXTMETRIC size as documented by Microsoft. The grids are in a black&white checkerboard pattern with a yellow on red character written last into the center to show the overlap effect (you may need a zoom utility to see it clearly.) The program also draws a string that starts with 5 X's just below the grid, starting at the same left offset, to be used as a comparison for my method of placing individual characters (I match the string.) The menu allows toggling clipping on/off in ExtTextOut and selection of other fonts. There is also a command line option dpiaware (case-sensitive) that causes the program to call SetProcessDPIAware() when it starts up, so that the effect of that call may also be evaluated.
From creating this I've learned that ExtTextOut is filling the correct background rectangle, but the character being rendered with an opaque background may be wider than it should be and may not even begin where ExtTextOut was told to begin drawing! I said "should be" because the character spacing I'm ending up with matches what I get when I have ExtTextOut render a whole string. The overlap may apparently be on either or both sides of the given rectangle, for example, OCRB adds an extra pixel to both the left and right sides of the character cell while the other true type fonts I've checked add two pixels to the right edge.
I really want to do this the "right" way, but I cannot find any documentation that shows what I'm doing wrong or am missing. Well, I am probably missing something for DPI Aware at scales other than 100%, but otherwise, I'm just baffled.
-- amendment 5 --
Slightly less baffled... the problem is caused by ClearType. Turning off ClearType made all of the fonts work again. Turning ON ClearType under XP causes the same problem. Apparently ClearType can silently (until someone tells me how to detect it) stretch characters horizontally by a couple pixels in order to make space for the shaded pixels it adds to smooth things out.
Is clipping the only way around this problem?
-- amendment 6 --
Partial answer to my clipping question above: When creating a new font I now do the following (in pseudo code):
CreateFontIndirect
SelectFont
GetTextMetrics
if( (tmPitchAndFamily & TMPF_TRUETYPE) && Win6.x or above )
if( SystemParametersInfo( SPI_GETCLEARTYPE ) )
lfQuality = NONANTIALIASED_QUALITY
DeleteObject( font )
CreateFontIndirect
Without enabling clipping this almost always works with the font sizes I'm using, though I've found a few that still render an extra pixel to the right (or left) of the character cell. Luckily, those appear to be free fonts found on the internet, so their overall quality might be below the standards of professional font foundries.
If anyone can find a better answer, I'd really, REALLY love to hear it! Until then, I think this is as good as it will get. Thanks for reading this far!
Make sure your code is high DPI aware, and then tell the OS that your process is DPI aware.
If you don't tell the OS that you're DPI aware, some of the measurement functions will lie and give you numbers based on the assumption that the display DPI is actually 96 dpi regardless of what it really is. Meanwhile, the drawing functions will try to scale in the other direction. For simple high-level drawing, this approach generally works (though it often leads to fuzzy text). For small measurements and precise placement of individual characters, this often results in round off problems that lead to things like inconsistent font sizes. This behavior was introduced in Windows Vista.
You can see it all the time in Visual Studio 2010+ as the syntax highlighter colors the text and words shift by a couple pixels here and there as you type. Really frickin' annoying.
Regarding the amendment:
tmExternalLeading is simply a recommendation from the font designer as to how much extra space to put between lines of text. The MSDN documentation typically calls it "the amount of extra leading (space) that the application adds between rows." Well, you're the application, so the "Right Thing To Do" is to add it between rows when you're drawing text yourself, but it really is up to you. (I suspect higher-level functions like DrawText will use it.)
It is perfectly correct for GetTextExtentPoint32 (and friends) to return a size.cy equal to tmHeight and to ignore tmExternalLeading. As the programmer, it's ultimately your choice how much leading to actually use.
You can see this with some simple drawing code. Select a font with a non-zero tmExternalLeading (Arial works for me). Draw some text using TextOut and a unique background color. Then measure the text with GetTextExtentPoint32 and draw some lines based on the values you get back. You'll see that the background color rectangle excludes the external leading. External leading is just that: external. It's not in the bounds of the character cell.
// Draw the sample text with an opaque background.
assert(::GetMapMode(ps.hdc) == MM_TEXT);
assert(::GetBkMode(ps.hdc) == OPAQUE);
assert(::GetTextAlign(ps.hdc) == TA_TOP);
COLORREF rgbOld = ::SetBkColor(ps.hdc, RGB(0xC0, 0xFF, 0xC0));
::TextOutW(ps.hdc, x, y, pszText, cchText);
::SetBkColor(ps.hdc, rgbOld);
// This vertical line at the right side of the text shows that opaque
// background is exactly the height returned by GetTextExtentPoint32.
SIZE size = {0};
if (::GetTextExtentPoint32W(ps.hdc, pszText, cchText, &size)) {
    ::MoveToEx(ps.hdc, x + size.cx, y, NULL);
    ::LineTo(ps.hdc, x + size.cx, y + size.cy);
}
// These horizontal lines show the normal line spacing, taking into
// account tmExternalLeading.
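// (tm is a TEXTMETRIC filled in earlier by GetTextMetrics for the selected font.)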
assert(tm.tmExternalLeading > 0); // ensure it's an interesting case
::MoveToEx(ps.hdc, x, y, NULL);
::LineTo(ps.hdc, x + size.cx, y); // top of this line
const int yNext = y + tm.tmHeight + tm.tmExternalLeading;
::MoveToEx(ps.hdc, x, yNext, NULL);
::LineTo(ps.hdc, x + size.cx, yNext); // top of next line
The gap between the bottom of the colored rectangle and the top of the next line represents the external leading, which is always outside the character cell.
OCR-B is designed for reliable optical character recognition in banking equipment. Having a large external leading (relative to the height of the actual text) may be appropriate for some OCR applications. For this particular font, it's probably not an aesthetic choice.

Detecting empty pages in scanned documents

So we need to detect whether an image, created by a scanner, represents an empty page. I'm way out of my depth when it comes to image processing, so I have to run this by the community.
Here's what I have come up with so far:
Empty pages can be glaringly white, gray recycled paper, or yellowed old paper. The current idea is to create a histogram for a page, look for a steep increase in the curve, and get the percentage of pixels that are darker than that point. If that percentage exceeds a threshold, the page is likely not empty.
Since this would likely classify a page containing a single line of text at the top as empty, we would tile the page and gather statistics about each tile.
We would need to detect scanned staples and punch holes from binding (likely only in certain tiles), but this can be put off to some later stage. However, if you have an idea what to look out for besides these two, please mention it in a comment.
This needs to be fast. It's part of a document processing workflow that processes (tens of) thousands of pages per day. If processing a page takes ten seconds longer, then our customers will have to tell their customers that they'll have to wait several days longer for their results. (If this results in more false positives, some customers would rather have someone check a few dozen detected "empty" pages than have their customers wait one more day.)
So here's my questions:
Is it a good idea to take this route or is there something better?
If we do it this way, how would I do this? What's a good (cheap) algorithm for finding a threshold for a page? Could we gain significant speed by assuming a similar threshold for a batch of documents? To which precision could brightness values be rounded, before getting logged? What quirks could we expect?
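To make the idea above concrete, here is roughly the kind of thing we have in mind in Python/numpy (the "steep increase" is simplified to a fixed offset below the histogram peak, and every number is a placeholder to be tuned):
import numpy as np

def tile_darkness_fractions(gray, tile=256, threshold=None):
    # gray: 2-D uint8 array. Returns, per tile, the fraction of pixels darker
    # than the threshold. With no threshold given, take one a fixed distance
    # below the histogram peak (the paper colour), which adapts to white,
    # grey recycled or yellowed paper.
    if threshold is None:
        hist = np.bincount(gray.ravel(), minlength=256)
        paper = hist.argmax()              # most common brightness = background
        threshold = paper - 40             # placeholder offset, tune per scanner
    h, w = gray.shape
    fractions = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            t = gray[y:y+tile, x:x+tile]
            fractions.append((t < threshold).mean())
    return fractions

def looks_empty(gray, tile_limit=0.01):
    return max(tile_darkness_fractions(gray)) < tile_limit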
If you know that a scanned page is going to fill the image entirely, then calculating the standard deviation might be a good way of doing this.
I would suggest blurring the page slightly to reduce some noise. Then calculate the SD for the page; in theory, a page that is more or less all one colour will have a low SD, and one with lots of text will have a higher SD. Then it's a case of 'training' the system to work out when a page is plain and when it has text. You might find that certain pages are hard for it to classify.
You could train it by having it process a vast number of pages: it goes through them all, and you say whether each is plain or not.
EDIT
OK, a white page with black text, if we have just the page and no surrounding material, will have a mean colour of grey, probably a fairly light grey. Getting the average is a for loop through all the pixels, adding their values and then dividing by the number of pixels. I'm not good with this O(log N) stuff, but suffice to say, it will not take that long, unless you have huge images.
The SD is a second for loop; this time we count up how different each pixel is from the mean, and then divide by the number of pixels. This will take a bit longer than the mean, as for each pixel we have to do something like
diff = thispixel - mean;
if(diff < 0) {
diff = -diff;
}
runningTotal += diff;
For a plain coloured page, each pixel will be close to the mean value, thus our SD will be low. If the SD is below a certain value, we can assume that this means the page is all one colour.
This might have problems if there is only a very minimal amount of text, as it will not have a large influence on the SD, so maybe, as you suggested in the question, break the page into sections. I suggest horizontal strips, as text tends to run that way. If we process these strips one at a time, then once one strip suggests it has text, we can stop, as we don't care whether the rest is blank or not.
Blurring the page will help reduce noise, as the odd pixel of noise will be reduced in its impact, thus give you a 'tighter' SD. You could also use it to reduce the resolution of your image.
Say your source image is 300 wide by 900 high; you could sample pixels in blocks of nine (3 x 3) and end up with an image that is 100 wide by 300 high. This can actually be used to reduce the amount of calculation you need to do, in this case to a ninth!
The main problem is going to be in working out how high an SD can be with just a plain page. Maybe have it find the SD of a load of blank pages.
By the sounds of it, you are probably going to want a middle ground that lets the system be unsure and ask for human intervention, possibly letting the human's decisions train the system to get better?
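Putting the pieces of this answer together (3x3 block averaging as the cheap blur/downsample, horizontal strips, early exit), a rough numpy sketch might look like this; the SD cut-off is the value you would have to train:
import numpy as np

def strip_has_text(gray, strip_height=32, sd_cutoff=12.0):
    # gray: 2-D uint8 array. Downsample by averaging 3x3 blocks, then check
    # the standard deviation of each horizontal strip; stop at the first
    # strip that looks like text.
    h, w = gray.shape
    h3, w3 = h - h % 3, w - w % 3
    small = gray[:h3, :w3].reshape(h3 // 3, 3, w3 // 3, 3).mean(axis=(1, 3))
    for y in range(0, small.shape[0], strip_height):
        strip = small[y:y + strip_height]
        if strip.std() > sd_cutoff:     # lots of variation => probably text
            return True
    return False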
Perform some sort of simple edge detection. If the number of pixels constituting edges is below some threshold, then there's going to be a high probability the page is empty. This could be improved by classifying certain edges that correspond with high certainty (by shape and location) to punched holes and staples as trivial and discounting them from the metric.
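A sketch of that metric using OpenCV's Canny detector in Python (the thresholds are arbitrary starting points, and the hole/staple masking is left out):
import cv2

def probably_empty(path, max_edge_fraction=0.002):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 80, 160)
    # Fraction of pixels that sit on an edge; punched holes and staples could
    # additionally be masked out by shape/position before counting.
    return (edges > 0).mean() < max_edge_fraction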
When I worked for a document processor (~8 years ago), we handled client projects varying from very "clean" only-US-letter-sized pages to cover-/cardstock of irregular shapes mixed with normal pages. Operators fed pre-sorted files into scanning machines and only had to watch for folded corners and similar mechanical problems. Their output was multiple streams of hundreds of images corresponding to a range of files. A single scanner operator could easily scan 15k pieces of paper in a shift (that's only 0.60 pages/sec, while a scanner at speed could handle 3 pages/sec and still scan both sides). Later operators processed those looking for key pages to mark file start and end. (Image recognition can be used here, sometimes, but people also provide a quality check on the first operators.) We had many variables that could be set per client project.
I'm basing the rough outline below on that experience and how it appears that your goals and workflow are similar.
(Terminology: By client I mean our client, e.g. a specific bank. A project or client project is a set of documents from that client that contains many files, e.g. all mortgages handled by a specific branch in a given year. A file is a logical arrangement that would normally be a physical file folder for one of the client's customers, e.g. all mortgage papers for one address.)
Cut off the top, bottom, sides, and corners. Throw these out of your calculations (even though you'll probably store them in the final image). This will cover staple holes, binder holes, but also (tiny) folded corners and very minute torn edges which appear as black spots. Depending on how you're scanning, the last two may be less of a problem.
Vary the sizes of these cuts for each client project, as required. For example, even a very thin edge slice, say 1-2mm, will eliminate most ragged edges without increasing false positive rate.
Convert to black and white, 1 bit per pixel. I suspect you are already doing this for some client projects anyway, so doing this efficiently and effectively, which can be subtle, should be no extra work. (Even if you don't store the 1bpp image as the deliverable result, the conversion will be helpful in empty page detection.) Eliminate noise by dropping any black pixels with none or only one black neighbor (using all surrounding 8 as neighbors).
After cutting extremities (#1) and this simplistic noise reduction, blank pages will have a very low number of black pixels; most blanks will have no black pixels at all – exempting exceptionally poor page quality, inked stamps (when scanning back-sides, mentioned more below), or other circumstances across the whole project, and so forth.
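The "none or only one black neighbour" rule above can be done in a single pass with a 3x3 convolution rather than per-pixel loops; a possible numpy/scipy sketch, assuming 1 = black and 0 = white:
import numpy as np
from scipy.ndimage import convolve

def denoise_1bpp(black):
    # black: 2-D array of 0/1 where 1 means a black pixel.
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]], dtype=np.uint8)
    neighbours = convolve(black.astype(np.uint8), kernel, mode='constant', cval=0)
    keep = (black == 1) & (neighbours >= 2)   # keep pixels with 2+ black neighbours
    return keep.astype(black.dtype)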
Depending on client project, you may set hotspots to be watched – the converse of cutting off the sides. For example, watching a 1" strip where a single line at the top of the page would appear may reduce false positives. A low contrast scan or faded hardcopy (perhaps even pencil, which can be common on back-sides) with only one line of text will be distinguished from a blank page this way.
What sections are worth watching depends on each project, but do not try to divide the page up into tiles and then subdivide those tiles into areas of interest. Instead, parallelize this on the page level; e.g. 1 worker per core, each worker handles a full page at a time.
Depending on how you're keying individual files, you may find it helpful to drop blanks (before marking start-of-file pages, which is still often a manual process even at high volume) then watch for blank pages at unexpected points after files have been keyed (e.g. expected would be the last page of the file, without being two blanks in a row, etc.).
For example, if a particular project is only scanning one side of each page, then detecting two blank pages in a row is a good indication that a couple pages in a file were flipped upside-down (clients often hand over hardcopy files like this). Either the sorters (who remove things like staples and paperclips) or the first machine operators should have caught this mistake, but, regardless, it will now need a manual check to verify.
On the other hand, there were projects that had very clean files so sorters could insert (usually colored) blank pages marking file boundaries. In this case, the second set of people still did the keying by file number, but only had to examine the first page of each file. This wasn't rare, but not common either.
Before I start rambling a bit, I hope my main point comes across: you have to decide how to mitigate rates of false positives (= data loss) and false negatives (= annoying blanks and otherwise harmless, but a maximum allowed rate may still be specified in the project contract). That varies drastically by project and the type of files/documents you're handling, but it guides you in how to do the detection. You will get much better results from a tailored approach than trying one-size-fits-all, even if the tailored approaches are 80-98% similar.
If you're delivering 1bpp images to the client, for example, you might not even want/need to eliminate blanks as filesize (and ultimately size of the delivered dataset) won't be an issue. This can be an acceptable trade-off when eliminating most blanks is harder while maintaining a low false positive rate; such as for files with inked stamps ("received on", "accepted", "due date", etc.; they bleed through to the back) or other problems, for example.
My fall class does a bunch of image-processing projects.
Here's what I would try:
Project from color to grayscale
Pour all the pixels into a simple histogram with say 100 buckets between 0 and 1
Find a local minimum in the histogram such that the absolute value of (above - below) is as small as possible, where above is the number of brighter pixels and below is the number of darker pixels
Force the above pixels to white and the below pixels to black
If you like, as an extra step you could remove black edges
If there are hardly any black pixels, the page is blank
The first two steps should be combined, and they are the only time-consuming steps; on a 600 dpi image you may have to touch many millions of pixels. The rest will be lightning fast. I'd be very surprised if you can't classify multiple images per second -- especially if you know there will be no black edges.
The only part that requires training or experiment is the last step. It's also possible that you will need to fiddle around with the number of buckets in the histogram; if there are too many buckets, you may have a bad local minimum.
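A numpy sketch of steps 1-4, interpreting "local minimum balancing above and below" literally (untested, and the fallback threshold is arbitrary):
import numpy as np

def binarize_by_balanced_valley(gray, buckets=100):
    # gray: 2-D uint8 array; work in [0, 1] as in step 2.
    g = gray.astype(np.float64) / 255.0
    hist, edges = np.histogram(g, bins=buckets, range=(0.0, 1.0))
    below = np.cumsum(hist)            # pixels at or below each bucket
    total = below[-1]
    best_cut, best_imbalance = None, None
    for i in range(1, buckets - 1):
        if hist[i] <= hist[i - 1] and hist[i] <= hist[i + 1]:   # local minimum
            imbalance = abs((total - below[i]) - below[i])      # above vs below
            if best_imbalance is None or imbalance < best_imbalance:
                best_cut, best_imbalance = edges[i + 1], imbalance
    if best_cut is None:
        best_cut = 0.5                 # no interior valley found; arbitrary fallback
    black = g < best_cut               # step 4: below -> black, above -> white
    return black                       # final step: blank if black.mean() is tiny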
Good luck, and report back to us how you make out!
Check out this line detection algorithm: http://homepages.inf.ed.ac.uk/rbf/HIPR2/linedet.htm. In addition to a detailed explanation of how the algo works there's a demo where you can use your own image and see the results. I tried two images: 1) a B&W scan of a receipt, 2) the B&W, "blank" back side of that same receipt. All of the edge detection algorithms I tried found edges on the "blank" page. But, this line detection algorithm was the only algorithm that correctly found lines on the front page and yet didn't find anything on the "blank" back page.
It looks as if you're trying to convert all paperwork for a company into digital documents. Some of this paper can be really old.
Say your text is black, and any other color is the background. If you take two weighted averages, one consisting of what you think is the text and one consisting of the background, you can compare the two and see if they're distant enough to warrant further evaluation. This removes the effect of any uneven aging of the paper.
Staple holes and punched holes in paper are pretty standard in size, but they'd show up as gray or not at all if you're scanning on a white background. If not, then you can guess where these are and remove them.
Now, we look at areas of high interest, areas where the black pixels are the most dense. Select a portion of that and OCR it. Place the starting top-left closest to an area where text begins. On a typical document, a solid blank linear area going left-to-right and another going top-to-bottom denotes the top and left sides of a paragraph. You can be sure that you got a line of text because below a line of text is another blank left-to-right area. So you don't need to worry about selecting a portion that will chop text in half.
You could take the mean gray level (integer) of each few rows of the scanned image (depending on the resolution and how many lines are required to capture one line of text), then consider the spread of row means. If there is no text on the page, the spread of means should be small (i.e. background ranges from 250-255), and if there is text on the whole page or on part of the page, the spread would be much larger (i.e. 15 for text to 250 for background).
Seems to me like the solution should be computationally simple due to the large number of pages to check. Approaches requiring further processing (edge detection, filtering, etc) seem like overkill, and will take much longer to run.
There is no need to process pixel by pixel; using matrices will make this more efficient. For example, with Numpy you can calculate means, sums, etc. for entire rows, columns or matrices at once much more efficiently. There is also no need to process EVERY pixel; a good sample of rows should be able to accomplish the task with similar accuracy. 8-bit accuracy should be fine, and you could even resample to larger pixels before running this processing algorithm.
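For illustration, the row-means idea might look like this with numpy (the band height and spread cut-off depend on resolution and would need tuning):
import numpy as np

def page_is_empty(gray, rows_per_band=8, spread_cutoff=20):
    # gray: 2-D uint8 array of the scanned page. Average small bands of rows,
    # then look at how far apart the darkest and brightest band means are.
    h = gray.shape[0] - gray.shape[0] % rows_per_band
    band_means = gray[:h].reshape(-1, rows_per_band, gray.shape[1]).mean(axis=(1, 2))
    return band_means.max() - band_means.min() < spread_cutoff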
You can do a noisy trim, i.e. blur the image and do an auto-trim (without actually modifying the image). If the width or height of the trim result is below a threshold (e.g. 80 to 100 for a 600 dpi image) then the page is empty.
A proof of concept using the ImageMagick command line front-end:
$ convert scan.png -shave 300x0 -virtual-pixel White -blur 0x15 -fuzz 15% \
-trim info:
The above command assumes a 600 dpi DIN A4 black-and-white (1-bit) image. It also ignores a margin of 300 pixels so that artifacts like perforation holes don't yield false negatives.
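To drive that check from a batch pipeline, a small Python wrapper could look like this; it assumes ImageMagick's convert is on the PATH and adds a -format expression so that only the trimmed width and height are printed:
import subprocess

def trimmed_size(path):
    # Blur + fuzz-trim via ImageMagick, returning (width, height) of the
    # surviving content without modifying the file on disk.
    out = subprocess.check_output([
        "convert", path,
        "-shave", "300x0",
        "-virtual-pixel", "White",
        "-blur", "0x15",
        "-fuzz", "15%",
        "-trim",
        "-format", "%w %h",
        "info:",
    ])
    w, h = map(int, out.split())
    return w, h

def is_empty(path, threshold=100):    # 80-100 px for a 600 dpi scan, as above
    w, h = trimmed_size(path)
    return w < threshold or h < threshold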
