Find updated rectangles in image - algorithm

I need to find which rectangular regions were updated between two images. E.g., I have these images:
first http://storage.thelogin.ru/stackoverflow/find-updated-rectangles-in-image/1.png second http://storage.thelogin.ru/stackoverflow/find-updated-rectangles-in-image/2.png
ImageMagick's compare tells me these pixels were updated:
compare http://storage.thelogin.ru/stackoverflow/find-updated-rectangles-in-image/3.png
So I need to repaint these regions (I have outlined the first of them):
compare http://storage.thelogin.ru/stackoverflow/find-updated-rectangles-in-image/4.png
Repainting is done over a slow connection (57600 baud), so the number one priority is data size (one byte for the magic word, one byte for the checksum, six bytes for region coordinates, two bytes for each pixel). Which algorithm can I use to find these regions? I think something like this is used in VNC and similar software.

As far as actually finding the regions which have changed, as ImageMagick has done for you, you can compute a pixel-by-pixel difference (e.g. XOR). Regions with a difference of 0 have not changed.
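For illustration, here is a minimal sketch of that diff step, assuming both frames are the same size and that numpy, scipy and Pillow are available; the file names are placeholders. Dilating the change mask a little merges nearby changes, so the per-rectangle header overhead is paid less often.

import numpy as np
from PIL import Image
from scipy import ndimage

a = np.asarray(Image.open("1.png").convert("RGB"), dtype=np.int16)
b = np.asarray(Image.open("2.png").convert("RGB"), dtype=np.int16)

changed = np.any(a != b, axis=2)            # True wherever any channel differs

# Merge nearby changes into larger blobs so fewer rectangles are sent.
changed = ndimage.binary_dilation(changed, iterations=4)

labels, count = ndimage.label(changed)      # connected components of the mask
for rows, cols in ndimage.find_objects(labels):
    print("region:", cols.start, rows.start,
          cols.stop - cols.start, rows.stop - rows.start)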
It is not clear from your question whether the painting itself is slow or just the transmission of the repainting data. It is also not clear what kind of encoding/decoding can be done on the other end of the transmission. Do you have to send your data as you specified or can you encode it in another way if you wish?
Your data packets have an 8-byte overhead per rectangle ("one byte for magic word, one byte for checksum, six bytes for region coordinates, two bytes for each pixel"). I take it from the two bytes per pixel that the color depth is 16-bit? Due to that overhead, some of the smallest rectangles you outlined actually cost you more than combining them with other rectangles and resending some data for non-updated regions.
The actual problem of finding rectangles where each has an overhead is analogous to the "Strawberry Fields" hiring problem put forth by ITA Software. The original link is dead, but here is someone's solution with problem description.
At 57600 baud you get to send at most 7200 bytes per second (less in practice once start/stop framing bits are counted), which would be 3600 pixels at two bytes per pixel. As a square, this is a measly 60x60. You've certainly outlined more than that in your example, and this does not count the overhead.
The refresh rate of the monitor on the receiving end also needs to be considered. If the monitor is refreshing 60 times per second and you are only able to send one 60x60 square per second, how will this look?
Things to consider:
reduce color depth
run length encode pixel differences per scan line (see the sketch after this list)
attempt more ambitious compression per region, but watch the overhead
send non-graphic data and let the receiver compute the graphics (e.g. in this example, send the text that has changed, the updated time, etc. and let the receiver draw the progress bar, etc.)
abandon this insanity
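As a rough illustration of the run-length idea, here is a sketch of one hypothetical per-scanline encoding: a one-byte run count followed by a 16-bit pixel value, with runs capped at 255. The exact framing is an assumption, not a recommendation of a specific format.

def rle_scanline(pixels):                   # pixels: one scan line of 16-bit values
    out = bytearray()
    i = 0
    while i < len(pixels):
        run = 1
        while i + run < len(pixels) and pixels[i + run] == pixels[i] and run < 255:
            run += 1
        out.append(run)                     # one count byte ...
        out += pixels[i].to_bytes(2, "big") # ... then the 16-bit pixel value
        i += run
    return bytes(out)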

Related

Fast subrects from layered image

I have this 2d raster upon which are layered from 1 to say 20 other 2d rasters (with random size and offset). I'm searching for fast way to access a sub-rectangle view (with random size and offset). The view should return all the layered pixels for each X and Y coordinate.
I guess this is kind of how say, GIMP or other 2d paint apps draw layers upon each other, with the exception that I want to have all the pixels upon each other, and not just projection where the top pixel hides the other ones below it.
I have run into this problem before and I still face it now. I have already spent a lot of time searching the internet and this site for similar issues, but can't find any. I will describe two possible solutions, neither of which I'm satisfied with:
Use what is essentially a 3D array of pre-allocated size. This is easy to manage, but the wasted storage and memory overhead are really big. For a 4k raster with, say, 16 slots of 4 bytes each, that is about 1 GiB of memory, and in my application most of that space will be wasted, not used.
The solution I built before: have two 2D arrays, one with indices and the other with the actual values. Each "pixel" of the first one says in which range of the second array you can find the actual pixels contributed by all layers. This is well compressed in size, but every request bounces between two memory regions, and it is a bit of a hassle to set up, not to mention to update (a nice-to-have feature, but not mandatory).
So... any know-how on this kind of problem? Thank you in advance!
Forgot to add that I'm targeting a self-contained, preferably single-threaded, CPU solution. The layers will most likely be greyscale with alpha (that is, some pixel data will simply not exist). Lookup is the priority; updates like adding or removing a layer can be slower.
Added by Mark (see comment):
In that image, a lookup at the top-left corner of the red rectangle should report red, green, blue and black. At the bottom-right corner, it should report red and black only.
I would store the offsets and size in a data-structure separate from the pixel-data. This way you do not jump around in the memory while you calculate the relative coordinates for each layer (or even if you can ignore some layers).
If you want to access single pixels or small areas rather than iterate over big areas, a quadtree might be a good way to store your data: it gives more local memory access when you touch pixels or areas that are near each other (in the x or y direction).
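A minimal sketch of the first suggestion, assuming numpy: the per-layer offsets and sizes live in a small structure apart from the pixel data, so a sub-rectangle lookup first intersects rectangles (cheap) and only then touches pixel memory. The greyscale-plus-alpha layout is just one possible choice.

from dataclasses import dataclass
import numpy as np

@dataclass
class Layer:
    x: int              # offset of this layer inside the base raster
    y: int
    data: np.ndarray    # e.g. greyscale + alpha, shape (h, w, 2)

def query(layers, qx, qy, qw, qh):
    """Yield (layer, clipped view) for every layer overlapping the query rect."""
    for layer in layers:
        h, w = layer.data.shape[:2]
        x0, y0 = max(qx, layer.x), max(qy, layer.y)
        x1, y1 = min(qx + qw, layer.x + w), min(qy + qh, layer.y + h)
        if x0 < x1 and y0 < y1:
            view = layer.data[y0 - layer.y:y1 - layer.y, x0 - layer.x:x1 - layer.x]
            yield layer, view               # numpy view, no copy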

How to estimate GIF file size?

We're building an online video editing service. One of the features allows users to export a short segment from their video as an animated GIF. Imgur has a file size limit of 2 MB per uploaded animated GIF.
GIF file size depends on the number of frames, the color depth, and the image content itself: a solid flat color results in a very lightweight GIF, while a random-colors TV-noise animation would be quite heavy.
First I export each video frame as a PNG of the final GIF frame size (fixed, 384x216).
Then, to maximize gif quality I undertake several gif render attempts with slightly different parameters - varying number of frames and number of colors in the gif palette. The render that has the best quality while staying under the file size limit gets uploaded to Imgur.
Each render takes time and CPU resources — this I am looking to optimize.
Question: what could be a smart way to estimate the best render settings from the actual images, so that we get as close as possible to the file size limit, or at least keep the number of render attempts down to 2–3?
The GIF image format uses LZW compression, infamous because the owner of the algorithm patent, Unisys, aggressively pursued royalty payments just as the image format got popular. It turned out well in the end; we have PNG to thank for that.
The amount by which LZW can compress the image is extremely non-deterministic and greatly depends on the image content. At best you can provide the user with a heuristic that estimates the final image file size, displaying, say, a success prediction with a colored bar. You can color it pretty quickly by converting just the first frame; that won't take long on a 384x216 image, it runs in human time, a fraction of a second.
Then extrapolate the effective compression rate of that first image to the subsequent frames, which ought to encode only small differences from the first frame and so ought to have comparable compression rates.
You can't truly know whether the result exceeds the site's size limit until you've encoded the entire sequence. So be sure to emphasize in your UI design that your prediction is just an estimate, so your user isn't too disappointed. And of course provide them with the tools to get the size lowered, something like a nearest-neighbor interpolation that makes the pixels in the image bigger. Focusing on making the later frames smaller can pay off handsomely as well; GIF encoders don't normally do this well by themselves. YMMV.
There's no simple answer to this. Single-frame GIF size mainly depends on image entropy after quantization, and you could try using stddev as an estimator using e.g. ImageMagick:
identify -format "%[fx:standard_deviation]" imagename.png
You can very probably get better results by running a smoothing kernel on the image in order to eliminate some high-frequency noise that's unlikely to be informational, and very likely to mess up compression performance. This goes much better with JPEG than with GIF, anyway.
Then, in general, you want to run a great many samples in order to come up with something of the kind (let's say you have a single compression parameter Q)
STDDEV SIZE W/Q=1 SIZE W/Q=2 SIZE W/Q=3 ...
value1 v1,1 v1,2 v1,3
After running several dozen tests (which you need do only once, not at runtime), you will get both an estimate of, say, size as a function of stddev and Q, and a measurement of its error. You'll then see that an image with stddev 0.45 that compresses to 108 Kb when Q=1 will compress to 91 Kb plus or minus 5 when Q=2, and 88 Kb plus or minus 3 when Q=3, and so on.
At that point you get an unknown image, get its stddev and compression #Q=1, and you can interpolate the probable size when Q equals, say, 4, without actually running the encoding.
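A sketch of that lookup-and-interpolate step, with a deliberately tiny table. All numbers here are illustrative placeholders rather than real measurements; in practice the table would be filled from your offline test runs.

import numpy as np

# one row per stddev bucket: sizes in KB measured at Q = 1, 2, 3, 4 (made-up values)
size_table = {
    0.40: [120, 101, 95, 90],
    0.45: [108, 91, 88, 84],
    0.50: [100, 86, 80, 77],
}

def estimate_size(stddev, q):
    bucket = min(size_table, key=lambda s: abs(s - stddev))   # nearest stddev bucket
    qs = np.arange(1, len(size_table[bucket]) + 1)
    return float(np.interp(q, qs, size_table[bucket]))        # linear in Q

print(estimate_size(0.46, 2.5))   # ~89.5 KB with these made-up numbers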
While your service is active, you can store statistical data (i.e., after you really do the encoding, you store the actual results) to further improve estimation; after all you'd only store some numbers, not any potentially sensitive or personal information that might be in the video. And acquiring and storing those numbers would come nearly for free.
Backgrounds
It might be worthwhile to recognize images with a fixed background; in that case you can run some adaptations to make all the frames identical in those areas, and have the GIF animation algorithm not store that information. When and if you get such a video (e.g. a talking head), this could lead to huge savings (but it would throw the parameter estimation completely off, unless you could also estimate the actual extent of the background area. In that case, let this area be B and the frame area be A: the compressed "image" size for five frames would be A+(A-B)*(5-1) instead of A*5, and you could apply this correction factor to the estimate).
Compression optimization
Then there are optimization techniques that slightly modify the image and adapt it for a better compression, but we'd stray from the topic at hand. I had several algorithms that worked very well with paletted PNG, which is similar to GIF in many regards, but I'd need to check out whether and which of them may be freely used.
Some thoughts: the LZW algorithm proceeds along scan lines. So whenever a sequence of N pixels is "less than X%" different (perceptually or arithmetically) from an already encountered sequence, rewrite the sequence:
018298765676523456789876543456787654
987678656755234292837683929836567273
here the 656765234 sequence in the first row is almost matched by the 656755234 sequence in the second row. By changing the mismatched 5 to 6, the LZW algorithm is likely to pick up the whole sequence and store it with one symbol instead of three (6567,5,5234) or more.
Also, LZW works with bits, not bytes. This means, very roughly speaking, that the more the 0's and 1's are balanced, the worse the compression will be. The more unpredictable their sequence, the worse the results.
So if we can find a way of making the distribution more asymmetrical, we win.
And we can do it, and we can do it losslessly (the same works with PNG). We choose the most common colour in the image, once we have quantized it. Let that color be color index 0. That's 00000000, eight fat zeroes. Now we choose the most common colour that follows that one, or the second most common colour; and we give it index 1, that is, 00000001. Another seven zeroes and a single one. The next colours will be indexed 2, 4, 8, 16, 32, 64 and 128; each of these has only a single bit 1, all others are zeroes.
Since colors will be very likely distributed following a power law, it's reasonable to assume that around 20% of the pixels will be painted with the first nine most common colours; and that 20% of the data stream can be made to be at least 87.5% zeroes. Most of them will be consecutive zeroes, which is something that LZW will appreciate no end.
Best of all, this intervention is completely lossless; the reindexed pixels will still be the same colour, it's only the palette that will be shifted accordingly. I developed such a codec for PNG some years ago, and in my use case scenario (PNG street maps) it yielded very good results, ~20% gain in compression. With more varied palettes and with LZW algorithm the results will be probably not so good, but the processing is fast and not too difficult to implement.
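Here is a rough sketch of that re-indexing step for a paletted Pillow image. The function is hypothetical, ignores transparency, and assumes a "P" mode image, but it shows the frequency-to-low-popcount mapping described above.

from collections import Counter
from PIL import Image

def reindex_palette(img):                       # img: Pillow image in "P" mode
    pixels = list(img.getdata())
    old_palette = img.getpalette() or []
    old_palette += [0] * (768 - len(old_palette))
    by_freq = [idx for idx, _ in Counter(pixels).most_common()]

    # Target indices ordered by population count: 0, then 1, 2, 4, 8, ..., then 3, 5, ...
    targets = sorted(range(256), key=lambda i: (bin(i).count("1"), i))
    remap = dict(zip(by_freq, targets))

    out = Image.new("P", img.size)
    out.putdata([remap[p] for p in pixels])
    new_palette = [0] * 768
    for old, new in remap.items():
        new_palette[new * 3:new * 3 + 3] = old_palette[old * 3:old * 3 + 3]
    out.putpalette(new_palette)
    return out                                  # same colours, zero-heavy index stream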

Expanding 8-bit color to 24-bit color

Setup
I have a couple hundred Sparkfun LED pixels (similar to https://www.sparkfun.com/products/11020) connected to an Arduino Uno and want to control the pixels from a PC using the built-in Serial-over-USB connection of the Arduino.
The pixels are individually addressable; each has 24 bits for the color (RGB). Since I want to be able to change the color of each pixel very quickly, the transmission of the data from the PC to the Arduino has to be very efficient (the further transmission of data from the Arduino to the pixels is already very fast).
Problem
I've tried simply sending the desired RGB values directly as-is to the Arduino, but this leads to a visible delay when, for example, I want to turn on all LEDs at the same time. My straightforward idea to minimize the amount of data is to reduce the available colors from 24-bit to 8-bit, which is more than enough for my application.
If I do this, I have to expand the 8-bit values from the PC to 24-bit values on the Arduino to set the actual color on the pixels. The obvious solution here would be a palette that holds all available 8-bit values and the corresponding 24-bit colors. I would like to have a solution without a palette though, mostly for memory space reasons.
Question
What is an efficient way to expand an 8-bit color to a 24-bit one, preferably one that preserves the color information accurately? Are there standard algorithms for this task?
Possible solution
I was considering a format with 2 bits for each R and B and 3 bits for G. These values would be packed into a single byte that would be transmitted to the Arduino and then be unpacked using bit-shifting and interpolated using the map() function (http://arduino.cc/en/Reference/Map).
Any thoughts on that solution? What would be a better way to do this?
R2B2G3 would give you very few colors (and there's actually one more bit left over). I don't know if that would be enough for your application. You can use a dithering technique to make the 8-bit images look a little better.
Alternatively, if you have any preferred set of colors, you can store known palette on your device and never send it over the wire. You can also store multiple palettes for different situations and specify which one to use with small integer index.
On top of that it's possible to implement some simple compression algorithm like RLE or LZW and decompress after receiving.
And there are some very fast compression libraries with small footprint you can use: Snappy, miniLZO.
Regarding your question “What would be a better way to do this?”, one of the first things to do (if not yet done) is increase the serial data rate. An Arduino Forum suggests using 115200 bps as a standard rate, and trying 230400 bps. At those rates you would need to write the receiving software so it quickly transfers data from the relatively small receive buffer into a larger buffer, instead of trying to work on the data out of the small receive buffer.
A second possibility is to put activation times into your data packets. Suppose F1, F2, F3... are a series of frames you will display on the LED array. Send those frames from the PC ahead of time, or during idle or wait times, and let the Arduino buffer them until they are scheduled to appear. When the activation time arrives for a given frame, have the Arduino turn it on. If you know in advance the frames but not the activation times, send and buffer the frames and send just activation codes at appropriate times.
Third, you can have multiple palettes and dynamic palettes that change on the fly and can use pixel addresses or pixel lists as well as pixel maps. That is, you might use different protocols at different times. Protocol 3 might download a whole palette, 4 might change an element of a palette, 5 might send a 24-bit value v, a time t, a count n, and a list of n pixels to be set to v at time t, 6 might send a bit map of pixel settings, and so forth. Bit maps can be simple 1-bit-per-pixel maps indicating on or off, or can be k-bits-per-pixel maps, where a k-bit entry could specify a palette number or a frame number for a pixel. This is all a bit vague because there are so many possibilities; but in short, define protocols that work well with whatever you are displaying.
Fourth, given the ATmega328P's small (2KB) RAM but larger (32KB) flash memory, consider hard-coding several palettes, frames, and macros into the program. By macros, I mean routines that generate graphic elements like arcs, lines, open or filled rectangles. Any display element that is known in advance is a candidate for flash instead of RAM storage.
Your (2, 3, 2) bit idea is used "in the wild." It should be extremely simple to try out. The quality will be pretty low, but try it out and see if it meets your needs.
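For reference, the packing and unpacking logic looks roughly like this (shown in Python for brevity; on the Arduino the same shifts plus the map()-style rescaling are a few lines of C). The bit layout used here, R in the top two bits, G in the next three, B below that with one spare bit, is just one possible choice.

def pack_rgb232(r, g, b):                 # r, g, b in 0..255
    return ((r >> 6) << 6) | ((g >> 5) << 3) | ((b >> 6) << 1)

def unpack_rgb232(byte):
    r2 = (byte >> 6) & 0x03
    g3 = (byte >> 3) & 0x07
    b2 = (byte >> 1) & 0x03
    # scale back to 0..255, like Arduino's map(x, 0, 3, 0, 255) / map(x, 0, 7, 0, 255)
    return r2 * 255 // 3, g3 * 255 // 7, b2 * 255 // 3

print(unpack_rgb232(pack_rgb232(200, 120, 40)))   # coarse approximation of the input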
It seems unlikely that any other solution could save much memory compared to a 256-color lookup table, if the lookup table stays constant over time. I think anything successful would have to exploit a pattern in the kind of images you are sending to the pixels.
Any way you look at it, what you're really going for is image compression. So, I would recommend looking at the likes of PNG and JPG compression, to see if they're fast enough for your application.
If not, then you might consider rolling your own. There's only so far you can go with per-pixel compression; size-wise, your (2,3,2) idea is about as good as you can expect to get. You could try a quadtree-type format instead: take the average of a 4-pixel block, transmit a compressed (lossy) representation of the differences, then apply the same operation to the half-resolution image of averages...
As others point out, dithering will make your images look better at (2,3,2). Perhaps the easiest way to dither for your application is to choose a different (random or quasi-random) fixed quantization threshold offset for each color of each pixel. Both the PC and the Arduino would have a copy of this threshold table; the distribution of thresholds would prevent posterization, and the Arduino-side table would help maintain accuracy.
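A sketch of that shared-table idea on the PC side, assuming numpy; the array shape and seed are illustrative. The Arduino would hold a copy of the same table if it needs it to reconstruct values more accurately.

import numpy as np

HEIGHT, WIDTH = 10, 20                          # hypothetical LED layout
rng = np.random.default_rng(1234)               # fixed seed, so the table is reproducible
thresholds = rng.random((HEIGHT, WIDTH, 3))     # one offset per pixel and channel

def quantize(image, bits=(2, 3, 2)):
    """image: float array of shape (HEIGHT, WIDTH, 3) with values in [0, 1]."""
    levels = np.array([(1 << b) - 1 for b in bits])          # 3, 7, 3
    q = np.floor(image * levels + thresholds)                # dithered rounding
    return np.clip(q, 0, levels).astype(np.uint8)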

Detecting empty pages in scanned documents

So we need to detect whether an image, created by a scanner, represents an empty page. I'm way out of my depth when it comes to image processing, so I have to run this by the community.
Here's what I have come up with so far:
Empty pages can be glaringly white, gray recycled paper, or yellowed old paper. The current idea is to create a histogram for a page, look for a steep increase in the curve, and get the percentage of pixels that are darker than that point. If that exceeds a threshold, the page is likely not empty.
Since this would likely classify a page containing a single line of text at the top as empty, we would tile the page and gather statistics about each tile.
We would need to detect scanned staplers and holes from binding (likely only in certain tiles), but this can be put off to some later stage. However, if you have an idea what to look out for besides these two, please mention it in a comment.
This needs to be fast. It's part of a document processing workflow that handles (tens of) thousands of pages per day. If processing a page takes ten seconds longer, then our customers will have to tell their customers that they'll have to wait several days longer for their results. (If this results in more false positives, some customers would rather have someone check a few dozen found "empty" pages than have their customer wait one more day.)
So here's my questions:
Is it a good idea to take this route or is there something better?
If we do it this way, how would I do this? What's a good (cheap) algorithm for finding a threshold for a page? Could we gain significant speed by assuming a similar threshold for a batch of documents? To which precision could brightness values be rounded, before getting logged? What quirks could we expect?
If you know that a scanned page is going to fill the image entirely, then calculating the standard deviation might be a good way of doing this.
I would suggest blurring the page slightly to reduce noise. Then calculate the SD for the page: in theory, a page that is more or less all one colour will have a low SD, and one with lots of text will have a higher SD. Then it's a case of 'training' the system to work out when a page is plain and when it is text. You might find that certain pages are hard for it to tell.
You could train it by having it process a large number of pages: it goes through them all, and you say whether each one is plain or not.
EDIT
OK, a white page with black text, if we have just the page and no surrounding stuff, will have a mean colour of grey, probably a fairly light grey. Getting the average is a for loop through all the pixels, adding their values and then dividing by the number of pixels. I'm not good with this big-O stuff, but suffice to say it will not take that long, unless you have HUGE images.
SD is a second for loop: this time we add up how different each pixel is from the mean and then divide by the number of pixels (strictly, the standard deviation squares the differences and takes a square root at the end, but the mean absolute difference works just as well as a spread measure here). This will take a bit longer than the mean, as we have to do something like
diff = thispixel - mean;   // deviation of this pixel from the page mean
if (diff < 0) {
    diff = -diff;          // absolute value
}
runningTotal += diff;      // summed over all pixels, divided by the pixel count at the end
For a plain coloured page, each pixel will be close to the mean value, thus our SD will be low. If the SD is below a certain value, we can assume that this means the page is all one colour.
This might have problems if there is only a very minimal amount of text, as it will not have a large influence on the SD, so maybe, as you suggested in the question, break the page into sections. I suggest horizontal strips, as text tends to run this way. If we process these strips one at a time, then once one strip suggests it has text we can stop, as we don't care whether the rest is blank or not.
Blurring the page will help reduce noise, as the odd pixel of noise will be reduced in its impact, thus give you a 'tighter' SD. You could also use it to reduce the resolution of your image.
Say your source image is 300 wide by 900 high; you could sample pixels in blocks of nine (3x3) and end up with an image that is 100 wide by 300 high. This can actually be used to reduce the amount of calculation you need to do, in this case to a ninth!
The main problem is going to be in working out how high an SD can be with just a plain page. Maybe have it find the SD of a load of blank pages.
By the sounds of it, you are probably going to want a middle ground that lets it be unsure and ask for human intervention, possibly letting the human's verdict train the system to get better?
Perform some sort of simple edge detection. If the number of pixels constituting edges is below some threshold, then there's going to be a high probability the page is empty. This could be improved by classifying certain edges that correspond with high certainty (by shape and location) to punched holes and staples as trivial and discounting them from the metric.
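A quick sketch of that edge-count metric using a Sobel filter, assuming scipy; both the relative magnitude cutoff and the final fraction threshold would need tuning per project.

import numpy as np
from scipy import ndimage

def edge_fraction(gray):
    """gray: 2-D float array; returns the fraction of pixels lying on an edge."""
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    magnitude = np.hypot(gx, gy)
    if magnitude.max() == 0:
        return 0.0                                  # perfectly uniform page
    return np.count_nonzero(magnitude > 0.2 * magnitude.max()) / gray.size

# page treated as empty if edge_fraction(gray) is below a tuned threshold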
When I worked for a document processor (~8 years ago), we handled client projects varying from very "clean" only-US-letter-sized pages to cover-/cardstock of irregular shapes mixed with normal pages. Operators fed pre-sorted files into scanning machines and only had to watch for folded corners and similar mechanical problems. Their output was multiple streams of hundreds of images corresponding to a range of files. A single scanner operator could easily scan 15k pieces of paper in a shift (that's only 0.60 pages/sec, while a scanner at speed could handle 3 pages/sec and still scan both sides). Later operators processed those looking for key pages to mark file start and end. (Image recognition can be used here, sometimes, but people also provide a quality check on the first operators.) We had many variables that could be set per client project.
I'm basing the rough outline below on that experience and how it appears that your goals and workflow are similar.
(Terminology: By client I mean our client, e.g. a specific bank. A project or client project is a set of documents from that client that contains many files, e.g. all mortgages handled by a specific branch in a given year. A file is a logical arrangement that would normally be a physical file folder for one of the client's customers, e.g. all mortgage papers for one address.)
Cut off the top, bottom, sides, and corners. Throw these out of your calculations (even though you'll probably store them in the final image). This will cover staple holes, binder holes, but also (tiny) folded corners and very minute torn edges which appear as black spots. Depending on how you're scanning, the last two may be less of a problem.
Vary the sizes of these cuts for each client project, as required. For example, even a very thin edge slice, say 1-2mm, will eliminate most ragged edges without increasing false positive rate.
Convert to black and white, 1 bit per pixel. I suspect you are already doing this for some client projects anyway, so doing this efficiently and effectively, which can be subtle, should be no extra work. (Even if you don't store the 1bpp image as the deliverable result, the conversion will be helpful in empty page detection.) Eliminate noise by dropping any black pixels with none or only one black neighbor (using all surrounding 8 as neighbors).
After cutting extremities (#1) and this simplistic noise reduction, blank pages will have a very low number of black pixels; most blanks will have no black pixels at all – exempting exceptionally poor page quality, inked stamps (when scanning back-sides, mentioned more below), or other circumstances across the whole project, and so forth.
Depending on client project, you may set hotspots to be watched – the converse of cutting off the sides. For example, watching a 1" strip where a single line at the top of the page would appear may reduce false positives. A low contrast scan or faded hardcopy (perhaps even pencil, which can be common on back-sides) with only one line of text will be distinguished from a blank page this way.
What sections are worth watching depends on each project, but do not try to divide the page up into tiles and then subdivide those tiles into areas of interest. Instead, parallelize this on the page level; e.g. 1 worker per core, each worker handles a full page at a time.
Depending on how you're keying individual files, you may find it helpful to drop blanks (before marking start-of-file pages, which is still often a manual process even at high volume) then watch for blank pages at unexpected points after files have been keyed (e.g. expected would be the last page of the file, without being two blanks in a row, etc.).
For example, if a particular project is only scanning one side of each page, then detecting two blank pages in a row is a good indication that a couple pages in a file were flipped upside-down (clients often hand over hardcopy files like this). Either the sorters (who remove things like staples and paperclips) or the first machine operators should have caught this mistake, but, regardless, it will now need a manual check to verify.
On the other hand, there were projects that had very clean files so sorters could insert (usually colored) blank pages marking file boundaries. In this case, the second set of people still did the keying by file number, but only had to examine the first page of each file. This wasn't rare, but not common either.
Before I start rambling a bit, I hope my main point comes across: you have to decide how to mitigate rates of false positives (= data loss) and false negatives (= annoying blanks and otherwise harmless, but a maximum allowed rate may still be specified in the project contract). That varies drastically by project and the type of files/documents you're handling, but it guides you in how to do the detection. You will get much better results from a tailored approach than trying one-size-fits-all, even if the tailored approaches are 80-98% similar.
If you're delivering 1bpp images to the client, for example, you might not even want/need to eliminate blanks as filesize (and ultimately size of the delivered dataset) won't be an issue. This can be an acceptable trade-off when eliminating most blanks is harder while maintaining a low false positive rate; such as for files with inked stamps ("received on", "accepted", "due date", etc.; they bleed through to the back) or other problems, for example.
My fall class does a bunch of image-processing projects.
Here's what I would try:
Project from color to grayscale
Pour all the pixels into a simple histogram with say 100 buckets between 0 and 1
Find a local minimum in the histogram such that the absolute value of above - below is as small as possible, where above is the number of brighter pixels and below is the number of darker pixels
Force the above pixels to white and the below pixels to black
If you like, as an extra step you could remove black edges
If there are hardly any black pixels, the page is blank
The first two steps should be combined, and they are the only time-consuming steps; on a 600 dpi image you may have to touch many millions of pixels. The rest will be lightning fast. I'd be very surprised if you can't classify multiple images per second, especially if you know there will be no black edges.
The only part that requires training or experiment is the last step. It's also possible that you will need to fiddle around with the number of buckets in the histogram; if there are too many buckets, you may have a bad local minimum.
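A compact sketch of the recipe above, assuming numpy and a grayscale page already scaled to [0, 1]; the bucket count and the final blank fraction are the parts that need experiment.

import numpy as np

def is_blank(gray, buckets=100, blank_fraction=0.001):
    hist, edges = np.histogram(gray, bins=buckets, range=(0.0, 1.0))
    below = np.cumsum(hist)                     # darker pixels up to each bucket
    above = gray.size - below                   # brighter pixels past each bucket
    # local minima of the histogram, then the one where |above - below| is smallest
    minima = [i for i in range(1, buckets - 1)
              if hist[i] <= hist[i - 1] and hist[i] <= hist[i + 1]]
    if not minima:
        return True                             # featureless histogram: call it blank
    split = min(minima, key=lambda i: abs(above[i] - below[i]))
    threshold = edges[split + 1]
    black = np.count_nonzero(gray < threshold)  # the "below" pixels forced to black
    return black / gray.size < blank_fraction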
Good luck, and report back to us how you make out!
Check out this line detection algorithm: http://homepages.inf.ed.ac.uk/rbf/HIPR2/linedet.htm. In addition to a detailed explanation of how the algo works there's a demo where you can use your own image and see the results. I tried two images: 1) a B&W scan of a receipt, 2) the B&W, "blank" back side of that same receipt. All of the edge detection algorithms I tried found edges on the "blank" page. But, this line detection algorithm was the only algorithm that correctly found lines on the front page and yet didn't find anything on the "blank" back page.
It looks as if you're trying to convert all paperwork for a company into digital documents. Some of this paper can be really old.
Say your text is black and any other color is the background. If you take two weighted averages, one consisting of what you think is the text and one consisting of the background, you can compare the two and see if they're distant enough to warrant further evaluation. This removes the effect of any uneven aging of the paper.
Staple holes and punched holes in paper are pretty standard in size, but they'd show up as gray or not at all if you're scanning on a white background. If not, then you can guess where these are and remove them.
Now, we look at areas of high interest, areas where the black pixels are the most dense. Select a portion of that and OCR it. Place the starting top-left closest to an area where text begins. On a typical document, a solid blank linear area going left-to-right and another going top-to-bottom denotes the top and left sides of a paragraph. You can be sure that you got a line of text because below a line of text is another blank left-to-right area. So you don't need to worry about selecting a portion that will chop text in half.
You could take the mean gray level (integer) of each few rows of the scanned image (depending on the resolution and how many lines are required to capture one line of text), then consider the spread of row means. If there is no text on the page, the spread of means should be small (i.e. background ranges from 250-255), and if there is text on the whole page or on part of the page, the spread would be much larger (i.e. 15 for text to 250 for background).
Seems to me like the solution should be computationally simple due to the large number of pages to check. Approaches requiring further processing (edge detection, filtering, etc) seem like overkill, and will take much longer to run.
There is no need to process pixel by pixel; using matrices will make this more efficient. For example, with Numpy you can calculate means, sums, etc. for entire rows, columns, or matrices at once. There is also no need to process EVERY pixel; a good sample of rows should accomplish the task with similar accuracy. 8-bit accuracy should be fine, and you could even resample to larger pixels before running this processing.
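A sketch of the row-mean idea with Numpy; the band height and spread threshold are placeholders to be tuned on sample pages.

import numpy as np

def looks_blank(gray, rows_per_band=8, spread_threshold=20):
    """gray: 2-D uint8 array (0 = black, 255 = white)."""
    h = (gray.shape[0] // rows_per_band) * rows_per_band
    bands = gray[:h].reshape(-1, rows_per_band, gray.shape[1])
    band_means = bands.mean(axis=(1, 2))        # one mean per group of rows
    spread = band_means.max() - band_means.min()
    return spread < spread_threshold            # small spread: likely no text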
You can do a noisy trim, i.e. blur the image and do an auto-trim (without actually modifying the image). If the width or height of the trim result is below a threshold (e.g. 80 to 100 for a 600 dpi image) then the page is empty.
A proof of concept using the ImageMagick command line front-end:
$ convert scan.png -shave 300x0 -virtual-pixel White -blur 0x15 -fuzz 15% \
-trim info:
The above command assumes a 600 dpi DIN A4 black and white (1 Bit) image. It also ignores a margin of 300 pixels such that artifacts like perforation holes don't yield false negatives.

Good compression algorithm for low entropy image

I am currently trying to further compress a very simple image. The image uses two sets of colors as well as one character per "pixel". Each set of colors can be one of 16 options; because of this I have already combined both colors into one byte per pixel representing them both. I have already implemented MTF and BWT encoding to assist RLE. I am positive I can get some more compression out of it, but I am not sure what algorithm to use. I have tried Huffman, but because the image tends to be small already and RLE compresses most of it due to the lack of entropy, Huffman half the time increases the size by adding its decoding table to the file. Please note this will also be run on a slower system, so any really heavy algorithms may not work either.
First off, it sounds like you should compress the background and character color images separately. Second, you say that "the colors don't change too often from pixel to pixel". Are some colors "closer" to each other than others? I.e., when the color changes from color x, is it more likely to change to a small subset of the remaining colors? If so, you can map the colors to be adjacent to those they are likely to change to, and take differences before coding. Then runs of the same color become runs of zeros, and changes to the "next" color become ones.
Once you have a good representation as a series of bytes with lots of runs and a skewed distribution of byte values, e.g. lots of zeros and ones, apply zlib or gzip to take advantage of the resulting redundancy and skew.
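Assuming the colour remapping has already been done, the difference-then-deflate step could look like the sketch below (zlib is in the Python standard library; gzip wraps the same deflate stream in a small file header).

import zlib

def compress(pixels):                   # pixels: bytes, one combined colour byte per pixel
    diffs = bytearray(len(pixels))
    prev = 0
    for i, p in enumerate(pixels):
        diffs[i] = (p - prev) & 0xFF    # delta from the previous pixel, modulo 256
        prev = p
    return zlib.compress(bytes(diffs), 9)

def decompress(blob):
    diffs = zlib.decompress(blob)
    out = bytearray(len(diffs))
    prev = 0
    for i, d in enumerate(diffs):
        prev = (prev + d) & 0xFF
        out[i] = prev
    return bytes(out)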
