Combining 2 histograms into a new one in OpenCV (image)

I want to compare 2 images, and if they are similar then I keep both images. I compute an HSV histogram for each image and compare the distance between the histograms.
Now when the 3rd image is obtained I have to compare it to image 1 and image 2 (already stored as one similar-type image).
The problem with comparing like this is that an increase in images also increases the computational cost.
So what I want to do is: if 2 images are similar, I want to cluster their features as one, so that the features of similar images in the future will be compared to the clustered features.
OPTION 1
What if I merge the 2 histograms, will that be correct? I don't think so, but I am not sure.
OPTION 2
How about using the 2 images' feature distributions: I compute a new distribution from the 2 histograms as a combined distribution of both images. Does this sound correct?

Let me take it step by step:
Task: compare 2 images; if they are similar, keep both and merge their features into one representation. Feature space: HSV histogram.
OPTION 1 Is it correct to merge histograms?
Yes, since you use histograms and not signatures you can simply average the bins of the two histograms (add them and divide by two).
Excursion: if you want to merge additional images, you have to keep track of the number of images already merged, so you know how to weight them.
Example: histogram with one bin, three pictures
with p1=2, p2=6, p3=10
merge p1,p2 to m_12: (2+6)/2 = 4
merge m_12 and p3:
(weight of m_12 * value of m_12) + (weight of p3 * value of p3)
= (2/3 * 4) + (1/3 * 10)
= 6 [equal to (p1+p2+p3) / 3]
tl;dr yes you can merge them
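A minimal sketch of this running merge in Python with OpenCV; the function and class names here are just illustrative, not from the question:

import cv2
import numpy as np

def hsv_histogram(image_bgr, bins=(8, 8, 8)):
    # Normalized HSV histogram of a BGR image
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, bins, [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

class HistogramCluster:
    # Keeps a merged histogram plus the number of images it represents
    def __init__(self, hist):
        self.hist = hist.astype(np.float64)
        self.count = 1

    def merge(self, hist):
        # Weighted average: the stored histogram carries `count` images, the new one carries 1
        self.hist = (self.count * self.hist + hist) / (self.count + 1)
        self.count += 1

With a one-bin "histogram" and the values 2, 6, 10 above, merge() produces 4 and then (2*4 + 10) / 3 = 6, matching the worked example.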
OPTION 2 How about using the 2 images' feature distributions: I compute a new distribution from the 2 histograms as a combined distribution of both images. Does this sound correct?
Yes, although I don't immediately know how you would want to do it.
If you want to speed your program up, you should check out different distance measures (I can only recall SQFD and the Earth Mover's Distance for signatures at the moment). These often come with a fast but coarse lower bound that can be used to prune the search space.
An increase in images also increases computational cost.
Check out hierarchical clustering to find data structures that are suited for large numbers of images.
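As a sketch of what that can look like with histograms, here is an agglomerative clustering pass with SciPy; the Euclidean metric and the distance threshold are placeholders you would tune:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_histograms(histograms, max_distance=0.25):
    # histograms: array of shape (n_images, n_bins)
    # Condensed pairwise distances; swap in the metric you actually use
    distances = pdist(histograms, metric="euclidean")
    tree = linkage(distances, method="average")
    # Images closer than `max_distance` receive the same cluster label
    return fcluster(tree, t=max_distance, criterion="distance")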

Related

Structure and an algorithm for grouping a large set of images with pairwise similarity distances (C++)

I want to find similar images in a very large dataset (at least 50K+ images, potentially much more).
I already successfully implemented several "distance" functions (hashes compared with L2 or Hamming distance for example, image features with % of similarity, etc) - the result is always a "double" number.
What I want now is "grouping" (clustering?) images by similarity. I already achieved some pretty good results, but the groups are not perfect: some images that could be grouped with others are left aside, so my method is not that good.
I've been looking for a solution these last 3 days, but things are not so clear in my head, and maybe I overlooked a possible method?
I already have image pairs with distance : [image A (index, int), image B (index, int), distance (double)], and a list of duplicates (image X similar to images Y, Z, T, image Y similar to X, T, G, F --- etc).
My problem:
find a suitable and efficient algorithm to group images by similarity from the list of duplicates and the pairwise distances. For me the problem is not really spatial, because the image indexes A and B are NOT coordinates, but there is a 1-n relation between images. One method I found interesting is DBSCAN, or maybe hierarchical solutions would work?
use an efficient structure that is not too memory-hungry, so full matrices of doubles are excluded (50K x 50K, 100K x 100K, or worse 1M x 1M is not reasonable; the more images there are, the more memory the matrices eat, and the matrix would also be symmetric, because "image A similar to image B" is the same as "B similar to A", so half of it would be wasted space).
I'm coding with C++, using Qt6 for the interface and OpenCV 4.6 for some image functions, some hashing methods, etc.
Any idea/library/structure to propose? Thanks in advance.
EDIT - to better explain what I want to achieve
Images are the yellow circles.
Image 1 is similar to image 4 with a score=3 and to 5 with a score=2
etc
The problem is that image 4 is also similar to image 5, so image 4 is more similar to 1 than 5.
The example I put here is very simple because there are no more than 2 similar images for each image. With a bigger sample, image 4 could be similar to n images... And what about equal scores?
So is there an algorithm to create groups of images, so that no image is listed twice?
The answers to my own question:
about the structure itself: it is called an "undirected weighted graph" --- English is not my native language and I had a hard time finding the right words at first; once I had them, the solution was quickly found!
clustering: there are several algorithms associated with graphs, so I'll try some of them
Many thanks to #Similar_Pictures for taking the time to answer me, and for opening my eyes to the fact that the better the similarity algorithm(s), the less need there is for complicated clustering techniques...
I am actually testing how to combine several similarity techniques: each one has its flaws, but together some work best, using refined thresholds.
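For what it's worth, a minimal sketch of the grouping step I ended up with conceptually: treat each "close enough" pair as an edge and take connected components with union-find, so only the edge list is ever stored, never a full distance matrix (Python here for brevity; `pairs` and `threshold` are placeholders):

def group_similar(pairs, n_images, threshold):
    # pairs: list of (index_a, index_b, distance) tuples
    parent = list(range(n_images))          # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for a, b, dist in pairs:
        if dist <= threshold:               # treat close pairs as graph edges
            union(a, b)

    groups = {}
    for i in range(n_images):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())            # each image appears in exactly one group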

Starting point for image recognition?

I have a set of 274 color images (each one is 200x150 pixels). Each image is visually distinct. I would like to build an app which accepts an up/down-scaled version of one of the base set of images and determines the closest match.
I'm a senior software engineer but am totally new to image recognition. I'd really appreciate any recommendations as to where to start.
If you're comparing extremely similar images, it is in theory sufficient to calculate the Euclidean distance between the 2 images. The images must be the same size, so it is often necessary to rescale one of them (generally the larger image is scaled down). Note that aliasing issues can happen here, so pay some attention to your downsampling algorithm. There's also an issue if your images don't have the same aspect ratio.
However, this is almost never done in practice since it's extremely slow. For N images of size WxH and 3 color channels, it requires N x W x H x 3 comparisons, which quickly gets unworkable (consider that many users can have over 1000 images of size >1000x1000).
Generally we attempt to reduce the image to a smaller array that captures the image information much more briefly, called a visual descriptor. For example taking a 1024x1024x3 image and reducing it to a 128 length vector. This needs only be calculated once for the reference images, and then stored in an appropriate data structure. Then we can compare the descriptor for the query image against the descriptor for the reference images.
The cost of calculating the distance for our dataset of N images for a descriptor of length L is then N x L instead of the original N x W x H x 3
So the issue is to find efficient descriptors that are (a) cheap to compute and (b) capture the image accurately. This is still an active area of research, but I can suggest some:
Histograms are probably the simplest way to do this, although they do very poorly with any illumination change and incorporate only color information, no spatial information. Make sure you normalise your histogram before doing any comparison (see the sketch after this list).
Perceptual hashing works well with very similar images or slightly cropped images. See here
GIST descriptors are powerful, but more complex, see here
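As a concrete starting point for the histogram option, here is a minimal sketch with OpenCV; the file names, bin counts, and the chi-square metric are just illustrative choices, not the only ones that work:

import cv2

def descriptor(path, bins=(8, 8, 8)):
    # Normalized HSV histogram flattened into a vector
    hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, bins, [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

# Build descriptors once for the reference images (paths are placeholders)
reference = {p: descriptor(p) for p in ["img001.png", "img002.png"]}
query = descriptor("query.png")
# Smaller chi-square distance = better match
best = min(reference, key=lambda p: cv2.compareHist(reference[p], query, cv2.HISTCMP_CHISQR))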

Visual similarity search algorithm

I'm trying to build a utility like this http://labs.ideeinc.com/multicolr,
but I don't know which algorithm they are using. Does anyone know?
All they are doing is matching histograms.
So build a histogram for your images. Normalize the histograms by the size of the image. A histogram is a vector with as many elements as colors. You don't need 32, 24, or maybe even 16 bits of accuracy; that will just slow you down. For performance reasons, I would map the histograms down to 4, 8, and 10-12 bits.
Do a fuzzy least-distance comparison between all the 4-bit histograms and your sample colors.
Then take that set and do the 8 bit histogram compare.
Then maybe go up to a 10 or 12 bit histogram compare with the remaining set. This will be the highest performance search, because you are comparing the total set with a very small number of calculations, to find a small subset.
Then you work on the small subset with a higher number of calculations, etc.
The real big trick is to find the best algorithm for matching similar histograms.
Start with the distance calculation. In 3 dimensions I think it is:
SQRT((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)
I'm doing this from memory, so look it up to make sure.
For your purposes, you will have more than 3 dimensions, so you will have more terms. A 4-bit histogram would have 16 terms, an 8-bit one would have 256 terms, etc. Remember that this kind of math is slow, so don't actually do the SQRT part. If you normalize the size of your images small enough, say down to 10,000 pixels, then you know you will only ever have to do x^2 for values 0..10,000. Pre-calculate a lookup table of x^2 where x goes from 0..10,000. Then your calculations will go fast.
When you select a color from the palette, just make a histogram with that color = 10,000. When you select 2, make a histogram with color1=5000, color2=5000, etc.
In the end you will have to add in fudge factors to make the application match the real world, but you will find these with testing.
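A minimal sketch of the quantize-then-compare step in Python/NumPy; this quantizes per channel, so the bin counts come out as 2^(3*bits) rather than the 16/256 figures above, and the names are only illustrative:

import numpy as np

def quantized_histogram(rgb_pixels, bits_per_channel):
    # rgb_pixels: (n, 3) uint8 array; keep only the top bits of each channel
    q = (rgb_pixels >> (8 - bits_per_channel)).astype(np.int64)
    index = (q[:, 0] << (2 * bits_per_channel)) | (q[:, 1] << bits_per_channel) | q[:, 2]
    hist = np.bincount(index, minlength=1 << (3 * bits_per_channel))
    return hist / hist.sum()        # normalize by pixel count

def squared_distance(h1, h2):
    d = h1 - h2
    return float(np.dot(d, d))      # skip the sqrt, as suggested above

A coarse pass (1-2 bits per channel) can prune most of the collection before the finer histograms are compared.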
I'd suggest you do some kind of clustering of the colors present in the images in your database. I mean, for each image in your database:
collect the colors of each pixel in the image
perform clustering (let's say k-means clustering with 5 clusters) on the collected colors
store the clustered colors as representative descriptor of the image
When the user provides a set of one or more query colors, you do some kind of greedy matching, choosing the best match between the given colors and the color descriptor (the 5 representative colors) of each image in your database.
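A rough sketch of the per-image clustering step with OpenCV's k-means (k=5 as suggested; names and parameters are illustrative):

import cv2
import numpy as np

def dominant_colors(path, k=5):
    pixels = cv2.imread(path).reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(pixels, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    return centers      # k x 3 array of representative (BGR) colors for the image

The query colors can then be greedily matched against each image's centers by nearest distance.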
What is the size of your image collection? Depending on the size, search indexing can be a bigger problem than the algorithm itself.
Probably just creating a histogram of the colors used in the images, then doing a best fit to the user-selected colors.

How can I choose an image with higher contrast in PHP?

For a thumbnail-engine I would like to develop an algorithm that takes x random thumbnails (crop, no resize) from an image, analyzes them for contrast and chooses the one with the highest contrast. I'm working with PHP and Imagick but I would be glad for some general tips about how to compute contrast of imagery.
It seems that many things are easier than computing contrast, for example counting colors, computing luminosity, etc.
What are your experiences with the analysis of picture material?
I'd do it that way (pseudocode):
L[256] = {0,0,0...}
loop over each pixel:
    luminance = avg(R,G,B)
    increment L[luminance] by 1
for i = 0 to 255:
    if L[i] < C: L[i] = 0   // C = threshold of your choice
find the index of the first and last non-zero value of L[]
contrast = last - first
In looking for the image "with the highest contrast," you will need to be very careful in how you define contrast for the image. In the simplest way, contrast is the difference between the lowest intensity and the highest intensity in the image. That is not going to be very useful in your case.
I suggest you use a histogram approach to describe the contrast of a given image and then compare the properties of the histograms to determine the image with the highest contrast as you define it. You could use a variety of well known containers to represent the histogram in code, or construct a class to meet your specific needs. (I am not implying that you need to create a histogram in the form of a chart – just a statistical representation of the intensity values.) You could use the variance of each histogram directly as a measure of contrast, or use the standard deviation if that is easier to work with.
The key really lies in how you define the contrast of the image. In general, I would define a high contrast image as one with values present for all, or nearly all, the possible values. And I would further add that in this definition of a high contrast image, the intensity values of the image will tend to be distributed across the range of possible values in a uniform way.
Using this approach, a low contrast image would tend to have relatively few discrete intensity values and they would tend to be closely grouped together rather than uniformly distributed. (As a general rule, they will also tend to be grouped toward the center of the range.)
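A small sketch of both contrast measures discussed here (the range of the thresholded luminance histogram, and the standard deviation), in Python for brevity; the threshold value is a placeholder:

import numpy as np
from PIL import Image

def contrast_scores(path, count_threshold=10):
    lum = np.asarray(Image.open(path).convert("L")).ravel()
    hist = np.bincount(lum, minlength=256)
    hist[hist < count_threshold] = 0            # drop sparsely populated bins
    occupied = np.nonzero(hist)[0]
    spread = int(occupied[-1] - occupied[0]) if occupied.size else 0
    return spread, float(lum.std())             # range-style and std-dev-style contrast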

Image comparison - fast algorithm

I'm looking to create a base table of images and then compare any new images against that to determine if the new image is an exact (or close) duplicate of the base.
For example: if you want to reduce storage of the same image 100's of times, you could store one copy of it and provide reference links to it. When a new image is entered you want to compare to an existing image to make sure it's not a duplicate ... ideas?
One idea of mine was to reduce to a small thumbnail and then randomly pick 100 pixel locations and compare.
Below are three approaches to solving this problem (and there are many others).
The first is a standard approach in computer vision, keypoint matching. This may require some background knowledge to implement, and can be slow.
The second method uses only elementary image processing, and is potentially faster than the first approach, and is straightforward to implement. However, what it gains in understandability, it lacks in robustness -- matching fails on scaled, rotated, or discolored images.
The third method is both fast and robust, but is potentially the hardest to implement.
Keypoint Matching
Better than picking 100 random points is picking 100 important points. Certain parts of an image have more information than others (particularly at edges and corners), and these are the ones you'll want to use for smart image matching. Google "keypoint extraction" and "keypoint matching" and you'll find quite a few academic papers on the subject. These days, SIFT keypoints are arguably the most popular, since they can match images under different scales, rotations, and lighting. Some SIFT implementations can be found here.
One downside to keypoint matching is the running time of a naive implementation: O(n^2m), where n is the number of keypoints in each image, and m is the number of images in the database. Some clever algorithms might find the closest match faster, like quadtrees or binary space partitioning.
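For a flavor of what keypoint matching looks like in code, here is a minimal sketch using OpenCV's ORB detector (a fast, free alternative to SIFT; the parameters and the scoring are illustrative, not a definitive implementation):

import cv2

def keypoint_match_score(path_a, path_b, max_matches=100):
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=500)
    _, desc_a = orb.detectAndCompute(img_a, None)
    _, desc_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)
    best = matches[:max_matches]
    # Lower mean descriptor distance over the best matches = more similar images
    return sum(m.distance for m in best) / max(len(best), 1)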
Alternative solution: Histogram method
Another less robust but potentially faster solution is to build feature histograms for each image, and choose the image with the histogram closest to the input image's histogram. I implemented this as an undergrad, and we used 3 color histograms (red, green, and blue), and two texture histograms, direction and scale. I'll give the details below, but I should note that this only worked well for matching images VERY similar to the database images. Re-scaled, rotated, or discolored images can fail with this method, but small changes like cropping won't break the algorithm.
Computing the color histograms is straightforward -- just pick the range for your histogram buckets, and for each range, tally the number of pixels with a color in that range. For example, consider the "green" histogram, and suppose we choose 4 buckets for our histogram: 0-63, 64-127, 128-191, and 192-255. Then for each pixel, we look at the green value, and add a tally to the appropriate bucket. When we're done tallying, we divide each bucket total by the number of pixels in the entire image to get a normalized histogram for the green channel.
For the texture direction histogram, we started by performing edge detection on the image. Each edge point has a normal vector pointing in the direction perpendicular to the edge. We quantized the normal vector's angle into one of 6 buckets between 0 and PI (since edges have 180-degree symmetry, we converted angles between -PI and 0 to be between 0 and PI). After tallying up the number of edge points in each direction, we have an un-normalized histogram representing texture direction, which we normalized by dividing each bucket by the total number of edge points in the image.
To compute the texture scale histogram, for each edge point, we measured the distance to the next-closest edge point with the same direction. For example, if edge point A has a direction of 45 degrees, the algorithm walks in that direction until it finds another edge point with a direction of 45 degrees (or within a reasonable deviation). After computing this distance for each edge point, we dump those values into a histogram and normalize it by dividing by the total number of edge points.
Now you have 5 histograms for each image. To compare two images, you take the absolute value of the difference between each histogram bucket, and then sum these values. For example, to compare images A and B, we would compute
|A.green_histogram.bucket_1 - B.green_histogram.bucket_1|
for each bucket in the green histogram, and repeat for the other histograms, and then sum up all the results. The smaller the result, the better the match. Repeat for all images in the database, and the match with the smallest result wins. You'd probably want to have a threshold, above which the algorithm concludes that no match was found.
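As an illustration, here is a sketch of the texture-direction histogram and the L1 comparison described above; Sobel gradients stand in for the edge detector, and the threshold and bin count are illustrative:

import cv2
import numpy as np

def direction_histogram(gray, bins=6, edge_threshold=100):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edges = np.hypot(gx, gy) > edge_threshold           # crude edge mask
    angles = np.arctan2(gy[edges], gx[edges]) % np.pi   # fold into [0, pi)
    hist, _ = np.histogram(angles, bins=bins, range=(0, np.pi))
    return hist / max(hist.sum(), 1)                    # normalize by edge count

def l1_distance(h1, h2):
    return float(np.abs(np.asarray(h1) - np.asarray(h2)).sum())   # smaller = better match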
Third Choice - Keypoints + Decision Trees
A third approach that is probably much faster than the other two is using semantic texton forests (PDF). This involves extracting simple keypoints and using a collection of decision trees to classify the image. This is faster than simple SIFT keypoint matching, because it avoids the costly matching process, and keypoints are much simpler than SIFT, so keypoint extraction is much faster. However, it preserves the SIFT method's invariance to rotation, scale, and lighting, an important feature that the histogram method lacked.
Update:
My mistake -- the Semantic Texton Forests paper isn't specifically about image matching, but rather region labeling. The original paper that does matching is this one: Keypoint Recognition using Randomized Trees. Also, the papers below continue to develop the ideas and represent the state of the art (c. 2010):
Fast Keypoint Recognition using Random Ferns - faster and more scalable than Lepetit 06
BRIEF: Binary Robust Independent Elementary Features - less robust but very fast -- I think the goal here is real-time matching on smart phones and other handhelds
The best method I know of is to use a Perceptual Hash. There appears to be a good open source implementation of such a hash available at:
http://phash.org/
The main idea is that each image is reduced down to a small hash code or 'fingerprint' by identifying salient features in the original image file and hashing a compact representation of those features (rather than hashing the image data directly). This means that the false positives rate is much reduced over a simplistic approach such as reducing images down to a tiny thumbprint sized image and comparing thumbprints.
phash offers several types of hash and can be used for images, audio or video.
This post was the starting point of my solution, lots of good ideas here, so I thought I would share my results. The main insight is that I've found a way to get around the slowness of keypoint-based image matching by exploiting the speed of phash.
For the general solution, it's best to employ several strategies. Each algorithm is best suited for certain types of image transformations and you can take advantage of that.
At the top, the fastest algorithms; at the bottom the slowest (though more accurate). You might skip the slow ones if a good match is found at the faster level.
file-hash based (md5,sha1,etc) for exact duplicates
perceptual hashing (phash) for rescaled images
feature-based (SIFT) for modified images
I am having very good results with phash. The accuracy is good for rescaled images. It is not good for (perceptually) modified images (cropped, rotated, mirrored, etc). To deal with the hashing speed we must employ a disk cache/database to maintain the hashes for the haystack.
The really nice thing about phash is that once you build your hash database (which for me is about 1000 images/sec), the searches can be very, very fast, in particular when you can hold the entire hash database in memory. This is fairly practical since a hash is only 8 bytes.
For example, if you have 1 million images it would require an array of 1 million 64-bit hash values (8 MB). On some CPUs this fits in the L2/L3 cache! In practical usage I have seen a corei7 compare at over 1 Giga-hamm/sec, it is only a question of memory bandwidth to the CPU. A 1 Billion-image database is practical on a 64-bit CPU (8GB RAM needed) and searches will not exceed 1 second!
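A minimal sketch of that brute-force hamming scan over packed 64-bit hashes with NumPy; the threshold is a placeholder:

import numpy as np

def hamming_search(hashes, query, max_distance=10):
    # hashes: uint64 array of length N; query: a single uint64 hash
    xored = np.bitwise_xor(hashes, np.uint64(query))
    bits = np.unpackbits(xored.view(np.uint8))          # 64 bits per hash
    distances = bits.reshape(-1, 64).sum(axis=1)        # popcount per hash
    return np.nonzero(distances <= max_distance)[0]     # indices of candidate matches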
For modified/cropped images it would seem a transform-invariant feature/keypoint detector like SIFT is the way to go. SIFT will produce good keypoints that will detect crop/rotate/mirror etc. However the descriptor comparison is very slow compared to the hamming distance used by phash. This is a major limitation. There are a lot of comparisons to do, since there are at most IxJxK descriptor comparisons to look up one image (I = number of haystack images, J = target keypoints per haystack image, K = target keypoints per needle image).
To get around the speed issue, I tried using phash around each found keypoint, using the feature size/radius to determine the sub-rectangle. The trick to making this work well, is to grow/shrink the radius to generate different sub-rect levels (on the needle image). Typically the first level (unscaled) will match however often it takes a few more. I'm not 100% sure why this works, but I can imagine it enables features that are too small for phash to work (phash scales images down to 32x32).
Another issue is that SIFT will not distribute the keypoints optimally. If there is a section of the image with a lot of edges the keypoints will cluster there and you won't get any in another area. I am using the GridAdaptedFeatureDetector in OpenCV to improve the distribution. Not sure what grid size is best, I am using a small grid (1x3 or 3x1 depending on image orientation).
You probably want to scale all the haystack images (and needle) to a smaller size prior to feature detection (I use 210px along maximum dimension). This will reduce noise in the image (always a problem for computer vision algorithms), also will focus detector on more prominent features.
For images of people, you might try face detection and use it to determine the image size to scale to and the grid size (for example largest face scaled to be 100px). The feature detector accounts for multiple scale levels (using pyramids) but there is a limitation to how many levels it will use (this is tunable of course).
The keypoint detector is probably working best when it returns less than the number of features you wanted. For example, if you ask for 400 and get 300 back, that's good. If you get 400 back every time, probably some good features had to be left out.
The needle image can have fewer keypoints than the haystack images and still get good results. Adding more doesn't necessarily get you huge gains, for example with J=400 and K=40 my hit rate is about 92%. With J=400 and K=400 the hit rate only goes up to 96%.
We can take advantage of the extreme speed of the hamming function to solve scaling, rotation, mirroring etc. A multiple-pass technique can be used. On each iteration, transform the sub-rectangles, re-hash, and run the search function again.
My company has about 24 million images coming in from manufacturers every month. I was looking for a fast solution to ensure that the images we upload to our catalog are new images.
I want to say that I have searched the internet far and wide to attempt to find an ideal solution. I even developed my own edge detection algorithm.
I have evaluated speed and accuracy of multiple models.
My images, which have white backgrounds, work extremely well with phashing. Like redcalx said, I recommend phash or ahash. DO NOT use MD5 hashing or any other cryptographic hashes, unless you want only EXACT image matches. Any resizing or manipulation that occurs between images will yield a different hash.
For phash/ahash, check this out: imagehash
I wanted to extend redcalx's post by posting my code and my accuracy.
What I do:
from PIL import Image
from PIL import ImageFilter
import imagehash

# Placeholder paths - replace with your own images
img1 = Image.open(r"C:\yourlocation")
img2 = Image.open(r"C:\yourlocation")

# Resize the larger image down so both have the same dimensions
if img1.width < img2.width:
    img2 = img2.resize((img1.width, img1.height))
else:
    img1 = img1.resize((img2.width, img2.height))

# A light blur reduces the effect of noise and compression artifacts
img1 = img1.filter(ImageFilter.BoxBlur(radius=3))
img2 = img2.filter(ImageFilter.BoxBlur(radius=3))

# Subtracting two hashes gives their hamming distance (lower = more similar)
phashvalue = imagehash.phash(img1) - imagehash.phash(img2)
ahashvalue = imagehash.average_hash(img1) - imagehash.average_hash(img2)
totalsimilarity = phashvalue + ahashvalue
Here are some of my results:
item1 item2 totalsimilarity
desk1 desk1 3
desk1 phone1 22
chair1 desk1 17
phone1 chair1 34
Hope this helps!
As cartman pointed out, you can use any kind of hash value for finding exact duplicates.
One starting point for finding close images could be here. This is a tool used by CG companies to check if revamped images are still showing essentially the same scene.
I have an idea which can work, and it is most likely to be very fast.
You can sub-sample an image to, say, 80x60 resolution or comparable,
and convert it to grey scale (after subsampling it will be faster).
Process both images you want to compare.
Then run the normalised sum of squared differences between the two images (the query image and each one from the db),
or even better Normalised Cross Correlation, which gives a response closer to 1 if
both images are similar.
Then if the images are similar you can proceed to more sophisticated techniques
to verify that they are the same images.
Obviously this algorithm is linear in the number of images in your database,
but even so it is going to be very fast, up to 10,000 images per second on modern hardware.
If you need invariance to rotation, then a dominant gradient can be computed
for this small image, and then the whole coordinate system can be rotated to a canonical
orientation; this, though, will be slower. And no, there is no invariance to scale here.
If you want something more general, or to use big databases (millions of images), then
you need to look into image retrieval theory (loads of papers have appeared in the last 5 years).
There are some pointers in other answers. But it might be overkill, and the suggested histogram approach will do the job. Though I would think a combination of many different
fast approaches would be even better.
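A minimal sketch of the downscale-and-correlate idea with OpenCV (TM_CCOEFF_NORMED is the zero-mean normalized cross-correlation; the size is a placeholder):

import cv2

def ncc_similarity(path_a, path_b, size=(80, 60)):
    a = cv2.resize(cv2.imread(path_a, cv2.IMREAD_GRAYSCALE), size)
    b = cv2.resize(cv2.imread(path_b, cv2.IMREAD_GRAYSCALE), size)
    # With equal-sized inputs matchTemplate returns a single correlation value
    return float(cv2.matchTemplate(a, b, cv2.TM_CCOEFF_NORMED)[0][0])   # ~1.0 = very similar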
I believe that dropping the size of the image down to an almost icon size, say 48x48, then converting to greyscale, then taking the difference between pixels, or Delta, should work well. Because we're comparing the change in pixel color, rather than the actual pixel color, it won't matter if the image is slightly lighter or darker. Large changes will matter since pixels getting too light/dark will be lost. You can apply this across one row, or as many as you like to increase the accuracy. At most you'd have 47x47=2,209 subtractions to make in order to form a comparable Key.
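A rough sketch of that delta key in Python (Pillow/NumPy; the 48x48 size follows the suggestion above, the rest is my interpretation):

import numpy as np
from PIL import Image

def delta_key(path, size=48):
    gray = np.asarray(Image.open(path).convert("L").resize((size, size)), dtype=np.int16)
    return np.diff(gray, axis=1)        # per-row pixel differences, robust to brightness shifts

def key_distance(k1, k2):
    return int(np.abs(k1 - k2).sum())   # smaller = more similar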
Picking 100 random points could mean that similar (or occasionally even dissimilar) images would be marked as the same, which I assume is not what you want. MD5 hashes wouldn't work if the images were different formats (png, jpeg, etc), had different sizes, or had different metadata. Reducing all images to a smaller size is a good bet; doing a pixel-for-pixel comparison shouldn't take too long as long as you're using a good image library / fast language, and the size is small enough.
You could try making them tiny, then if they are the same perform another comparison on a larger size - could be a good combination of speed and accuracy...
What we loosely refer to as duplicates can be difficult for algorithms to discern.
Your duplicates can be either:
Exact Duplicates
Near-exact Duplicates (minor edits of the image, etc.)
Perceptual Duplicates (same content, but different view, camera, etc.)
Nos. 1 & 2 are easier to solve. No. 3 is very subjective and still a research topic.
I can offer a solution for Nos. 1 & 2.
Both solutions use the excellent imagehash library: https://github.com/JohannesBuchner/imagehash
Exact duplicates
Exact duplicates can be found using a perceptual hashing measure.
The phash library is quite good at this. I routinely use it to clean
training data.
Usage (from github site) is as simple as:
from PIL import Image
import imagehash

# image_fns : list of training image files
img_hashes = {}

for image_fn in sorted(image_fns):
    hash = imagehash.average_hash(Image.open(image_fn))
    if hash in img_hashes:
        print('{} duplicate of {}'.format(image_fn, img_hashes[hash]))
    else:
        img_hashes[hash] = image_fn
Near-Exact Duplicates
In this case you will have to set a threshold and compare the hash values for their distance from each
other. This has to be done by trial-and-error for your image content.
from PIL import Image
import imagehash

# image_fns : list of training image files
img_hashes = {}
epsilon = 50

# Note: zipping the list with its reverse only checks mirrored pairs;
# itertools.combinations(image_fns, 2) would check every pair.
for image_fn1, image_fn2 in zip(image_fns, image_fns[::-1]):
    if image_fn1 == image_fn2:
        continue
    hash1 = imagehash.average_hash(Image.open(image_fn1))
    hash2 = imagehash.average_hash(Image.open(image_fn2))
    if hash1 - hash2 < epsilon:
        print('{} is near duplicate of {}'.format(image_fn1, image_fn2))
If you have a large number of images, look into a Bloom filter, which uses multiple hashes for a probabilistic but efficient result. If the number of images is not huge, then a cryptographic hash like md5 should be sufficient.
I think it's worth adding to this a phash solution I built that we've been using for a while now: Image::PHash. It is a Perl module, but the main parts are in C. It is several times faster than phash.org and has a few extra features for DCT-based phashes.
We had dozens of millions of images already indexed on a MySQL database, so I wanted something fast and also a way to use MySQL indices (which don't work with hamming distance), which led me to use "reduced" hashes for direct matches, the module doc discusses this.
It's quite simple to use:
use Image::PHash;
my $iph1 = Image::PHash->new('file1.jpg');
my $p1 = $iph1->pHash();
my $iph2 = Image::PHash->new('file2.jpg');
my $p2 = $iph2->pHash();
my $diff = Image::PHash::diff($p1, $p2);
I made a very simple solution in PHP for comparing images several years ago. It calculates a simple hash for each image, and then finds the difference. It works very nicely for cropped, or cropped-and-translated, versions of the same image.
First I resize the image to a small size, like 24x24 or 36x36. Then I take each column of pixels and find average R,G,B values for this column.
After each column has its own three numbers, I do two passes: the first on the odd columns and the second on the even ones. The first pass sums all the processed columns and then divides by their number: ([1] + [3] + [5] + ... + [N-1]) / (N/2). The second pass works in another manner: ([2] - [4] + [6] - [8] ...) / (N/2).
So now I have two numbers. As I found out experimenting, the first one is a major one: if it's far from the values of another image, they are not similar from the human point of view at all.
So, the first one represents the average brightness of the image (again, you can pay most attention to the green channel, then the red one, etc., but the default R->G->B order works just fine). The second number can be compared if the first two are very close, and it in fact represents the overall contrast of the image: if we have some black/white pattern or any contrasty scene (lighted buildings in a city at night, for example), and if we are lucky, we will get huge numbers here when our positive members of the sum are mostly bright and the negative ones are mostly dark, or vice versa. As I want my values to always be positive, I divide by 2 and shift by 127 here.
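A rough reconstruction of that two-number signature in Python (the original PHP is lost, so the exact constants and the per-channel weighting here are my interpretation of the description above):

import numpy as np
from PIL import Image

def signature(path, size=24):
    img = np.asarray(Image.open(path).convert("RGB").resize((size, size)), dtype=np.float64)
    columns = img.mean(axis=(0, 2))                 # average brightness per column
    odd, even = columns[0::2], columns[1::2]
    first = odd.mean()                              # overall brightness term
    signs = np.where(np.arange(even.size) % 2 == 0, 1.0, -1.0)
    second = (even * signs).mean() / 2 + 127        # contrast-like term, shifted positive
    return first, second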
I wrote the code in PHP in 2017, and it seems I have lost it. But I still have the screenshots:
The same image:
Black & White version:
Cropped version:
Another image, translated version:
Same color gamut as 4th, but another scene:
I tuned the difference thresholds so that the results are really nice. But as you can see, this simple algorithm cannot do anything good with simple scene translations.
On a side note, a modification could be written to make cropped copies of each of the two images at 75-80 percent (4 at the corners, or 8 at the corners and the middles of the edges), and then compare the cropped variants with the other whole image in just the same way; if one of them gets a significantly better similarity score, use its value instead of the default one.
