I want to print a QR code with its middle portion blacked out and then print variable data on the black square (which would be none of the QR code's business).
How can I achieve that? One idea is that, while generating the QR code, I define my own timing pattern or some fixed configuration that reserves this centered black square.
I'll be using my own app to decode it, so I would know the configuration while decoding as well.
The concept of a 2D barcode with a "free" center area is certainly plausible: Snapcodes are one example, and Denso Wave (who invented QR) has a proprietary variant called Frame QR.
That said, what you create should not be called a QR code, and decoders will not be compatible. The ones you see in the wild with artistic centers are using error correction to achieve the effect.
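If you go the error-correction route instead, here is a minimal sketch using the Python `qrcode` and Pillow libraries (both my choice for illustration; the question doesn't name a stack, and the payload and sizes are placeholders):

```python
import qrcode
from PIL import ImageDraw

# Level H error correction can restore roughly 30% of the codewords,
# so a modest covered center can still leave the code decodable.
qr = qrcode.QRCode(
    error_correction=qrcode.constants.ERROR_CORRECT_H,
    box_size=10,
    border=4,
)
qr.add_data("https://example.com/payload")  # placeholder payload
qr.make(fit=True)
img = qr.make_image(fill_color="black", back_color="white").convert("RGB")

# Black out a centered square; at 25% of the side length it covers
# only about 6% of the area, well within what level H can repair.
w, h = img.size
side = int(min(w, h) * 0.25)
box = ((w - side) // 2, (h - side) // 2, (w + side) // 2, (h + side) // 2)
ImageDraw.Draw(img).rectangle(box, fill="black")
img.save("qr_with_center.png")  # print your variable data onto the square
```

Standard readers will still decode this, since the covered modules are recovered by error correction; the variable data printed on the square is invisible to the decoder, exactly as you want.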
Can someone tell me how (or the name of it, so that I could look it up) I can implement this interpolation effect? https://www.youtube.com/watch?v=36lE9tV9vm0&t=3010s&frags=pl%2Cwn
I tried to use r = r+dr, g = g+dg and b = b+db for the RGB values in each iteration, but it looks way too simple compared to the effect from the video.
"Can someone tell me how I can implement this interpolation effect?
(or the name of it, so that I could look it up)..."
It's not actually a named interpolation effect. It appears to interpolate, but really it's just real-time updated variations of some fictional facial "features" (the hair, eyes, nose, etc. are synthesized pixels taking hints from a library/database of possible matching feature types).
For this technique they used neural networks to do a process similar to DFT image reconstruction. You'll be modifying the image data in the frequency domain (with u,v), not the spatial domain (using x,y).
You can read about it in this PDF: https://research.nvidia.com/sites/default/files/pubs/2017-10_Progressive-Growing-of/karras2018iclr-paper.pdf
The (Python) source code:
https://github.com/tkarras/progressive_growing_of_gans
For ideas, on YouTube you can look up:
DFT image reconstruction (there's a good example with a b/w Nicolas Cage photo reconstructed in stages; loud music warning).
Image synthesis with neural networks (one clip had alternative shoe and handbag designs (item photos) being "synthesized" by a neural network after it analyzed features from other existing catalogue photos as "inspiration").
Image enhancement / super-resolution using neural networks. This method is closest to answering your question. One example has a very low-res, blurry, pixelated b/w image where you cannot tell if it's a boy or a girl; during a test, the network synthesizes various higher-quality face images that it thinks are the correct match for the input.
Once you understand what they're achieving and how, you could think of shortcuts to get a similar effect without needing networks, e.g. only using regular pixel-editing functions.
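As a taste of one such shortcut, here's a minimal NumPy sketch of staged DFT reconstruction: keep only the lowest frequencies and inverse-transform, widening the kept band each stage. The `keep_fraction` parameter is my own placeholder, not anything from the paper:

```python
import numpy as np

def reconstruct_lowpass(img, keep_fraction):
    """Reconstruct a grayscale image from only its lowest frequencies,
    as in the staged DFT-reconstruction demos."""
    f = np.fft.fftshift(np.fft.fft2(img))   # move DC to the center
    h, w = f.shape
    mask = np.zeros_like(f)
    kh, kw = int(h * keep_fraction / 2), int(w * keep_fraction / 2)
    mask[h//2 - kh : h//2 + kh, w//2 - kw : w//2 + kw] = 1
    return np.abs(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# Increasing keep_fraction stages the reconstruction from blurry to sharp:
# for frac in (0.02, 0.05, 0.1, 0.3, 1.0): show(reconstruct_lowpass(img, frac))
```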
I found it in another video: it's called "latent space interpolation", and it has to be applied to the compressed images. If I have image A and the next image is image B, I first have to encode A and B, apply the interpolation to the encoded data, and finally decode the resulting image.
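A minimal sketch of that pipeline, where `encoder` and `decoder` are placeholders for whatever model you use (e.g. the two halves of an autoencoder, or the mapping into a GAN's latent space), not a specific library API:

```python
import numpy as np

def interpolate_latent(encoder, decoder, img_a, img_b, steps=30):
    """Linearly interpolate between two images in latent space."""
    z_a = encoder(img_a)   # compress image A to its latent vector
    z_b = encoder(img_b)   # compress image B to its latent vector
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z_a + t * z_b   # lerp in latent space
        frames.append(decoder(z))       # decode back to pixel space
    return frames
```

Interpolating in the latent space rather than on raw RGB values is what makes the intermediate frames look like plausible faces instead of cross-fades.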
As of today, I found out that this kind of interpolation effect can be implemented easily for 3D image data, provided the data is normalized and centered at the 3D origin, for example with each face image's data lying inside a unit sphere around the origin. Having the data of two images stored this way, the interpolation can be calculated by taking the differences of rays that pass through the origin and through each area of the sphere at some desired resolution.
I'd like to program detection of a rectangular sheet of paper that doesn't absolutely need to be perfectly straight on each side, as I may take a picture of it "in the air", which means the sides of the paper might get distorted a bit.
The app CamScanner (iOS and Android) does this very well, and I'm wondering how it might be implemented. First of all I thought of doing:
smoothing / noise reduction
Edge detection (Canny etc.) OR thresholding (global / adaptive)
Hough transformation
Detecting lines (only vertical / horizontal allowed)
Calculating the intersection points of the 4 found lines
But this gives me a lot of problems with different types of images.
And I'm wondering if there's maybe a better approach that directly detects a rectangular-like shape in an image, and if so, whether CamScanner implements it like this as well.
Here are some images taken in CamScanner.
These ones are detected quite nicely, even though in (a) one side is distorted (the corner still gets shown in the overlay but doesn't really fit the corner of the white paper), and in (b) the background is pretty close to the actual paper, but it still gets recognized correctly:
It even gets the rotated pictures correctly:
And when I insert some testing errors, it fails, but it at least detects some of the contour and always tries to detect it as a rectangle:
And here it fails completely:
I suppose that in the last three examples, if it used a Hough transformation, it could have detected at least two of the four sides of the rectangle.
Any ideas and tips?
Thanks a lot in advance
The OpenCV framework may help with your problem. Also, you can look at this document for the Android platform.
The full source code is available on Github.
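For reference, the classic contour-based document-scanner approach (largest 4-point contour via `findContours` and `approxPolyDP`) looks roughly like this in Python with OpenCV. Treat it as a sketch, not a reproduction of whatever CamScanner actually does; the Canny thresholds and the 0.02 epsilon are values to tune:

```python
import cv2

def find_paper_quad(image_bgr):
    """Find the largest roughly-4-sided contour, the usual alternative
    to fitting individual Hough lines and intersecting them."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)          # smoothing / noise reduction
    edges = cv2.Canny(gray, 50, 150)                  # edge detection
    # OpenCV 4.x return signature; 3.x returns an extra image first.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        peri = cv2.arcLength(c, True)
        # A loose epsilon tolerates slightly bent sides of the paper.
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)
        if len(approx) == 4:
            return approx.reshape(4, 2)   # the four corners, roughly
    return None
```

Because `approxPolyDP` simplifies the contour before counting vertices, mildly curved or distorted edges still collapse to four corners, which is exactly the tolerance the question asks for.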
Without implementing OpenCV or calling a QR code recognition API, is there any quick and reliable algorithm to determine the existence of a QR code in an image?
The intention of this question is to improve the user experience of scanning QR codes. When QR code recognition fails, the program needs to know whether there really is a QR code (so it can scan and try to recognize it again) or no QR code at all (so it can call other procedures).
To echo some responses: the detection program doesn't need to be 100% accurate, just to return an accurate result with reasonable probability. If we could use OpenCV here, a Fourier transform would be easy to implement to detect whether there is obvious high-frequency content in an image, which is a good sign of the presence of a QR code. But integrating OpenCV would significantly increase the size of my program, which I want to avoid.
It's great that you want to provide feedback to a user. Providing graphics that indicate the user is "getting warmer" in finding the QR code can make the process of finding and reading a code quicker and smoother.
It looks like you already have your answer, but to provide a more robust solution and/or have options, you might try one or more of the following:
Use N iterations of a morphological close on the dark pixels; the resulting squarish checkerboard pattern should more closely resemble a filled square (see the sketch after this list). This was part of a detection method I used to determine whether a DataMatrix (a similar 2D code) was present, whether it was readable or not. Whether this works will depend greatly on your background.
Before applying an FFT, consider finding the affine transform to reduce perspective distortion. Analyzing FFT data can be a pain if the frequencies have a bit of spread because of foreshortening.
You could get some decent results using texture measures such as Local Binary Patterns (LBPs) or older techniques such as Laws' texture measures. You might even get lucky and be able to detect slight differences in the histogram of texture measures between a 2D code and a checkerboard pattern.
In regions of checkerboard-like patterns, look for the 3 guide features at the corners of the QR code. You could try SIFT/SURF-like methods, or perhaps implement a simpler match method by using a limited number of correlation templates that are tested in scale space.
Speaking of scale space: generate an image pyramid to save yourself the trouble of searching for squares in full-resolution images. You could try edge-preserving or non-edge-preserving methods to generate the smaller images in the pyramid, or perhaps a combination of both.
If you have code for fast kernel processing, you might try a corner detection method to reduce the amount of data you process to detect checkerboard-like patterns.
Look for clear bimodal distributions of grayscale values in squarish regions. Printed 2D codes tend to have stark contrast, even though codes on paper remain quite readable at low contrast.
Rather than look for bimodal distribution of grayscale values, you could look for regions where gradient magnitudes are very consistent, nearly unimodal.
If you know the min/max area limits of a readable QR code, you could probabilistically sample the image for patches that match one or more of the above criteria: one mode of gradient magnitudes, nearly evenly spaced corner points, etc. If a patch does look promising, then jump to another random position, with the caveat that the new patch was not previously found unpromising.
If you have the memory for an image pyramid, then working with reduced resolution images could be advantageous since you could try a number of tests fairly quickly.
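As a sketch of the first suggestion above (morphological closing plus a squareness test) in Python with OpenCV; all thresholds here are guesses to tune against your backgrounds:

```python
import cv2

def looks_like_2d_code(gray, iterations=3):
    """Close the dark modules together, then test whether any resulting
    region is squarish and nearly solid, as a 2D code would be."""
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel,
                              iterations=iterations)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 40 * 40:
            continue                              # too small to be readable
        aspect = w / float(h)
        fill = cv2.contourArea(c) / float(w * h)  # how solid the blob is
        if 0.8 < aspect < 1.25 and fill > 0.9:    # squarish and nearly filled
            return True
    return False
```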
As far as user interaction is concerned, you might also update the "this might be a QR code" graphic multiple times during pre-processing, and indicate degrees of confidence with progressively stronger/greener graphics (or whatever color is appropriate for the local culture). For example, if a patch of texture has a roughly 60% chance of being a QR code, you might display a thin yellowish-green rectangle with a dashed border. For an 80% - 90% likelihood you might display a solid rectangle of a more saturated green color. If you can update the graphics about every 100 - 200 milliseconds then a user will have some idea that some action such as moving the smart phone is helping or hurting.
1) Convert the image to grayscale.
2) Divide the image into n x m cells, say 3 x 3. This is intended to guarantee that at least one cell will be fully covered by the QR code, if there is one.
3) Apply a 2D Fourier transform to each cell. If any cell shows significantly large values in the high-frequency region along both the X and Y axes, there is a high likelihood that a QR code exists.
I am addressing a probability issue rather than 100% accurate detection. Under this algorithm, a chessboard will be detected as a QR code as well.
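A rough sketch of that cell-based FFT test in NumPy; the `cutoff` and `ratio` thresholds are placeholders to tune per camera:

```python
import numpy as np

def cell_has_high_freq(cell, cutoff=0.25, ratio=0.1):
    """Step 3: does a significant share of the cell's spectral energy
    sit in the high-frequency band along both axes?"""
    f = np.abs(np.fft.fftshift(np.fft.fft2(cell))) ** 2   # power spectrum
    h, w = f.shape
    cy, cx = h // 2, w // 2
    ky, kx = int(h * cutoff), int(w * cutoff)
    total = f.sum() + 1e-9
    high_v = f[:cy - ky, :].sum() + f[cy + ky:, :].sum()  # far from DC in v
    high_u = f[:, :cx - kx].sum() + f[:, cx + kx:].sum()  # far from DC in u
    return high_v / total > ratio and high_u / total > ratio

def maybe_qr(gray, n=3, m=3):
    """Steps 1-2: test each of the n x m cells of a grayscale image."""
    h, w = gray.shape
    for i in range(n):
        for j in range(m):
            cell = gray[i*h//n:(i+1)*h//n, j*w//m:(j+1)*w//m]
            if cell_has_high_freq(cell.astype(float)):
                return True
    return False
```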
I am working on a Kinect game where I am supposed to "dress" the player in a kind of garment.
As the player should always stand directly in front of the device, I am using a simple JPG image for this "dressing".
My problem starts when the user, while still standing in a frontal position, bends their knees or leans right or left. I want to apply an appropriate transform to this "dress" image so that it will still cover the player's body more or less correctly.
From the Kinect sensors I can get current information about the positions of the following player's body parts:
Is there any library (C++, C#, Java) or a known algorithm that can make such transformation?
Complex task, but possible.
I would split the 'dress' into arms, torso/upper body, and lower body. You could then use (from memory) AffineTransform in Java, though most languages have algorithms for matrix transforms on images.
The reason I suggest splitting the image is that a single transform would distort the top part of the image; splitting allows you to do some rotation (for when people lean) and to warp the arms as they move as well.
EDIT:
I would also NOT transform on each frame (CPU-intensive); I would precompute a lookup table over the possible angles and look up the pre-transformed image.
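A minimal sketch of that idea in Python with OpenCV, with the angle quantization standing in for the lookup table. The function names, `ANGLE_STEP`, and the joint-anchored rotation are my own placeholders, not a specific Kinect SDK API:

```python
import cv2

ANGLE_STEP = 5  # degrees; assumption, tune against your CPU budget

def warp_segment(segment_bgra, joint_xy, angle_deg):
    """Rotate one piece of the dress image (arm, torso, ...) about its
    joint anchor, snapping the angle so results can be cached."""
    snapped = round(angle_deg / ANGLE_STEP) * ANGLE_STEP
    m = cv2.getRotationMatrix2D(joint_xy, snapped, 1.0)
    h, w = segment_bgra.shape[:2]
    return cv2.warpAffine(segment_bgra, m, (w, h),
                          flags=cv2.INTER_LINEAR,
                          borderMode=cv2.BORDER_CONSTANT,
                          borderValue=(0, 0, 0, 0))  # keep transparency

# cache = {("left_arm", snapped_angle): warped, ...} built once at startup,
# then each frame only composites cached segments at the joint positions.
```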
If someone scans their right hand pressed against the glass of a scanner, the result would look like this:
(without the orange and white annotations). How could we determine someone's 2D:4D ratio from an image of their hand?
You've already tagged this opencv, which is great; I'd highly recommend taking a look at openFrameworks and the openCV addon, as the basic examples there will give you some great starting blocks for this.
The general approach I would take is to first distill the image into light and dark areas, detect the edges of the hand and fingers, and then simplify your data until you have lines representing the edges and tips of the fingers. Finally, take the lower inseam between the 2nd and 3rd fingers, stopping at the tip of the 2nd, and the inseam between the 3rd and 4th, stopping at the tip of the 4th; these two lengths should give you your 2D:4D ratio.
First, you'll need to process your images into black-and-white images OpenCV can easily handle. You may have to play with various thresholds to get both the outline of the hand and the inseams of the fingers detected. (You may even need two passes to detect both the outline and the inseams.)
While there are many approaches to feature detection, OpenCV will generally return arrays of "blobs" detected. With the right thresholds, I believe you would be able to reliably and simply find contiguous horizontal blobs (or nearly contiguous, allowing for some distance between nearby blobs) for the inseams of each finger.
A simple algorithm for detecting the inseams would be to walk through the detected blobs starting from the top left and proceeding left to right through the image, as if reading a page. Assemble an array of detected horizontal lines from the blobs in your image, and play with the image-processing thresholds, the minimum accepted line length, and the allowed distance between detected blobs that you still consider part of the same line, until you're satisfied you're detecting the finger edges well (a minimal sketch of this walk follows these steps).
Once you have detected the horizontal lines, you can process the blobs again, looking for the vertical lines that represent the tips of the fingers (stopping when you hit the previously detected horizontal lines).
Finally, find the lines which represent the correct inseams, measure them until they intersect with the appropriate fingertips, and you should have your ratio!
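Here's the promised sketch of the row-by-row walk, in Python with OpenCV/NumPy; `min_len` and `max_gap` are placeholder thresholds to tune against your scans:

```python
import cv2
import numpy as np

def horizontal_dark_runs(gray, min_len=20, max_gap=3):
    """Collect horizontal runs of dark pixels, row by row, as candidate
    inseam lines between the fingers."""
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    runs = []
    for y in range(binary.shape[0]):
        xs = np.flatnonzero(binary[y] == 0)   # dark columns in this row
        if xs.size == 0:
            continue
        start = prev = int(xs[0])
        for x in xs[1:]:
            if x - prev > max_gap:            # gap too wide: run ends here
                if prev - start >= min_len:
                    runs.append((y, start, prev))
                start = int(x)
            prev = int(x)
        if prev - start >= min_len:
            runs.append((y, start, prev))
    return runs  # (row, x_start, x_end) triples to merge into inseam lines
```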
Interesting question. I'd go about it this way:
First, binarize the image with Otsu's thresholding. Then find the skeleton of the image using a medial-axis transform (MAT). This means doing a distance transform on the image, then using adaptive thresholding to get the local maxima of the distance transform. That gives a rough-and-ready skeleton of your image. Sample code from here.
The obtained hand skeleton may be slightly disconnected, in which case the OpenCV morphology "close" (not "open") operation can connect it into a single skeleton. Checking the convexity defects of the resulting hand should then give an estimate.
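A rough sketch of that pipeline in Python with OpenCV; the kernel and block sizes are guesses, and it assumes the hand is the bright region after Otsu (invert the threshold if not):

```python
import cv2
import numpy as np

img = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)   # hypothetical filename
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Distance transform: each foreground pixel gets its distance to the edge.
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
dist8 = cv2.normalize(dist, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Adaptive threshold keeps pixels brighter than their local mean, which
# approximates the ridge (local maxima) of the distance map, i.e. the MAT.
skeleton = cv2.adaptiveThreshold(dist8, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 11, -2)

# Morphological close to reconnect a slightly fragmented skeleton.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
skeleton = cv2.morphologyEx(skeleton, cv2.MORPH_CLOSE, kernel)
```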