Is there anything in ImageMagick, GIMP, or another Linux-compatible tool that can automatically detect individual objects in an image and either return the location of each object or store each object as a separate image?
I have an image like this one:
For other images where the objects are located on a grid, I have successfully used the crop operator in ImageMagick, e.g. for a 3x3 grid:
convert -crop 3x3@ in-image.jpg out-image-%d.jpg
I cannot use crop when there is no rectangular grid, but I thought the white background should be enough to separate the objects.
I would tackle this with a "Connected Components Analysis", or "Image Segmentation" approach, like this...
First, split the input image into components, specifying a minimum size (in order to remove smaller lumps) and allowing for 8-connectivity (i.e. the 8 neighbouring pixels N, NE, E, SE, S, SW, W and NW are considered neighbours) rather than 4-connectivity, which only considers the N, E, S and W pixels connected.
convert http://i.stack.imgur.com/T2VEJ.jpg -threshold 98% \
   -morphology dilate octagon \
   -define connected-components:area-threshold=800 \
   -define connected-components:verbose=true \
   -connected-components 8 -auto-level PNG8:lumps.png
which gives this output:
Objects (id: bounding-box centroid area mean-color):
0: 450x450+0+0 219.2,222.0 93240 srgb(255,255,255)
14: 127x98+111+158 173.0,209.4 9295 srgb(0,0,0)
29: 105x91+331+303 384.1,346.9 6205 srgb(0,0,0)
8: 99x75+340+85 388.9,124.6 5817 srgb(1,1,1)
15: 110x69+330+168 385.4,204.9 5640 srgb(1,1,1)
3: 114x62+212+12 270.0,42.4 5021 srgb(0,0,0)
4: 103x63+335+12 388.9,44.9 4783 srgb(0,0,0)
11: 99x61+13+134 61.5,159.1 4181 srgb(0,0,0)
37: 128x52+313+388 375.1,418.4 4058 srgb(0,0,0)
24: 95x62+24+256 69.6,285.7 4017 srgb(0,0,0)
2: 91x68+15+12 62.0,44.4 3965 srgb(0,0,0)
38: 91x50+10+391 55.1,417.0 3884 srgb(0,0,0)
12: 83x64+249+134 288.3,168.4 3761 srgb(0,0,0)
19: 119x62+320+240 385.4,268.4 3695 srgb(9,9,9)
25: 93x63+128+268 176.1,302.1 3612 srgb(0,0,0)
39: 96x49+111+391 158.1,416.0 3610 srgb(0,0,0)
31: 104x59+117+333 172.9,360.1 3493 srgb(0,0,0)
33: 88x55+238+335 279.3,364.5 3440 srgb(0,0,0)
26: 121x54+230+271 287.6,294.0 3431 srgb(8,8,8)
1: 98x61+109+11 159.7,40.0 3355 srgb(0,0,0)
40: 88x42+218+399 262.3,419.7 3321 srgb(0,0,0)
6: 87x61+115+70 157.9,100.1 3263 srgb(0,0,0)
30: 97x57+14+327 57.3,357.2 3237 srgb(55,55,55)
17: 84x57+13+207 53.1,232.2 2995 srgb(0,0,0)
5: 107x58+10+68 58.9,97.5 2988 srgb(0,0,0)
18: 77x60+237+212 273.0,243.0 2862 srgb(0,0,0)
7: 87x49+249+78 291.8,99.3 2703 srgb(9,9,9)
10: 82x51+178+109 222.8,133.9 2628 srgb(0,0,0)
Each line corresponds to a separate component, or segment, and shows the width and height of each component's bounding box and its offset from the top-left corner. You can parse that easily enough with awk and draw the indicated red boxes onto the image to give this:
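For example, here is a hypothetical Python alternative to the awk approach: it assumes the verbose listing above was saved to components.txt, parses the "id: WxH+X+Y" fields, and draws the red boxes with OpenCV.

#!/usr/bin/env python3
import re
import cv2

# Outline each connected component from the verbose listing in red
img = cv2.imread('in-image.jpg')
with open('components.txt') as f:          # the "Objects (id: ...)" listing
    for line in f:
        m = re.match(r'\s*\d+: (\d+)x(\d+)\+(\d+)\+(\d+)', line)
        if not m:
            continue
        w, h, x, y = map(int, m.groups())
        if (w, h) == (img.shape[1], img.shape[0]):
            continue                       # skip the whole-image background component
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite('boxes.png', img)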
The output image is called lumps.png and it looks like this:
and you can see that each component (piece of meat) has a different grey level associated with it. You can also analyse lumps.png, and extract a separate mask for each piece of meat, like this:
#!/bin/bash
# Extract every grey level, and montage together all that are not entirely black
rm mask_*png 2> /dev/null
mask=0
for v in {1..255}; do
   ((l=v*255))   # scale the 8-bit grey level v up to ImageMagick's Q16 quantum range
   ((h=l+255))
   mean=$(convert lumps.png -black-threshold "$l" -white-threshold "$h" -fill black -opaque white -threshold 1 -verbose info: | grep -c "mean: 0 ")
   if [ "$mean" -eq 0 ]; then
      convert lumps.png -black-threshold "$l" -white-threshold "$h" -fill black -opaque white -threshold 1 mask_$mask.png
      ((mask++))
   fi
done
That gives us masks like this:
and this
We can see them all together if we do this:
montage -tile 4x mask_* montage_masks.png
If we now apply each of the masks to the input image as its opacity and trim the resulting image, we will be left with the individual lumps of meat:
seg=0
rm segment_*png 2> /dev/null
for f in mask_*png; do
   convert http://i.stack.imgur.com/T2VEJ.jpg $f -compose copy-opacity -composite -trim +repage segment_$seg.png
   ((seg++))
done
And they will look like this:
and this
Or, we can put them all together like this:
montage -background white -tile 4x segment_* montage_results.png
Cool question :-)
It can be done with ImageMagick in multiple steps. The original image is named meat.jpg:
convert meat.jpg -threshold 98% -morphology Dilate Octagon meat_0.png
convert meat_0.png txt:- | grep -m 1 black
This gives you a pixel location inside the first piece of meat:
131,11: ( 0, 0, 0) #000000 black
We'll use this to color the first piece in red, separate the red channel and then create and apply the mask for the first piece:
convert meat_0.png -fill red -bordercolor white \
-draw 'color 131,11 filltoborder' meat_1_red.png
convert meat_1_red.png -channel R -separate meat_1.png
convert meat_1_red.png meat_1.png -compose subtract \
-threshold 50% -composite -morphology Dilate Octagon \
-negate meat_1_mask.png
convert meat_1_mask.png meat.jpg -compose Screen -composite \
-trim meat_1.jpg
The resulting meat_1.jpg is already trimmed. You can then proceed the same way with meat_1.png instead of meat_0.png, generating meat_2.png as the basis for successive iterations on the fly. Maybe this can be further simplified and wrapped in a script.
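For instance, here is a rough sketch of that loop, in Python rather than shell (hypothetical: it assumes ImageMagick's convert is on the PATH and reuses the meat_N naming from above):

#!/usr/bin/env python3
import re
import subprocess

def run(*args):
    subprocess.run(args, check=True)

def first_black_pixel(png):
    # Equivalent of `convert ... txt:- | grep -m 1 black`
    out = subprocess.run(['convert', png, 'txt:-'],
                         capture_output=True, text=True, check=True).stdout
    m = re.search(r'^(\d+),(\d+):.*#000000', out, re.MULTILINE)
    return (int(m[1]), int(m[2])) if m else None

n = 0
while True:
    pix = first_black_pixel(f'meat_{n}.png')
    if pix is None:
        break                                   # no pieces of meat left
    n += 1
    # Flood-fill the next piece in red, then isolate it exactly as above
    run('convert', f'meat_{n - 1}.png', '-fill', 'red', '-bordercolor', 'white',
        '-draw', f'color {pix[0]},{pix[1]} filltoborder', f'meat_{n}_red.png')
    run('convert', f'meat_{n}_red.png', '-channel', 'R', '-separate', f'meat_{n}.png')
    run('convert', f'meat_{n}_red.png', f'meat_{n}.png', '-compose', 'subtract',
        '-threshold', '50%', '-composite', '-morphology', 'Dilate', 'Octagon',
        '-negate', f'meat_{n}_mask.png')
    run('convert', f'meat_{n}_mask.png', 'meat.jpg', '-compose', 'Screen',
        '-composite', '-trim', f'meat_{n}.jpg')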
With GIMP, you can do it the following way:
Use the Magic Wand (Select Contiguous Regions tool) and select your background;
Menu Select > Invert to get multiple selections;
Install the script-fu export-selected-regions below if you don't have it yet;
Menu Select > Export Selected Regions, your new images should be in the same folder as the original one.
If you're using GIMP 2.10 on a Mac, put the script below in ~/Library/Application\ Support/GIMP/2.10/scripts/export-selected-regions.scm; check for the appropriate folder on other systems.
;;; Non-interactively save all selected regions as separate files
(define (script-fu-export-selected-regions image drawable)
  ;; Start
  (gimp-image-undo-group-start image)
  ;; If there are selections
  (when (= 0 (car (gimp-selection-is-empty image)))
    (let ((number 1) (prefix "") (suffix ""))
      ;; Construct filename components
      (let* ((parts (strbreakup (car (gimp-image-get-filename image)) "."))
             (coextension (unbreakupstr (reverse (cdr (reverse parts))) "."))
             (extension (cadr parts)))
        (set! prefix (string-append coextension "_selection-"))
        (set! suffix (string-append "." extension)))
      ;; Convert all selections to a single path
      (plug-in-sel2path RUN-NONINTERACTIVE image drawable)
      ;; For each stroke in the path
      (let ((vectors (vector-ref (cadr (gimp-image-get-vectors image)) 0)))
        (for-each
          (lambda (stroke)
            ;; Convert the stroke back into a selection
            (let ((buffer (car (gimp-vectors-new image "buffer")))
                  (points (gimp-vectors-stroke-get-points vectors stroke)))
              (gimp-image-insert-vectors image buffer 0 -1)
              (apply gimp-vectors-stroke-new-from-points buffer points)
              (gimp-vectors-to-selection buffer 2 TRUE FALSE 0 0)
              (gimp-image-remove-vectors image buffer))
            ;; Replace the selection with its bounding box
            (apply (lambda (x0 y0 x1 y1)
                     (gimp-image-select-rectangle image 2 x0 y0 (- x1 x0) (- y1 y0)))
                   (cdr (gimp-selection-bounds image)))
            ;; Extract and save the contents as a new file
            (gimp-edit-copy drawable)
            (let* ((image (car (gimp-edit-paste-as-new)))
                   (drawable (car (gimp-image-get-active-layer image)))
                   (filename ""))
              (while (or (equal? "" filename) (file-exists? filename))
                (let* ((digits (number->string number))
                       (zeros (substring "0000" (string-length digits))))
                  (set! filename (string-append prefix zeros digits suffix)))
                (set! number (+ number 1)))
              (gimp-file-save RUN-NONINTERACTIVE image drawable filename filename)
              (gimp-image-delete image)))
          (vector->list (cadr (gimp-vectors-get-strokes vectors))))
        (gimp-image-remove-vectors image vectors))))
  ;; End
  (gimp-selection-none image)
  (gimp-image-undo-group-end image))
(script-fu-register "script-fu-export-selected-regions"
                    "Export Selected Regions"
                    "Export each selected region to a separate file."
                    "Andrew Kvalheim <Andrew#Kvalhe.im>"
                    "Andrew Kvalheim <Andrew#Kvalhe.im>"
                    "2012"
                    "RGB* GRAY* INDEXED*"
                    SF-IMAGE "Image" 0
                    SF-DRAWABLE "Drawable" 0)
(script-fu-menu-register "script-fu-export-selected-regions" "<Image>/Select")
Many thanks to Andrew Kvalheim for his script-fu.
Related
I have an image:
In this image, the OpenCV Hough transform can't detect the big -45 degree line using
minLineLength = 150
maxLineGap = 5
line_thr = 150
linesP = cv.HoughLinesP(dst, 1, np.pi / 180, line_thr, None, minLineLength, maxLineGap)
The only lines found are:
I also tried playing with various thresholds, but I can't find the line here.
If I manually crop the image like this:
then I can clearly see the OpenCV Hough transform finding the right line:
I want to find this same line in the uncropped version. Any suggestions on how to find it there?
There can also be cases where there is no line at all, or where the line doesn't span the full width of the image.
Examples
I implemented a slightly simpler algorithm than my other answer but in Python with OpenCV this time.
Basically, rather than taking the mean of vertical columns of pixels, it sums the pixels in the columns and chooses the column that is brightest. If I show the padded, rotated image with another image below representing the sums of the columns, you should see how it works:
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image as greyscale
im = cv2.imread('45.jpg',cv2.IMREAD_GRAYSCALE)
# Pad with border so it isn't cropped when rotated
bw=300
bordered = cv2.copyMakeBorder(im, top=bw, bottom=bw, left=bw, right=bw, borderType= cv2.BORDER_CONSTANT)
# Rotate -45 degrees
w, h = bordered.shape
M = cv2.getRotationMatrix2D((h/2,w/2),-45,1)
paddedrotated = cv2.warpAffine(bordered,M,(h,w))
# DEBUG cv2.imwrite('1.tif',paddedrotated)
# Sum the elements of each column and find column with most white pixels
colsum = np.sum(paddedrotated, axis=0, dtype=float)  # plain float: np.float was removed from NumPy
col = np.argmax(colsum)
# DEBUG cv2.imwrite('2.tif',colsum)
# Fill with black except for the line we have located which we make white
paddedrotated[:,:] = 0
paddedrotated[:,col] = 255
# Rotate back to straight
w, h = paddedrotated.shape
M = cv2.getRotationMatrix2D((h/2,w/2),45,1)
straight = cv2.warpAffine(paddedrotated,M,(h,w))
# Remove padding and save to disk
straight = straight[bw:-bw,bw:-bw]
cv2.imwrite('result.png',straight)
Note that you don't actually have to rotate the image back to straight and crop it back to its original size. You could stop after the line that says:
col = np.argmax(colsum)
and use some elementary trigonometry to work out what that means in your original image.
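If you take that route, here is a hedged sketch of that step, reusing M (the -45 degree rotation matrix from the first warpAffine), bw, and paddedrotated from the code above; the endpoints may need clipping to the image bounds:

# Map the winning column back into the original image's coordinates
Minv = cv2.invertAffineTransform(M)            # inverse of the -45 degree rotation
rows = paddedrotated.shape[0]
top = Minv @ np.array([col, 0, 1.0])           # line endpoint on the top row
bot = Minv @ np.array([col, rows - 1, 1.0])    # line endpoint on the bottom row
(x0, y0), (x1, y1) = top - bw, bot - bw        # remove the padding offset
print((x0, y0), (x1, y1))                      # endpoints in the unpadded input image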
Here is the output:
Keywords: line detection, detect line, rotate, pad, border, projection, project, image, image processing, Python, OpenCV, affine, Hough
I did this on the command-line in Terminal with ImageMagick but you can apply exactly the same technique with OpenCV.
Step 1
Take the image and rotate it 45 degrees introducing black pixels as background where required:
convert 45.jpg -background black -rotate 45 result.png
Step 2
Now, building on the previous command, set every pixel to the median of the box 1px wide and 250px tall centred on it:
convert 45.jpg -background black -rotate 45 -statistic median 1x250 result.png
Step 3
Now, again building on the previous command, rotate it back 45 degrees:
convert 45.jpg -background black -rotate 45 -statistic median 1x250 -rotate -45 result.png
So, in summary, the entire processing is:
convert input.jpg -background black -rotate 45 -statistic median 1x250 -rotate -45 result.png
Obviously then crop it back to the original size and append side-by-side with the original for checking:
convert 45.jpg -background black -rotate 45 -statistic median 5x250 -rotate -45 +repage -gravity center -crop 184x866+0+0 result.png
convert 45.jpg result.png +append result.png
You can also use the mean statistic plus thresholding rather than the median, since taking a mean is quicker than the sorting needed to find a median; however, it tends to lead to smearing:
convert 45.jpg -background black -rotate 45 -statistic mean 1x250 result.png
Your newly-added image gets processed to this result:
The problem is clearly that the line you are searching for is not a line. It actually looks like a train of connected circles and boxes. Therefore, I recommend that you do the following:
Find all the contours in the image using findContours:
img = cv2.imread('image.jpg')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(img_gray, 127, 255, 0)
# Note: the retrieval mode comes before the approximation method;
# OpenCV 3 returns (img2, contours, hierarchy) instead of two values
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
This will return many, many contours, so use a loop to keep only sufficiently long contours. Since the image is 814x1041 pixels, I consider a contour long if it is at least 10% of the image width, which is almost 100 (you will apparently have to tune this value):
long_contours = []
for contour in contours:
    perimeter = cv2.arcLength(contour, True)
    if perimeter > 0.1 * 1018:  # 10% of the image width
        long_contours.append(contour)
Now draw a rotated bounding rectangle around each long contour that might be a line. A long contour is considered a line if its width is much greater than its height, i.e. its aspect ratio is large (such as 8; you will also need to tune this value):
for long_contour in long_contours:
    rect = cv2.minAreaRect(long_contour)
    # minAreaRect returns ((cx, cy), (w, h), angle)
    (w, h) = rect[1]
    aspect_ratio = max(w, h) / min(w, h)
    if aspect_ratio > 8:
        box = cv2.boxPoints(rect)
        box = np.int0(box)
        cv2.drawContours(img, [box], 0, (255, 255, 255), cv2.FILLED)
Finally, you should get something like this. Please note that the code here is for guidance only.
Your original code is clean as a whistle. The only problem is that your image contains too much information, which messes up the accumulator scores. Everything will work out if you increase the line threshold to 255.
minLineLength = 150
maxLineGap = 5
line_thr = 255
linesP = cv2.HoughLinesP(dst, 1, np.pi / 180.0, line_thr, None, minLineLength, maxLineGap)
Here are the results using that value.
Three lines are detected here because the white strokes are several pixels thick.
[ 1 41 286 326]
[ 0 42 208 250]
[ 1 42 286 327]
Five lines are detected around the same area for the same reason as above. Thinning the strokes with a morphological operation or a distance transform should fix this; a sketch follows below.
[110 392 121 598]
[112 393 119 544]
[141 567 147 416]
[ 29 263 29 112]
[ 0 93 179 272]
No line found here.
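As a hedged sketch of that thinning suggestion (the filename, kernel size, and iteration count are assumptions to tune):

import cv2
import numpy as np

# Thin the thick white strokes so each physical line yields a single detection
dst = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
thinned = cv2.erode(dst, np.ones((3, 3), np.uint8), iterations=2)
# Re-run the Hough transform; line_thr may need lowering once the lines are thinner
linesP = cv2.HoughLinesP(thinned, 1, np.pi / 180.0, 255, None, 150, 5)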
I have a black and white image which has to be rendered on screen as a grayscale image with precise colours. Black should be displayed as rgb(40,40,40) and white as rgb(128,128,128).
The problem is that the software rendering this image does not allow colours to be specified directly; the only parameters I can vary are brightness, contrast and gamma (converting the image to the desired colours beforehand is not an option).
Is there a formula to calculate specific values of those parameters that adjust the colours as described?
Without knowing how they compute brightness and contrast, it is hard to tell you how to do your computation.
Perhaps I still misunderstand, but you can find the min and max values in your image using ImageMagick:
convert image -format %[fx:255*minima] info:
convert image -format %[fx:255*maxima] info:
Those will be in the range of 0 to 255.
As Mark showed above, the transformation is linear, so it obeys the equation
Y = a*X + b
where a is a measure of contrast and b is a measure of brightness; X is your input value and Y is your desired output value.
Thus
Ymax = a*Xmax + b
and
Ymin = a*Xmin + b
Subtracting and solving for a, we get
a = (Ymax-Ymin)/(Xmax-Xmin)
and substituting that into the equation for Ymax and solving for b, we get
b = Ymax - a*Xmax = Ymax - ( (Ymax-Ymin)/(Xmax-Xmin) )*Xmax
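As a quick numeric check of these formulas (in Python, assuming a full-range black-and-white input and the target values from the question):

Xmin, Xmax = 0, 255                  # pure black-and-white input
Ymin, Ymax = 40, 128                 # desired output levels
a = (Ymax - Ymin) / (Xmax - Xmin)    # contrast factor: 88/255, about 0.345
b = Ymax - a * Xmax                  # brightness offset: exactly 40.0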
Then you can use ImageMagick's -function polynomial to process your image.
In Unix, I would do it as follows:
Xmin=$(convert image -format %[fx:255*minima] info:)
Xmax=$(convert image -format %[fx:255*maxima] info:)
If your image is pure black and pure white, then you can skip the above and just use
Xmin=0
Xmax=255
And your desired values are
Ymin=40
Ymax=128
These are now shell variables, and I can use ImageMagick fx expressions to do the calculations for a and b:
a=$(convert xc: -format "%[fx:($Ymax-$Ymin)/($Xmax-$Xmin)]" info:)
b=$(convert xc: -format "%[fx:($Ymax - $a*$Xmax)/255]" info:)
Note the division by 255 in b: -function polynomial operates on pixel values normalized to the range 0 to 1, so the offset has to be expressed as a fraction as well. And to convert your image:
convert image -function polynomial "$a,$b" result
In general, there are several ways to alter an image's contrast, gamma and brightness, and it is difficult to know which method your chosen tool uses and therefore to provide the correct answer.
What you are trying to do is move the blue line (no contrast or brightness changes) in the image below to where the red line (decreased contrast) is:
In general, decreasing the contrast will rotate the blue line clockwise whereas increasing it will rotate it anti-clockwise. In general, increasing the brightness will shift the blue line to the right whereas decreasing the brightness will shift it left. Changing the gamma will likely make the line into a curve.
Can you use ImageMagick at the commandline instead?
convert input.png +level 15.69%,50.2% -depth 8 result.png
(15.69% is 40/255 and 50.2% is 128/255, i.e. your desired output black and white points.) If you have v7+, use magick in place of convert.
I made a little gradient for you with:
convert -size 60x255 gradient: -rotate 90 gradient.png
And if you apply the suggested command:
convert gradient.png +level 15.69%,50.2% -depth 8 result.png
You will get this:
And you can check the statistics (min and max) with:
identify -verbose result.png | more
Image: result.png
Format: PNG (Portable Network Graphics)
Mime type: image/png
Class: PseudoClass
Geometry: 255x60+0+0
Units: Undefined
Type: Grayscale
Base type: Palette
Endianess: Undefined
Colorspace: Gray
Depth: 8-bit
Channel depth:
Gray: 8-bit
Channel statistics:
Pixels: 15300
Gray:
min: 40 (0.156863) <--- MIN looks good
max: 128 (0.501961) <--- MAX looks good
mean: 84.0078 (0.329443)
standard deviation: 25.5119 (0.100047)
kurtosis: -1.19879
skewness: 0.000197702
I'm using the Racket GUI to write text in the window of my program.
Until now I only needed to draw text horizontally, but now I also want to write text vertically. I saw in the documentation that we can give an "angle" argument when we send the message "draw-text" to the drawing context.
Here's my little function to draw text:
(define (draw-text text fontsize x y color [rotate-angle 0.0])
(when (string? color)
(set! color (send the-color-database find-color color)))
(send bitmap-dc set-font (make-object font% fontsize 'default))
(send bitmap-dc set-text-foreground color)
(send bitmap-dc draw-text text x y [angle rotate-angle])
(update-callback))
But when I call the "draw-text" procedure with, for example, an angle of 90° (so that the text would be vertical), it doesn't change anything.
It's just displayed as before, horizontally.
Does someone know what's wrong?
It's not clear from the example, but did you remember to convert the 90 degrees into radians? The convention is that 360 degrees is the same as 2pi radians. Or dividing by 360, we get that 1 degree is 2pi/360 radians.
Multiplying by 90, the result is that 90 degrees is 90*2*pi/360 = 180pi/360 = pi/2 ~ 1.5707963267948966. That is, to rotate the text 90 degrees, use 1.5707963267948966 as the rotate-angle.
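If you would rather compute the value than hard-code it, the conversion is language-agnostic; a quick check in Python (in Racket you could equivalently pass (/ pi 2)):

import math
print(math.radians(90))   # 1.5707963267948966, i.e. pi/2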
Also (send bitmap-dc draw-text text x y [angle rotate-angle])
should be
(send bitmap-dc draw-text text x y combine? offset angle)
For example:
(send bitmap-dc draw-text "A text" 100 100 #t 0 1.570])
In my application, users are able to upload photos. Sometimes I want them to be able to hide some information in the picture, for instance the registration plate of a vehicle or the personal address on an invoice.
To meet that need, I plan to pixelate a portion of the image. How can I pixelate the image in such a way, given the coordinates of the area to hide and the size of the area?
I found out how to pixelate (by scaling the image down and back up), but how can I target only an area of the image?
The area is specified by two pairs of coordinates (x1, y1, x2, y2), or a pair of coordinates and dimensions (x, y, width, height).
I am at work at the moment so I cannot test any code. I would see if you can work with -region, or else use a mask.
Copy the image and pixelate the whole copy; create a mask of the required area, cut a hole in the original image with the mask, and overlay the result on the pixelated image.
You could modify this code (quite old, and it could probably be improved on):
// Get the image size to create the mask
// This can be done within ImageMagick, but as we are using PHP this is simple.
$size = getimagesize("$input14");
// Create a mask with a round hole
$cmd = " -size {$size[0]}x{$size[1]} xc:none -fill black ".
" -draw \"circle 120,220 120,140\" ";
exec("convert $cmd mask.png");
// Cut out the hole in the top image
$cmd = " -compose Dst_Out mask.png -gravity south $input14 -matte ";
exec("composite $cmd result_dst_out1.png");
// Composite the top image over the bottom image
$cmd = " $input20 result_dst_out1.png -geometry +60-20 -composite ";
exec("convert $cmd output_temp.jpg");
// Crop excess from the image where the bottom image is larger than the top
$cmd = " output_temp.jpg -crop 400x280+60+0 ";
exec("convert $cmd composite_sunflower_hard.jpg ");
// Delete tempory images
unlink("mask.png");
unlink("result_dst_out1.png");
unlink("output_temp.jpg");
Thanks for your answer, Bonzo.
I found a way to achieve what I want with the ImageMagick convert command. It's a three-step process:
I create a pixelated version of the whole source image.
I then build a mask using the original image (to keep the same size) filled with black (with gamma 0), then I draw white rectangles where I want unreadable areas.
Then I merge the three images (original, pixelated and mask) in a composite operation.
Here is an example with 2 areas (a and b) pixelated.
convert original.png -scale 10% -scale 1000% pixelated.png
convert original.png -gamma 0 -fill white -draw "rectangle X1a, Y1a, X2a, Y2a" -draw "rectangle X1b, Y1b, X2b, Y2b" mask.png
convert original.png pixelated.png mask.png -composite result.png
It works like a charm. Now I will do it with RMagick.
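For comparison, here is a hypothetical sketch of the same region pixelation done directly in Python with OpenCV rather than RMagick (the coordinates are made up):

import cv2

def pixelate_region(img, x, y, w, h, factor=10):
    # Scale the region down and back up, like -scale 10% -scale 1000%
    roi = img[y:y + h, x:x + w]
    small = cv2.resize(roi, (max(1, w // factor), max(1, h // factor)),
                       interpolation=cv2.INTER_AREA)
    img[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                       interpolation=cv2.INTER_NEAREST)
    return img

img = cv2.imread('original.png')
img = pixelate_region(img, 100, 50, 200, 40)   # one area; repeat for area b
cv2.imwrite('result.png', img)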
I am attempting to fill a circle with a series of other images and have those images masked off by the circle. I can see why this isn't working, but I can't come up with a solution as to how to fix it.
My drawing code (using processing) is as follows:
PGraphicsOpenGL pgl = (PGraphicsOpenGL) g; // g may change
// This fixes the overlap issue
gl.glDisable(GL.GL_DEPTH_TEST);
// Turn on the blend mode
gl.glEnable(GL.GL_BLEND);
// Define the blend mode
gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);
// draw the background
fill(200,200,200);
rect(0, 0, width, height);
// cut out the circle
gl.glBlendFunc(GL.GL_ZERO, GL.GL_ONE_MINUS_SRC_ALPHA);
tint(0,0,0,255);
image(circle, 0, 0);
// draw the circle
gl.glBlendFunc(GL.GL_ONE_MINUS_DST_ALPHA, GL.GL_ONE);
tint(140,0,0,255);
image(circle, 0, 100);
gl.glBlendFunc(GL.GL_ONE_MINUS_DST_ALPHA, GL.GL_ONE);
tint(140,0,140,255);
image(circle, 0, 0);
I have been following the directions at http://bigbucketsoftware.com/2010/10/04/how-to-blend-an-8-bit-slide-to-unlock-widget/ which seem to describe the effect that I want. I have also tried this on iphone with similar results.
Here is what I was expecting to happen, and what happened:
The problem must be with how you treat the transparent region. You could enable GL_ALPHA_TEST.
Or, if your pictures stay that simple, you can just draw them with triangles.
I can't really help you with the blending code but I have another suggestion that might simplify your drawing logic.
I have used the stencil buffer for something like that. I wanted to draw a disk textured with a linear grating. I didn't want to bother with texture coordinates because an important requirement was to be able to step exactly through the phases of the grating.
I drew the texture into a big rectangle and afterwards drew a white disk into the stencil buffer.
http://www.swiftless.com/tutorials/opengl/basic_reflection.html
(let ((cnt 0d0))
  (defmethod display ((w window))
    ;; the complex number z represents amplitude and direction
    ;; of the grating constant
    ;; r and psi addresses different points in the back focal plane
    ;; r=0 will result in z=w0. the system is aligned to illuminate
    ;; the center of the back focal plane for z=w0.
    (let* ((w0 (* 540d0 (exp (complex 0d0 (/ pi 4d0)))))
           (r 260d0)
           (psi 270d0)
           (w (* r (exp (complex 0d0 (* psi (/ pi 180d0))))))
           (z (+ w w0)))
      (clear-stencil 0)
      (clear :color-buffer-bit :stencil-buffer-bit)
      (load-identity)
      ;; http://www.swiftless.com/tutorials/
      ;; opengl/basic_reflection.html
      ;; use stencil buffer to cut a disk out of the grating
      (color-mask :false :false :false :false)
      (depth-mask :false)
      (enable :stencil-test)
      (stencil-func :always 1 #xffffff)
      (stencil-op :replace :replace :replace)
      (draw-disk 100d0 (* .5d0 1920) (* .5d0 1080))
      ;; center on camera 549,365
      ;; 400 pixels on lcos = 276 pixels on camera (with binning 2)
      (color-mask :true :true :true :true)
      (depth-mask :false)
      (stencil-func :equal 1 #xffffff)
      (stencil-op :keep :keep :keep)
      ;; draw the grating
      (disable :depth-test)
      (with-pushed-matrix
        (translate (* .5 1920) (* .5 1080) 0)
        (rotate (* (phase z) 180d0 (/ pi)) 0 0 1)
        (translate (* -.5 1920) (* -.5 1080) 0)
        (draw *bild*))
      (disable :stencil-test)
      (enable :depth-test)
      (fill-grating *grating* (abs z))
      (format t "~a~%" cnt)
      (if (< cnt 360d0)
          (incf cnt 30d0)
          (setf cnt 0d0))
      (update *bild*)
      (swap-buffers)
      (sleep (/ 1d0)) ;; 1 frame per second
      (post-redisplay))))