Photoshop blending mode to OpenGL ES without shaders - opengl-es

I need to imitate Photoshop blending modes ("multiply", "screen" etc.) in my OpenGL ES 1.1 code (without shaders).
There are some docs on how to do this with HLSL and GLSL shaders:
http://www.nathanm.com/photoshop-blending-math/ (archive)
http://mouaif.wordpress.com/2009/01/05/photoshop-math-with-glsl-shaders/
I need at least a working Screen mode.
Are there any fixed-pipeline implementations I could look at?

Most Photoshop blend modes are based upon the Porter-Duff blend modes.
These require that all of your images (textures, renderbuffers) are in a premultiplied-alpha color space. This is usually done by multiplying each pixel's color values by its alpha value before storing them in a texture; a fully transparent pixel, for example, ends up black in premultiplied space. If you're unfamiliar with this color space, spend an hour or two reading about it on the web. It's a neat concept and a prerequisite for Photoshop-like compositing.
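If it helps, here is a minimal sketch of that premultiplication step in plain C, assuming you have straight-alpha RGBA8 pixel data in memory before you upload it (the function name and buffer layout are just for illustration):

#include <stdint.h>
#include <stddef.h>

/* Convert straight-alpha RGBA8 pixels to premultiplied alpha, in place.
   'count' is the number of pixels; 'pixels' points to count * 4 bytes. */
static void premultiply_rgba8(uint8_t *pixels, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        uint8_t *p = pixels + i * 4;
        unsigned a = p[3];
        p[0] = (uint8_t)((p[0] * a + 127) / 255); /* R */
        p[1] = (uint8_t)((p[1] * a + 127) / 255); /* G */
        p[2] = (uint8_t)((p[2] * a + 127) / 255); /* B */
        /* the alpha channel itself is left untouched */
    }
}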
Anyway - once you have your images in that format you can enable SCREEN using:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR)
The full MULTIPLY mode is not possible with the OpenGL ES fixed-function pipeline. If you only work with fully opaque pixels, you can fake it using:
glBlendFunc(GL_ZERO, GL_SRC_COLOR)
The results for transparent pixels, either in your texture or in your framebuffer, will be wrong though.
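For completeness, a minimal sketch of the corresponding state setup, assuming a GL ES 1.1 context is already current and your textures hold premultiplied data as described above (the helper names are just for illustration):

#include <GLES/gl.h>

/* SCREEN: dst' = src + dst * (1 - src); correct with premultiplied sources. */
static void use_screen_blend(void)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR);
}

/* Faked MULTIPLY: dst' = dst * src; only correct when both source and
   destination pixels are fully opaque. */
static void use_multiply_blend_opaque(void)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ZERO, GL_SRC_COLOR);
}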

You should try this:
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA)
This looks like multiplying to me on the iPhone / OpenGL ES.
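If I'm reading the blend factors right, that setup computes dst' = dst * src + dst * (1 - src_alpha), which collapses to a plain dst * src multiply when the source alpha is 1. So it behaves like MULTIPLY for opaque pixels, and leaves the destination increasingly untouched as the source becomes more transparent.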

Your best place to start is to pick up a copy of the Red Book and read through the chapters on materials and blending modes. It has a very comprehensive and clear explanation of how the 'classic' OpenGL blending functions work.

I have found that using this:
glDepthFunc(GL_LEQUAL);
was all I needed to get a screen effect; at least it worked well on my project.
I am not sure why this works, but if someone knows please share.

Related

Creating frosted glass three webgl

I'm having trouble finding out how to create a material that looks like frosted glass. I haven't found anything on the web that matches what I want to do.
I've tried a lot of settings for the material.
In this link you can see what I'm trying to get.
Does anybody have an idea how to solve this?
Regards
Rikard
One approach that has worked well for me in the past is to blit the portion of the framebuffer you want frosted and run it through the blur algorithm or normal pattern of your choice. A stencil mask, as part of the glass shader, then determines which portions are affected and which are not.
This article has a nice writeup on glass refraction which, combined with a blur, gives a good effect.
https://beclamide.medium.com/advanced-realtime-glass-refraction-simulation-with-webgl-71bdce7ab825
I know it's not WebGL per se, but I've used the Unity frosted-glass shader below before, to great effect. You may be able to extract the pertinent pieces from it and use that knowledge to assemble a WebGL version. https://github.com/andydbc/unity-frosted-glass
I'm about to undertake this myself, and will update this answer with actual code if I succeed.

Applying post-effect / pixel shader to Windows

I am color blind. This is usually not a problem for me, but whenever a game or application uses red and green as contrast colors it can become an annoyance. Looking for accessibility tools online, all I've managed to find are tools that adjust colors on snapshots, or on a camera input. For this reason, I've been toying with the idea of writing my own color adjustment tool.
Suppose I would want to write my own shader or post effect that shifts or swaps color values, then apply it to everything I see in Windows (10) in real-time - is there a way to do this? For example, is it possible to insert a pixel shader into the Windows rendering pipeline? Or would it be possible to write an application that takes the entire screen as input and outputs something else, faster than say 5ms per frame using a modern GPU? How would you approach this?

With ImageMagick, why do I get a halo when flattening a PSD with alpha?

I have a large number of PSD files which contain semi-transparent layers. These layers are not flattened correctly, regardless of what flags I pass to convert or mogrify.
The simplest form looks as follows:
convert -background transparent source.psd -flatten output.png
Here is what the source image looks like in Photoshop. Note that this is a drop shadow layer and not a layer effect:
Here is how it comes out:
This may not be obvious against the Photoshop background, so here it is laid over a grey background:
Source:
Output:
EDIT:
I dug a bit into what is happening in the numbers. For the initial source image, the shadow is completely black and the alpha fades in. For the output image, the alpha is not as high, but it compensates by inaccurately lightening the image in a somewhat bumpy fashion. It's almost as if it's premultiplied, but taking the background as white?
Here is a straight RGB render without the alpha multiplied in:
Source:
Output:
In other words, the RGB values are not being preserved at all. The alpha is being dimmed, but not distorted the way these values are. My guess would be some sort of rounding error from trying to extrapolate the color from the alpha, as though it is trying to "unpre-multiply" the values.
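As a rough sanity check on that guess (the numbers here are only an illustration, not taken from the actual files): a pixel stored as pure black with 25% alpha, if matted against a white background, comes back as roughly 75% grey (0 * 0.25 + 1.0 * 0.75 = 0.75) rather than black, which is the kind of lightening visible above.
Any help is appreciated.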
The short answer is that it is fixed in V7 of the software (I think). I run a Mac, and the installer for V7 doesn't work well at all and appears unstable. After running it on an Ubuntu VM, it works fine. I have also confirmed with another user that V6 has this problem on Windows and V7 does not.

what are the image formats that can be used to create a texture in opengles?

I recently started learning OpenGL ES and have gone through some tutorials. When I came across the texture mapping concept, I had a doubt: what are all the image formats we can use to create a texture?
Most image formats need conversion before they can be used as textures. There are libraries (e.g., DevIL) to handle that though. Without something to handle the conversion, the answer is basically "none". With DevIL, the answer becomes "nearly anything."
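For example, here is a minimal sketch using DevIL to load an arbitrary image file and hand the converted RGBA data to OpenGL ES. Error handling is stripped down, the file path is a placeholder, and it assumes a non-Unicode DevIL build where ilLoadImage takes a narrow string:

#include <IL/il.h>
#include <GLES/gl.h>

/* Load any file format DevIL understands, convert it to RGBA8 and upload
   it into the currently bound 2D texture. Call ilInit() once at startup
   before using this. Returns 0 on failure. */
static int load_texture_with_devil(const char *path)
{
    ILuint image;
    ilGenImages(1, &image);
    ilBindImage(image);
    if (!ilLoadImage(path) || !ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE)) {
        ilDeleteImages(1, &image);
        return 0;
    }
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 ilGetInteger(IL_IMAGE_WIDTH),
                 ilGetInteger(IL_IMAGE_HEIGHT),
                 0, GL_RGBA, GL_UNSIGNED_BYTE, ilGetData());
    ilDeleteImages(1, &image); /* GL keeps its own copy of the data */
    return 1;
}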

Is StretchBlt HALFTONE == BILINEAR for all scaling?

Can anyone clarify if the GDI StretchBlt function for the workstation Win32 API performs bilinear interpolation for scaling to both larger and smaller images for 24/32-bit color images? And if not, is there a GDI (not GDI+) function that does this?
The SetStretchBltMode fn has a setting HALFTONE which is documented as follows:
HALFTONE
Maps pixels from the source rectangle into blocks of pixels in the destination rectangle. The average color over the destination block of pixels approximates the color of the source pixels.
I've seen references (see follow-up to first answer) that this performs bilinear interpolation when scaling down an image, but no clear answer of what happens when scaling up.
I have noticed that the Windows Mobile CE SDK does support a BILINEAR flag, which is documented as exactly the opposite of the HALFTONE comments (it only works for scaling up).
Note that for the scope of this question, I'm not interested in pursuing GDI+ (which has numerous interpolation options), OpenGL, DirectX, etc. as alternatives, so please don't bother with follow-ups regarding these other APIs or alternate image libraries.
What I'm really hoping to find is some definitive MS/MSDN or other high-quality documentation that clearly describes this behavior of Win32 (desktop) GDI.
Meanwhile, I'll try some experiments comparing GDI vs. Direct2D (which does have an explicit flag to control this) and post my findings.
Thanks!
I've been looking into this same problem for the past couple of weeks.
As far as I can tell, there does not exist any definitive documentation on this behaviour from Microsoft.
However, I've run some tests myself, to try and establish the degree to which StretchBlt can be trusted to perform consistently with respect to up- and down-scaling images in halftone mode.
My findings are:
1) StretchBlt does produce adequate quality up- and down-scaled images. It might be a touch below Photoshop quality, but probably OK for most practical purposes.
2) It seems to depend upon hardware acceleration, whenever it's available. I haven't been able to confirm this, but I have a slight fear that this may lead to different outputs on different types of hardware. However, on the 5 or 6 different systems I've tried it on, old and new, the performance has been consistent and fast.
3) If you use the call on a 16-bit color device, or lower, StretchBlt will automatically dither your image. If you run it on a 24-bit color device, it will not dither.
4) If you use it to scale small images (smaller than 150x150px), it will randomly fall back to nearest neighbour interpolation. This can be remedied in your own software, by padding the bitmap before scaling, doing StretchBlt on it, and then removing the padding afterwards. Kind of a hack, but it works.
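For reference, here is a minimal sketch of the HALFTONE setup being discussed (Win32 GDI; MSDN says to call SetBrushOrgEx after setting the HALFTONE mode). The DC handles, sizes, and the helper name are placeholders:

#include <windows.h>

/* Scale the bitmap selected into hdcSrc into the destination DC
   using HALFTONE averaging, then restore the previous stretch mode. */
static BOOL StretchHalftone(HDC hdcDst, int dstW, int dstH,
                            HDC hdcSrc, int srcW, int srcH)
{
    BOOL ok;
    int oldMode = SetStretchBltMode(hdcDst, HALFTONE);
    SetBrushOrgEx(hdcDst, 0, 0, NULL);  /* required after switching to HALFTONE */
    ok = StretchBlt(hdcDst, 0, 0, dstW, dstH,
                    hdcSrc, 0, 0, srcW, srcH, SRCCOPY);
    SetStretchBltMode(hdcDst, oldMode); /* restore the previous mode */
    return ok;
}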
HALFTONE mode performs a very blocky halftone dithering on the image, based on varying the conversion thresholds over a defined square. I have never seen a situation where it would be considered the best choice.
COLORONCOLOR is the best mode for color images, but as you've seen it doesn't give great results.
GDI does not support a bilinear mode (except in Windows Mobile CE as you discovered). The naive implementation of bilinear does not do very well when shrinking an image, as it simply tries to interpolate between two adjacent input pixels without trying to draw from a larger area.
