Even if we see the exact same image on a device (e.g. an iPad), we perceive it differently when the backlight is different. For example, the following two images are the same image, but the second one has no backlight (disregard the reflections), and we perceive it differently. My question is: how can I simulate the effect of no backlight without actually dimming the screen, just by manipulating the original image? Maybe by applying some kind of semi-transparent black mask?
Full backlight
No backlight
Yes, you can simulate it. Physically it's a very simple effect, and only your eyes make it look like a more complicated illusion.
It's just a combination of two layers:
the photo (backlight)
the reflection (no backlight image)
The reflection simply exists all the time. The backlight image is turned on or off. In terms of implementation these are additive layers (sum of pixel values).
The eyes perceive backlight on/off as a complete change of the image only because they adjust to the overall brightness level of the screen.
If you're implementing that in code:
make sure you use linear light colorspace for processing (remove gamma correction, process pixels, apply gamma correction).
when displaying the image on screen, normalize brightness (since to display the effect on screen you have to have it brighter than the actual real-world effect, and you have lower dynamic range to work with).
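To make the two-layer idea concrete, here is a minimal sketch in Python (the language is just for illustration; the original context is any image pipeline). It assumes a simple 2.2-gamma approximation of sRGB and works on single channel values in 0..1; a real implementation would use the exact sRGB transfer function and operate on whole images.

```python
def srgb_to_linear(v):
    # Approximate sRGB decoding with a plain 2.2 gamma (v in 0..1).
    return v ** 2.2

def linear_to_srgb(v):
    # Approximate sRGB encoding (inverse of the above).
    return v ** (1 / 2.2)

def simulate_backlight(photo_px, reflection_px, backlight=1.0):
    """Combine one photo pixel and one reflection pixel additively
    in linear light, scaling the photo by the backlight level
    (1.0 = full backlight, 0.0 = backlight off)."""
    lin = srgb_to_linear(photo_px) * backlight + srgb_to_linear(reflection_px)
    return linear_to_srgb(min(lin, 1.0))  # clip before re-encoding
```

With backlight 0.0 only the reflection layer remains, which is exactly the "screen off" look; normalizing the result for display is left out here.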
Related
I am trying to create an app in which I have a single clothing picture (a women's ethnic top, with a shadow effect to make it look real). I want the user to click a color tab on the right side so that the color of the top changes, but the shadow effect remains the same.
I am using libgdx for this. I have created a texture from the input top image and used a linear filter to smooth it. The original top looks like:
The new cloth that I want to place on the top is:
Initially I converted every pixel of the shirt from RGB space to HSV space. I tried the following methods to solve the above problem:
I used hue and saturation from the new cloth, and brightness from the shirt (I have to keep the shadows, which have low brightness). The problem here is that the fabric color becomes light due to the low brightness coming from the top. It produces the image below:
I used hue and saturation from the new cloth, and set a cut-off of 70% for the brightness from the top. The problem here is that the shadows are not clear where the fabric is dark:
I used a z-score to calculate brightness at each pixel. The z-score for brightness at every pixel in the blouse is used to adjust the brightness in the cloth: the z-score for every pixel is kept the same, and the brightness for the fabric is calculated from it. This image also does not look acceptable:
I need help on what approach I should use to achieve the above scenario. Am I on the right track, or am I doing something completely wrong? Maybe I should remove the outer color to see if the top looks realistic (since the background and top colors become the same, is this the reason for the unrealistic result)?
First of all, I'm not really sure that libGDX is the proper and best tool for what you want to implement. To me it looks more like a job for an image-processing library, such as OpenCV for mobile, for example.
The truth is that I haven't used it yet (although I have some experience with OpenCV itself), and a quick search didn't return any "official" tutorial on how to integrate it with libGDX (although I'm pretty sure it is possible).
Anyway, if you want to use libGDX to operate on colour like this, SpriteBatch has the method
void setColor(Color tint)
that you can use to tint whatever the SpriteBatch renders.
With the default generated project, the effect is as you can see:
without tint:
batch.begin();
batch.draw(img, 0, 0);
batch.end();
tint applied:
batch.begin();
batch.setColor(Color.GREEN);
batch.draw(img, 0, 0);
batch.end();
Please also notice that this tinting is naive: it will colorize the background too, so you need to cut the image out from its background and save it in PNG format.
Take a look at this example:
How can I use an image such that whenever I want to detect a collision in XNA, it happens only for the area of the shape, not the area around it?
For example, when I use the picture below, I want collision detection to happen only when the arrow shape itself is touched.
Currently the collision detection happens in the whole area shown in this picture:
How can I restrict it to the area of the shape only?
What you can also do is create two rectangles. That makes the overlapping area (the area where the rectangle is but the image isn't) a bit smaller. But if you need this to be pixel-exact, you have to use resource-expensive per-pixel collision.
You shouldn't try restricting the image shape, because regardless of your efforts you will still have a rectangle. What you need to do is detect pixel collisions. It is a fairly extensive topic; you can read more about a Windows Phone-specific XNA implementation here.
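As a rough illustration of the per-pixel idea (not XNA-specific; plain Python, with 2D boolean masks standing in for the textures' alpha data):

```python
def pixel_collision(mask_a, pos_a, mask_b, pos_b):
    """Return True if any opaque pixel of mask_a overlaps an opaque
    pixel of mask_b. Masks are 2D lists of booleans (True = opaque);
    positions are (x, y) of each mask's top-left corner."""
    ax, ay = pos_a
    bx, by = pos_b
    # Intersection rectangle of the two bounding boxes;
    # if it is empty, the loops below simply never run.
    left = max(ax, bx)
    right = min(ax + len(mask_a[0]), bx + len(mask_b[0]))
    top = max(ay, by)
    bottom = min(ay + len(mask_a), by + len(mask_b))
    for y in range(top, bottom):
        for x in range(left, right):
            if mask_a[y - ay][x - ax] and mask_b[y - by][x - bx]:
                return True
    return False
```

In XNA you would build the masks once from `Texture2D.GetData` alpha values; only the overlap of the two bounding rectangles is ever scanned, which keeps the per-frame cost down.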
I am searching for an article or tutorial that explains how to draw primitive shapes (mainly simple lines) with a (neon) glow effect on them in the graphical output of a computer program. I do not want to do sophisticated stuff like, for example, in modern first-person shooters. I am looking for a simple solution, like the lines in this picture: http://tjl.co/blog/wp-content/uploads/2009/05/NeonStripes.jpg -- but of course drawn by a computer program in my case.
The whole thing should run on a modern smart phone, so the hardware is a bit limited.
I do know a bit about OpenGL, but not too much, so unfortunately I am a bit lost here. I did some research on Google ("glow effect algorithm" and similar), but found either highly complex stuff for 3D games, or tutorials for Photoshop and the like.
So what I would really need is an in-depth article on that subject, but not at a very advanced level. I hope that's even possible... I have just started with OpenGL and did only minor graphics programming in the past, but I have been a programmer for many years, so I would understand technical papers in general.
Does anyone of you know of such an article/paper/tutorial/anything?
Thanks in advance for all good advices!
Cheers!
Matthias
It's just a bunch of lines with different brightness/transparency. Basically, if you want a glow effect of size 20 pixels for a 1 px line, you draw 41 lines, each 1 px wide. The middle line gets your base colour; the other lines get colours that go gradually from the base colour to 100% transparency (as in your example) or to the darkest colour variant (if you have a black background and no transparency).
That is it. :)
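A sketch of the per-line colours this produces, in Python for illustration, assuming a dark background (fade to black rather than to transparency); the 20 px half-width and the base colour are just example values:

```python
def glow_line_colors(base_rgb, half_width=20):
    """Colours for 2*half_width + 1 one-pixel-wide lines making up
    a glow. Offset 0 is the core line at full base colour; brightness
    falls off linearly to black at the outer edges."""
    lines = []
    for offset in range(-half_width, half_width + 1):
        t = 1.0 - abs(offset) / half_width  # 1 at the centre, 0 at the edge
        lines.append((offset, tuple(round(c * t) for c in base_rgb)))
    return lines
```

You then draw each 1 px line shifted by its offset perpendicular to the line direction, darkest first, core line last.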
This isn't something I've ever done, but looking at your example, the basic approach I'd use to try and recreate it would be...
Start with an algorithm for drawing a filled shape large enough to include the original shape and the glow. For example, a rectangle becomes a slightly larger rectangle with rounded corners, and an infinitesimally wide line becomes a thickened line with semi-circular caps. Subtract out the original shape (and fill its pixels normally).
For each pixel in the glow, the colour depends on the shortest distance to any part of the original shape. This normally reduces to the distance to the nearest point on a line (e.g. one edge of a rectangle).
The distance is translated to a colour value, probably using Hue-Saturation-Value or a similar colour scheme, as well as a reduced alpha (increased transparency). For neon glows, you probably want constant hue, decreasing brightness, maybe increasing saturation, and decreasing alpha.
Translate the HSV/whatever colour value to RGB for output. See this question.
EDIT - I should probably have said HSL rather than HSV: in HSL, if L is at its maximum value, the resulting colour is always white; in HSV, that's only true if saturation is also at zero. See http://en.wikipedia.org/wiki/HSL_and_HSV
The real trick is that even on a phone these days, I'd guess you probably should use hardware (shaders) for this - sorry, I don't know how that's done.
The "painters algorithm" overlaying of gradually smaller shapes that others have described here is also a possibility, but (1) possibly slower, depending on implementation issues, and (2) you may need to draw to an off-screen buffer, with some special handling for the alpha channel, then blit back to the screen to handle the transparency correctly - if you need transparency, that is.
EDIT - Silly me. An alternative approach is to apply a blur to your original shape (in greyscale), but instead of writing out the blurred pixels directly, apply the colour-transformation to each blurred pixel value.
A blur is basically a weighted moving average. Technically it's a finite impulse response filter, implemented using a convolution; the maths for that is a tad awkward, but if you just want "a blur" of about the right size, draw a greyscale circle of pixels as your "weights" image.
The blur in this case basically replaces the distance-from-shape calculation.
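A toy illustration of the blur-then-colourise idea in Python (a naive box blur on a tiny greyscale mask; a real glow would use a larger, circular kernel as described above):

```python
def box_blur(gray, radius):
    """Naive box blur of a 2D greyscale image (lists of floats 0..1).
    Out-of-bounds samples are skipped rather than padded."""
    h, w = len(gray), len(gray[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        total += gray[yy][xx]
                        count += 1
            out[y][x] = total / count
    return out

def colourise(value, hue_rgb=(0, 255, 255)):
    # Map a blurred intensity to a glow colour: brightness and alpha
    # both follow the blurred value while the hue stays constant.
    r, g, b = hue_rgb
    return (round(r * value), round(g * value), round(b * value), round(255 * value))
```

Drawing the original shape on top of the colourised blur gives the neon look; the cyan hue here is just an example.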
_____________________
| |
----|---------------------|-----> line
|_____________________|
gradient block
Break up your line into small non-overlapping blocks. Use whatever graphics primitive you have to draw a tilted rectangular gradient: the center is at 100% and the outer edge is at 0%.
Don't draw it on the image yet; you want to blend it with the image. Using regular transparency will just make it look like a random pipe or pole or something (unless you draw a white line, and your background is dark).
Here are two choices of blending mode:
color dodge: [blended pixel value] = [bottom pixel value] / (1 - [overlay's pixel value])
linear dodge: [blended pixel value] = min([overlay's pixel value] + [bottom pixel value], 1)
Then draw the line above the glow.
If you want to draw a curved "neon" line, simply draw it as a sequence of superimposed "neon dots" where each "neon dot" is a small circular image with transparency going from 0% at the origin to 100% at the edge of the circle.
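For reference, the two dodge blend modes above, sketched as per-channel functions on values in 0..1 (these are the standard definitions, not tied to any particular library):

```python
def colour_dodge(bottom, top):
    """Colour dodge: brightens the bottom layer by the overlay.
    Division by (1 - top) blows up as top approaches 1, so clamp."""
    if top >= 1.0:
        return 1.0
    return min(bottom / (1.0 - top), 1.0)

def linear_dodge(bottom, top):
    """Linear dodge (additive blending): sum, clamped to 1."""
    return min(bottom + top, 1.0)
```

Linear dodge is what the other answer's "sum of pixel values" amounts to; colour dodge gives a harsher, more "blown out" glow core.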
I have an image of an object taken in a studio. The image is well lit from multiple sources, and the object stands on a matte white background. The background is also lit.
Most of the shadows that fall on the background are eliminated by the lights, but there are still very faint shadows that I would like to remove.
Until now, the only solutions I found involved manual intervention. I would like to know if there are known methods for this, or if anybody has an idea how to approach such a problem.
The object can also contain white elements and at this point I can't change the background color (to green or blue).
Thanks.
If you have strong contrast between foreground and background, you could use a simple flood-fill algorithm that stops on hitting a large contrast difference to classify pixels as background or foreground. Then just adjust the levels of the background to saturate the shadows to white while retaining somewhat reasonable edge quality. It helps if your input data is of significantly higher resolution than the output.

If you have soft edges or just need really good edge quality, you'll need an algorithm that, for each edge pixel, estimates background color, foreground color and transparency. A good approach is the Soft Scissors paper from SIGGRAPH 2007.
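A minimal sketch of the flood-fill classification step in Python (greyscale values in 0..1; the contrast tolerance and the top-left corner seed are assumptions you would tune for your images):

```python
from collections import deque

def background_mask(gray, seed=(0, 0), tolerance=0.1):
    """Flood-fill from a seed pixel assumed to lie on the background;
    pixels reachable through small contrast steps are classed as
    background. gray is a 2D list of floats in 0..1."""
    h, w = len(gray), len(gray[0])
    mask = [[False] * w for _ in range(h)]
    queue = deque([seed])
    mask[seed[1]][seed[0]] = True
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and not mask[ny][nx]:
                # A large contrast step means we hit the object edge.
                if abs(gray[ny][nx] - gray[y][x]) < tolerance:
                    mask[ny][nx] = True
                    queue.append((nx, ny))
    return mask
```

Faint shadows pass the tolerance test and end up in the background mask, so pushing the masked region's levels to white removes them.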
Using images of articles of clothing taken against a consistent background, I would like to make all pixels in the image transparent except for the clothing. What is the best way to go about this? I have researched the algorithms that are common for this and the open source library opencv. Aside from rolling my own or using opencv is there an easy way to do this? I am open to any language or platform.
Thanks
If your background is consistent within an image but inconsistent across images, it could get tricky, but here is what I would do:
Separate the image into some intensity/colour form such as YUV or Lab.
Make a histogram over the colour part. Find the most frequently occurring colour; this is (most likely) your background. (Update: maybe a better trick here would be to find the most frequently occurring colour among the pixels within one or two pixels of the edge of the image.)
Starting from the edges of the image, set all pixels that have that colour, and are connected to the edge through pixels of that colour, to transparent.
The edge of the piece of clothing is now going to look a bit ugly, because it consists of pixels that get their colour from both the background and the piece of clothing. To combat this you need to do a bit more work:
Find the edge of the piece of clothing through some edge detection mechanism.
Replace the colour of the edge pixels with a blend of the colour just "inside" the edge pixel (i.e. the colour of the clothing in that region) and transparent (if your output image format supports that).
If you want to get really fancy, you increase the transparency depending on how similar the colour of that pixel is to the background colour.
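The "most occurring border colour" and edge-connected clearing steps above might look like this in rough Python, with pixels as hashable colour values and exact colour matching (a real version would match within a tolerance and handle the edge blending separately):

```python
from collections import Counter, deque

def border_colour(img):
    """Most common colour among pixels on the outermost edge of
    the image -- the updated trick from step 2."""
    h, w = len(img), len(img[0])
    border = [img[y][x] for y in range(h) for x in range(w)
              if y in (0, h - 1) or x in (0, w - 1)]
    return Counter(border).most_common(1)[0][0]

def clear_background(img, transparent=None):
    """Set pixels of the border colour that are connected to the
    image edge to `transparent`, leaving same-coloured pockets
    inside the clothing untouched."""
    h, w = len(img), len(img[0])
    bg = border_colour(img)
    out = [row[:] for row in img]
    queue = deque((x, y) for y in range(h) for x in range(w)
                  if (y in (0, h - 1) or x in (0, w - 1)) and img[y][x] == bg)
    seen = set(queue)
    while queue:
        x, y = queue.popleft()
        out[y][x] = transparent
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in seen and img[ny][nx] == bg:
                seen.add((nx, ny))
                queue.append((nx, ny))
    return out
```

The connectivity requirement is what keeps background-coloured details inside the garment from being punched out.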
Basically, find the color of the background and subtract it, but I guess you knew this. It's a little tricky to do this all automatically, but it seems possible.
First, take a look at blob detection with OpenCV and see if this is basically done for you.
To do it yourself:
find the background: there are several options. Probably the easiest is to histogram the image; the large group of pixels with similar values is the background, and if there are two large clusters, the background will be the one with a big hole in the middle. Another approach is to take a band around the perimeter as the background color, but this seems inferior since, for example, reflection from a flash could dramatically brighten more centrally located background pixels.
remove the background: a first take at this would be to threshold the image based on the background color, then run the "open" or "close" algorithms on the result, and use it as a mask to select your clothing article. (The point of open/close is to avoid removing small background-colored items on the clothing, like black buttons on a white blouse, or bright reflections on black clothing.)
OpenCV is a good tool for this.
The trickiest part of this will probably be the shadow around the object (e.g. a black jacket on a white background will have a continuous grey shadow at some of the edges; where do you make the cut?), but if you get that far, post another question.
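The morphological "open" mentioned above (erode then dilate) can be sketched on a binary mask like this; pure Python with a 4-neighbour structuring element, purely for illustration (in practice OpenCV's `cv2.morphologyEx` does this for you):

```python
def erode(mask):
    """One 4-neighbourhood erosion step: a pixel survives only if
    it and all four neighbours (inside the image) are set."""
    h, w = len(mask), len(mask[0])
    return [[mask[y][x] and all(
        0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)))
        for x in range(w)] for y in range(h)]

def dilate(mask):
    """One 4-neighbourhood dilation step: a pixel is set if it or
    any of its four neighbours is set."""
    h, w = len(mask), len(mask[0])
    return [[mask[y][x] or any(
        0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)))
        for x in range(w)] for y in range(h)]

def open_mask(mask):
    """Morphological 'open': erode then dilate. Specks smaller than
    the structuring element disappear; large regions are preserved."""
    return dilate(erode(mask))
```

"Close" is the same pair in the opposite order (dilate then erode) and fills small holes instead of removing small specks.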
If you know the exact color intensity of the background, it will never change, and the articles of clothing will never coincide with this color, then this is a simple application of background subtraction: everything that is not that particular color intensity is considered an "on" pixel, one of interest. You can then use connected component labeling (http://en.wikipedia.org/wiki/Connected_Component_Labeling) to figure out separate groupings of objects.
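Connected component labeling as referenced above can be sketched as a flood fill per component (Python for illustration; `binary` is a 2D array of 0/1 values after the background subtraction step):

```python
def label_components(binary):
    """Label 4-connected components of 'on' pixels. Returns a 2D
    list of labels: 0 = background, 1..n = distinct components,
    numbered in scan order."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and labels[sy][sx] == 0:
                next_label += 1          # found a new component
                stack = [(sx, sy)]
                labels[sy][sx] = next_label
                while stack:             # flood-fill it
                    x, y = stack.pop()
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if 0 <= nx < w and 0 <= ny < h and binary[ny][nx] and labels[ny][nx] == 0:
                            labels[ny][nx] = next_label
                            stack.append((nx, ny))
    return labels
```

Each article of clothing then corresponds to one label, and you can keep the largest component and discard noise.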
for a color image, with the same background in every picture:
convert your image to HSV or HSL
determine the Hue value of the background (+/-10): do this step once, using Photoshop for example, then use the same value on all your pictures.
perform a color threshold: on the hue channel, exclude the hue of the background ([0, hue[ + ]hue, 255] typically); for all other channels, include the whole value range (0 to 255 typically). This will select the pixels which are NOT the background.
perform a "fill holes" operation (normally found among blob analysis or labelling functions) to complete the parts of the clothes which may have been the same color as the background.
now you have an image which is a "mask" of the clothes: non-zero pixels represent the clothes, 0 pixels represent the background.
this step of the processing depends on how you want to make pixels transparent: typically, if you save your image as PNG with an alpha (transparency) channel, use a logical AND (also called "masking") between the alpha channel of the original image and the mask built in the previous step.
voilà, the background has disappeared; save the resulting image.
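The hue threshold and alpha masking steps might look like this in Python, using the stdlib `colorsys` module (a toy version: it ignores hue wrap-around at 0/255 and skips the fill-holes step; the 0..255 hue scale matches the ranges quoted above):

```python
import colorsys

def clothes_mask(pixels, bg_hue, tol=10):
    """Mask of pixels whose hue is NOT within +/-tol (on a 0..255
    hue scale) of the background hue. pixels is a 2D list of
    (r, g, b) tuples with 0..255 channels."""
    mask = []
    for row in pixels:
        mask_row = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            hue = h * 255
            mask_row.append(abs(hue - bg_hue) > tol)  # True = clothes
        mask.append(mask_row)
    return mask

def apply_mask(pixels, mask):
    """AND the mask into the alpha channel: RGB is kept as-is,
    background pixels get alpha 0 (fully transparent)."""
    return [[(r, g, b, 255 if keep else 0)
             for (r, g, b), keep in zip(prow, mrow)]
            for prow, mrow in zip(pixels, mask)]
```

Saving the RGBA result as PNG then gives the transparent-background image described above.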