Change image hue in libGDX

I am trying to create an app in which I have a single clothing picture (a women's ethnic top, with an obvious shadow effect to make it look real), and I want the user to click a color tab on the right side so that the color of the top changes while the shadow effect stays the same.
I am using libGDX for this. I have created a texture from the input top image and used a linear filter to smooth it. The original top looks like this:
The new cloth that I want to place on the top is:
Initially I converted every pixel of the shirt from RGB space to HSV space. I tried the following approaches to solve the problem:
I used the hue and saturation from the new cloth, and the brightness from the shirt (I have to keep the shadows, which have low brightness). The problem here is that the fabric color becomes light due to the low brightness in the top (the brightness comes from the top). It produces the image below:
I used the hue and saturation from the new cloth, and set a cut-off for the brightness taken from the top; 70% is the cut-off. The problem here is that the shadows are not clear, as the fabric is dark:
I used a z-score to adjust the brightness at each pixel: the z-score of the brightness at every pixel in the blouse is kept the same, and the corresponding brightness for the new fabric is calculated from it. This image also does not look very acceptable:
I need help with what approach I should use to achieve this. Am I on the right track, or am I doing something completely wrong? Maybe I should remove the outer color to see whether the top looks realistic (since the background and the top end up the same color, is this the reason for the unrealistic result?).
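For reference, below is a minimal sketch of the first approach (hue and saturation from the new cloth, brightness/value from the original top) done on a libGDX Pixmap. The class and method names are just placeholders, both images are assumed to be the same size, and Color.toHsv/fromHsv require a reasonably recent libGDX version (on older versions you would inline your own RGB/HSV conversion):

import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.Texture;

public class HsvTransfer {
    /** Hue/saturation from the cloth swatch, brightness (value) and alpha from the top. */
    public static Texture recolor(Pixmap top, Pixmap cloth) {
        Pixmap out = new Pixmap(top.getWidth(), top.getHeight(), Pixmap.Format.RGBA8888);
        Color topColor = new Color();
        Color clothColor = new Color();
        float[] topHsv = new float[3];
        float[] clothHsv = new float[3];
        for (int y = 0; y < top.getHeight(); y++) {
            for (int x = 0; x < top.getWidth(); x++) {
                Color.rgba8888ToColor(topColor, top.getPixel(x, y));
                Color.rgba8888ToColor(clothColor, cloth.getPixel(x, y));
                topColor.toHsv(topHsv);
                clothColor.toHsv(clothHsv);
                // hue + saturation from the new cloth, brightness from the top (keeps the shadows)
                Color result = new Color().fromHsv(clothHsv[0], clothHsv[1], topHsv[2]);
                result.a = topColor.a; // keep the top's alpha so a transparent background stays transparent
                out.drawPixel(x, y, Color.rgba8888(result));
            }
        }
        Texture texture = new Texture(out);
        out.dispose();
        return texture;
    }
}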

First of all, I'm not really sure that LibGDX is the proper or best tool for what you want to implement. To me it looks more like a job for an image-processing library, such as OpenCV for mobile, for example.
The truth is that I haven't used it yet (although I have some experience with OpenCV itself), and a quick search didn't return any "official" tutorial on how to integrate it with libGDX (although I'm pretty sure it is possible).
Anyway, if you want to use LibGDX to operate on colour like this, SpriteBatch has the method
void setColor(Color tint)
that you can use to colorize whatever the SpriteBatch renders.
With the default generated project, the effect looks like this:
without tint:
batch.begin();
batch.draw(img, 0, 0);
batch.end();
tint applied:
batch.begin();
batch.setColor(Color.GREEN);
batch.draw(img, 0, 0);
batch.end();
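Note that setColor tints everything the batch draws from that point on, so if only the top should be tinted, reset the colour when you are done, for example:

batch.begin();
batch.setColor(Color.GREEN);   // tint only the top
batch.draw(img, 0, 0);
batch.setColor(Color.WHITE);   // white = no tint for anything drawn afterwards
batch.end();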
Please also notice that this kind of tinting is naive: it will colorize the background as well, so you need to cut the image out of its background and save it as a .PNG with transparency.
Take a look at this example:

Related

LÖVE viewport like Libgdx

I wonder if the LÖVE framework has the same feature as Libgdx's viewports, because this feature was really great when I used Libgdx, and I wonder if there's anything similar in LÖVE.
About viewports: https://github.com/libgdx/libgdx/wiki/Viewports
If, by viewport, you mean using normalised coordinates (resolution-independent), then yes, LÖVE can do that.
Although it's not available by default in the framework itself, there's always a possibility to add your own features.
You could make a Viewport system using LÖVE's canvases.
Start by creating a canvas with fixed dimensions,
then make your game using percentages of these dimensions instead of regular pixel positioning.
For example, player.x = 80 (near the left side of an 800-pixel-wide screen) becomes player.x = canvas:getWidth()*0.1.
Once you've drawn everything into your virtual window, that is, the canvas, you can scale it and render your game to fit any window resolution.
I suggest that you take a look at this library that handles all the scaling stuff for you, once you provide your game's virtual dimensions.

Generating fast color rectangles

I am designing a more powerful color picker for Qt and am looking for some advice. How would one go about generating fast, real-time color rectangles such as the ones found in Photoshop (for HSB and RGB)? I was originally thinking of using QImage and scanLine() to calculate all the pixels individually, but this would probably be too slow.
I was thinking it would be better to write an OpenGL shader. As I recall, you can assign colors to vertices and it will interpolate between them for you. I just have no idea how this would be done in Qt, or whether it is even worth the effort.
I am using QGraphicsView to display the rectangle. Any advice would be appreciated.
OK, so looking into QGradient a bit more: could you not use multiple QGradients to create the effect you need?
For the last of the three examples, you could create a single gradient with multiple stops for the colours themselves, then overlay it with a QGradient from black (alpha 0) to black (alpha 255) with appropriate stops so that the darkening comes in at the right point.

Animate Sprite using color in top to bottom effect

How can I spread a color from top to bottom in a sprite using the CCTintTo method?
I am using the CCTintTo method to tint a sprite, but I want it to look like the color spreads from top to bottom.
What should I do for this type of animation?
Thanks in advance.
If you are using cocos2d 2.0+, you could write a shader to do this and set the shaderProgram property of the sprite. It's not that hard; follow the examples in the distribution. My first shader took me about half a work day to get working, and maybe another half day to integrate that technique properly into my overall software architecture. Good luck :).
Look here for an introduction to shaders; play with it in a side project until you are comfortable integrating it into your main trunk.
That's not possible with color tinting. Changing a node's color property or using the tint actions will only tint the entire sprite with a single color; there will be no gradient.
You would have to custom draw the sprite and apply/adapt the gradient rendering code from CCLayerGradient.
Yes, CCLayerGradient is what you are looking for. By the way, what technique are you using for coloring? If you are using the CGContextSetRGBFillColor method to fill the sprite with color then it would be tricky, but if you are using images to fill the color, then a nice sequence of images in a plist can be used to produce an animation that will give you exactly the effect you are looking for.
Take the sequence of images and animate them using CCSpriteBatchNode; otherwise, if you want to use RGB colors to fill the sprite, you will have to look at a gradient effect.

LibGDX - Sprites to texture using FBO

I am working on a simple painting app using LibGDX, and I am having trouble getting it to "paint" properly with the setup I am using. The way I am trying to do this is to draw with sprites, and add these individual sprites into a background texture, using LibGDX's FBO commands, when it is appropriate.
The problem I am having is something related to blending: when the sprites are added to the texture I am building, any transparent pixels of the sprite that are on top of pixels drawn previously are brightened up substantially, which obviously doesn't look very good. The following shows what the result looks like, using a circle with a green-to-red gradient as the "brush". The top row is now part of the background texture, while the bottom one is still in its purely sprite-drawn form.
http://i238.photobucket.com/albums/ff307/Muriako/hmm.png
Basically, the transparent areas of each sprite are brightening anything below them, and I need them to stay completely transparent. I have messed around with many different blending mode combinations and couldn't find one that was any better. GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, for example, did not have this problem, but instead the transparent pixels of each sprite seemed to be lowered in alpha and even took on some of the color from the layer below, which seemed even more annoying.
I will be happy to post any code snippets on request, but my code has become a bit of a mess since I started trying to fix these problems, so I would rather only put up the necessary bits as needed.
What order are you drawing the sprites in? Alpha blending only works with respect to pixels already in the target, so you have to draw all alpha-containing things (and everything "behind" them) in Z order to get the right result. I'm using glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
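As a rough illustration of stamping sprites into an FBO with that blend function in LibGDX (a minimal sketch; brushTexture, stampX/stampY and the buffer size are placeholders, and camera/projection setup is omitted):

// created once, e.g. in create()
FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, width, height, false);

// "bake" one brush stamp into the background texture
fbo.begin();
batch.setBlendFunction(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
batch.begin();
batch.draw(brushTexture, stampX, stampY); // draw overlapping stamps back-to-front
batch.end();
fbo.end();

// later, draw the accumulated canvas (the FBO colour texture is y-flipped)
Texture canvas = fbo.getColorBufferTexture();
batch.begin();
batch.draw(canvas, 0, 0, width, height, 0, 0, canvas.getWidth(), canvas.getHeight(), false, true);
batch.end();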

Edge Detection and transparency

Using images of articles of clothing taken against a consistent background, I would like to make all pixels in the image transparent except for the clothing. What is the best way to go about this? I have researched the algorithms that are common for this and the open-source library OpenCV. Aside from rolling my own or using OpenCV, is there an easy way to do this? I am open to any language or platform.
Thanks
If your background is consistent within an image but inconsistent across images it could get tricky, but here is what I would do:
Convert the image into some intensity/colour representation such as YUV or Lab.
Make a histogram over the colour part. Find the most frequently occurring colour; this is (most likely) your background. (Update: maybe a better trick here would be to find the most frequently occurring colour among all pixels within one or two pixels of the edge of the image.)
Starting from the edges of the image, set all pixels that have that colour and are connected to the edge through pixels of that colour to transparent.
The edge of the piece of clothing is now going to look a bit ugly, because it consists of pixels that get their colour from both the background and the piece of clothing. To combat this you need to do a bit more work:
Find the edge of the piece of clothing through some edge detection mechanism.
Replace the colour of the edge pixels with a blend of the colour just "inside" the edge pixel (i.e. the colour of the clothing in that region) and transparent (if your output image format supports that).
If you want to get really fancy, you can increase the transparency depending on how similar that pixel's colour is to the background colour.
Basically, find the color of the background and subtract it, but I guess you knew this. It's a little tricky to do this all automatically, but it seems possible.
First, take a look at blob detection with OpenCV and see if this is basically done for you.
To do it yourself:
Find the background: there are several options. Probably the easiest is to histogram the image; the large group of pixels with similar values is the background, and if there are two large groups, the background will be the one with a big hole in the middle. Another approach is to take a band around the perimeter as the background color, but this seems inferior since, for example, reflection from a flash could dramatically brighten the more centrally located background pixels.
Remove the background: a first take at this would be to threshold the image based on the background color, run the morphological "open" or "close" operations on the result, and then use it as a mask to select your article of clothing. (The point of open/close is to avoid removing small background-colored items on the clothing, like black buttons on a white blouse, or, say, bright reflections on black clothing.)
OpenCV is a good tool for this.
The trickiest part of this will probably be the shadow around the object (e.g. a black jacket on a white background will have a continuous gray shadow at some of the edges; where do you make the cut?), but if you get that far, post another question.
If you know the exact color intensity of the background, it will never change, and the articles of clothing will never coincide with this color, then this is a simple application of background subtraction: everything that is not that particular color intensity is considered an "on" pixel, one of interest. You can then use connected-component labeling (http://en.wikipedia.org/wiki/Connected_Component_Labeling) to figure out separate groupings of objects.
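A minimal sketch of that idea with OpenCV's Java bindings (OpenCV 3+); the file name and the pure-green background color are placeholder assumptions, and in practice you would allow a small tolerance around the background color:

import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class BackgroundSubtract {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat img = Imgcodecs.imread("shirt.jpg");            // BGR image
        // background = pixels that exactly match the known background color (pure green here)
        Mat background = new Mat();
        Core.inRange(img, new Scalar(0, 255, 0), new Scalar(0, 255, 0), background);
        // "on" pixels of interest = everything that is NOT the background color
        Mat foreground = new Mat();
        Core.bitwise_not(background, foreground);
        // connected-component labeling to find separate groupings of objects
        Mat labels = new Mat(), stats = new Mat(), centroids = new Mat();
        int n = Imgproc.connectedComponentsWithStats(foreground, labels, stats, centroids);
        System.out.println("found " + (n - 1) + " object(s) besides the background");
    }
}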
For a color image, with the same background in every picture:
Convert your image to HSV or HSL.
Determine the hue value of the background (+/- 10): do this step once, using Photoshop for example, then use the same value for all your pictures.
Perform a color threshold: on the hue channel, exclude the hue of the background (keep [0, hue) and (hue, 255], typically); for all other channels include the whole value range (0 to 255, typically). This will select the pixels which are NOT the background.
Perform a "fill holes" operation (normally found alongside blob analysis or labelling functions) to complete the parts of the clothes which may have been the same color as the background.
Now you have an image which is a "mask" of the clothes: non-zero pixels represent the clothes, zero pixels represent the background.
This last step depends on how you want to make pixels transparent: typically, if you save your image as a PNG with an alpha (transparency) channel, use a logical AND (also called "masking") operation between the alpha channel of the original image and the mask built in the previous step.
Voilà, the background has disappeared; save the resulting image.
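Below is a rough sketch of those steps with OpenCV's Java bindings (OpenCV 3+). The background hue of 60 and the file names are placeholder assumptions, and morphological closing is used here as an approximation of "fill holes":

import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class RemoveBackground {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat img = Imgcodecs.imread("top.jpg");                       // BGR
        // convert to HSV and select the background hue (assumed ~60, +/- 10)
        Mat hsv = new Mat();
        Imgproc.cvtColor(img, hsv, Imgproc.COLOR_BGR2HSV);
        Mat backgroundMask = new Mat();
        Core.inRange(hsv, new Scalar(50, 0, 0), new Scalar(70, 255, 255), backgroundMask);
        // pixels which are NOT the background
        Mat clothesMask = new Mat();
        Core.bitwise_not(backgroundMask, clothesMask);
        // "fill holes" approximation: morphological close on the clothes mask
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(15, 15));
        Imgproc.morphologyEx(clothesMask, clothesMask, Imgproc.MORPH_CLOSE, kernel);
        // use the mask as the alpha channel and save as PNG
        Mat bgra = new Mat();
        Imgproc.cvtColor(img, bgra, Imgproc.COLOR_BGR2BGRA);
        List<Mat> channels = new ArrayList<Mat>();
        Core.split(bgra, channels);
        channels.set(3, clothesMask);                                // alpha = mask
        Core.merge(channels, bgra);
        Imgcodecs.imwrite("top_transparent.png", bgra);
    }
}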
