I am using LibGDX for a small app project, and I need to somehow take a series of sprites and place them (or their pixels rather) into a Pixmap. The basic idea is to take random sprites that are generated through various means while the app is running, and, only at specific times, merge some of them onto a single background sprite.
I believe that most of this can be done easily, but the step of getting the sprite images into the Pixmap isn't obvious to me. The sprites also have various transparent and semi-transparent pixels, so simply grabbing the color at each pixel while everything is drawn on the same screen won't work either, since that would pick up the background colors as well.
If there is a suitable alternative to this that would accomplish what I am looking for I would also love to hear it. Any help is highly appreciated.
I think you want to render your sprites to an off-screen buffer (called an "FBO" or FrameBuffer in libgdx), blending them as they're added, and then render that off-screen buffer to the screen as a single draw call? If so, this question should help: libgdx SpriteBatch render to texture
This requires OpenGL ES 2.0, which will eliminate support for some older devices.
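A minimal sketch of that flow, assuming LibGDX's FrameBuffer and SpriteBatch (the buffer size and class name here are illustrative, not from the question):

import com.badlogic.gdx.graphics.Pixmap.Format;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;

public class SpriteMerger {
    private final FrameBuffer fbo = new FrameBuffer(Format.RGBA8888, 512, 512, false);
    private final SpriteBatch batch = new SpriteBatch();

    /** Blends the given sprites onto the background inside the off-screen buffer. */
    public Texture merge(Sprite background, Iterable<Sprite> sprites) {
        fbo.begin();
        batch.begin();
        background.draw(batch);
        for (Sprite s : sprites) {
            s.draw(batch); // blended as they are added
        }
        batch.end();
        fbo.end();
        return fbo.getColorBufferTexture(); // draw this to the screen as a single call
    }
}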
Related
I'm trying to create a short opener for a clip by using a plane and an animated texture. I created the animated texture sheet, frame by frame, in Photoshop. It's a large texture, 12x12 frames. When I play it in Unity, it works, but at significantly lower quality.
I have seen posts about tweaking my import settings, but these are the only ones I see (no Max Size, etc.).
I did have to use an older version of Unity to make it work with the rest of the project I was working on. Is that the problem? I feel like even older versions should be capable of generating good quality.
Disable mipmaps. Mipmaps are downsized versions of your texture used for rendering at different distances. If the distance you have your image from the camera is far enough, Unity will use one of those smaller versions, making it blurry.
Set the Filter Mode to 'Point'. Bilinear filtering slightly blurs textures so that they scale and render at sub-pixel positions more smoothly, but this makes them less crisp.
You may want to set the Texture Type from 'Default' to 'Sprite 2D' or 'GUI'. I'm not sure what version of Unity you're on (2017?), as I don't recognize the layout of the inspector you have there. Sprite 2D settings tend to be optimized for images that are intended to be pixel perfect, and the same goes for GUI textures.
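The first two knobs (mipmap generation and point filtering) exist in most engines, not just Unity. As a hedged illustration of the same idea in LibGDX terms (the file name here is hypothetical):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.Texture.TextureFilter;

// Load without generating mipmaps, then use point ("Nearest") filtering so the
// frames are never blurred by filtering or by a downsized mip level.
Texture sheet = new Texture(Gdx.files.internal("frames.png"), false);
sheet.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);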
I am working on a simple painting app using LibGDX, and I am having trouble getting it to "paint" properly with the setup I am using. The way I am trying to do this is to draw with sprites, and add these individual sprites into a background texture, using LibGDX's FBO commands, when it is appropriate.
The problem I am having relates to blending: when the sprites are added to the texture I am building, any transparent pixels of the sprite that land on top of pixels drawn previously are brightened substantially, which obviously doesn't look very good. The following is what the result looks like, using a circle with a green-to-red gradient as the "brush". The top row is now part of the background texture, while the bottom one is still in its purely sprite-drawn form.
http://i238.photobucket.com/albums/ff307/Muriako/hmm.png
Basically, the transparent areas of each sprite brighten anything below them, and I need them to stay completely transparent. I have messed around with many different blending mode combinations and couldn't find one that was any better. GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, for example, did not have this problem, but instead the transparent pixels of each sprite seemed to be lowered in alpha and even took on some of the color from the layer below, which seemed even more annoying.
I will be happy to post code snippets on request, but my code has become a bit of a mess since I started trying to fix these problems, so I would rather only put up the relevant bits as needed.
What order are you drawing the sprites in? Alpha blending only works with respect to pixels already in the target, so you have to draw all alpha-containing things (and everything "behind" them) in Z order to get the right result. I'm using gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
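A minimal sketch of that setup, assuming a LibGDX SpriteBatch drawing into the FrameBuffer and a sprite list you keep sorted back-to-front (the names here are illustrative):

import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;

/** Paints sprites into the off-screen canvas back-to-front. */
static void paintToCanvas(FrameBuffer fbo, SpriteBatch batch, Iterable<Sprite> spritesBackToFront) {
    // GL20 constants have the same values as the GL10 ones above.
    batch.setBlendFunction(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
    fbo.begin();
    batch.begin();
    for (Sprite s : spritesBackToFront) {
        s.draw(batch); // each sprite blends against pixels already in the target
    }
    batch.end();
    fbo.end();
}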
I couldn't post the image, but I use the CGContextDrawRadialGradient method to draw a shaded blue ball (~40 pixel diameter), its shadow, and a "pulsing" white ring around the ball (inner and outer gradients on the ring). The ring starts at the edge of the blue ball and expands outward (the radius grows with a timer). The white ring fades as it expands outward, like a radio wave.
It looks great running in the simulator but runs incredibly slowly on the iPhone 4. The ring should pulse in about a second (as it does in the simulator), but it takes 15-20 seconds on the phone. I have been reading a little about CALayer and CGLayer and some material on gradient animation, but it isn't clear what I should be using for best performance.
How do I speed this up? Should I put the ball on one layer and the expanding ring on another? If so, how do I know which layer to update in a drawRect:?
Appreciate any guidance. Thanks.
The only way to speed something like that up is to pre-render it. Determine how many image frames you need to make it look good and then draw each frame into a context you created with CGBitmapContextCreate and capture the image using CGBitmapContextCreateImage. Probably the easiest way to animate the images would be to set the animationImages property of a UIImageView (although there are other options, like CALayer animations).
The newest Apple docs finally mention which pixel formats are supported in iOS so make sure to reference those when creating your bitmap context.
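The answer above is UIKit-specific, but the pre-render-then-play-back idea itself is engine-agnostic. For comparison, here is the same idea sketched with LibGDX's Pixmap and Animation classes (all names and sizes are hypothetical, and a thin circle outline stands in for the gradient ring):

import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Animation;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.utils.Array;

public final class PulseRing {
    /** Pre-renders the expanding, fading ring once, so playback is just texture swaps. */
    public static Animation<TextureRegion> build(int size, int frameCount) {
        Array<TextureRegion> frames = new Array<>(frameCount);
        for (int i = 0; i < frameCount; i++) {
            float t = i / (float) Math.max(1, frameCount - 1); // 0..1 across the pulse
            Pixmap pm = new Pixmap(size, size, Pixmap.Format.RGBA8888);
            pm.setColor(1f, 1f, 1f, 1f - t);                   // ring fades as it expands
            int radius = (int) (size * 0.2f + t * size * 0.3f);
            pm.drawCircle(size / 2, size / 2, radius);
            Texture tex = new Texture(pm);                     // uploaded once; no per-frame drawing later
            pm.dispose();
            frames.add(new TextureRegion(tex));
        }
        return new Animation<>(1f / frameCount, frames);       // whole pulse plays in about a second
    }
}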
I am doing my iPhone graphics using OpenGL. In one of my projects, I need to use an image as a texture in OpenGL. The .png image is 512x512 in size, its background is transparent, and the image has a thick blue line in its center.
When I apply my image to a polygon in OpenGL, the texture appears as if the transparent part in the image is black and the thick blue line is seen as itself. In order to remove the black part, I used blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
Then my black part of the texture in the polygon is removed. Now only the blue band is seen. Thus the problem is solved.
But I want to add many such images and make many objects in OpenGL. I am able to do that, but the frame rate drops very low as I add more and more images to objects. When I comment out the blending, the frame rate is normal, but the images are not visible.
Since I do not have good fps, the graphics are a bit slow and I get a shaky effect.
So:
1) Is there any other method than blending to solve my problem?
2) How can I improve the frame rate of my OpenGL app? What steps need to be taken to implement my graphics properly?
If you want parts of an object to be transparent, the only way is to blend the pixel data for the triangle with what is currently in the buffer (which is what you are doing). Normally, with solid textures, the new pixel data for a triangle simply overwrites whatever was in the buffer (as long as it is closer, i.e., it passes the z-buffer test). With transparency, however, the GPU has to look at the transparency of that part of the texture, then at what is behind it, all the way back to something solid, and combine all of those overlapping layers of transparent fragments to produce the final image.
If all you want transparency for is something like a simple tree sprite, cutting away the empty space around the sides of the trunk and so on, then you may be better off providing more complex geometry that actually defines the shape of the trunk, and thus avoid transparency altogether.
Sadly, I don't think there is much you can do to speed up your FPS other than cutting down the amount of transparency you are calculating. You could even add an optimization that checks each image to see whether alpha blending can be turned off for it, as sketched below; depending on how much you are trying to push through, that may save time in the long run.
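A minimal sketch of that check in plain Java, assuming you can get at the raw RGBA8888 bytes of each image at load time (how you obtain them depends on your loader):

public final class AlphaCheck {
    /** Returns true if any pixel is not fully opaque; run this once per image at load time. */
    public static boolean needsBlending(byte[] rgba) {
        for (int i = 3; i < rgba.length; i += 4) { // every 4th byte is the alpha channel
            if ((rgba[i] & 0xff) != 0xff) return true;
        }
        return false;
    }
}

Textures for which this returns false can be drawn with GL_BLEND disabled, skipping the per-pixel read-modify-write described above.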
I'm writing a drawing application, and the drawing canvas is an OpenGL texture. When you draw onto the canvas, it determines which region of the canvas texture has been changed, and copies that pixel data out (using glReadPixels) before applying the changes you made.
To undo, I want to simply revert to the previous texture state using that pixel data that was copied out. However, OpenGL ES doesn't provide a glDrawPixels command. What's the best way to do it?
I've considered two options, but I'm not sure either is that great:
Create a temporary texture using the pixels I copied out and draw that in. (However, the copied region is not a power of two!)
Unbind the large canvas texture completely, manually alter the bytes of the texture, and then put it back into OpenGL. I'm not using any sort of compression, so this might not be that bad. But it seems like a hack?
Anybody have any ideas? I'd really appreciate it!
In case anyone stumbles across this while trying to do something similar, I've come up with a solution that seems to work well.
Grab an image of the current texture by binding it to the framebuffer and then writing the framebuffer to a CGImageRef.
Create a new CGContext and draw the existing texture's CGImageRef into it. Then draw the old texture data into the portion that the user changed, effectively "undoing" that change to the image.
Destroy the old OpenGL texture and create a new texture from the CGContext.
I think this is a pretty slow way of going about things, but I don't need huge performance - my real concern was limiting the amount of data being kept to represent the "old" texture.
If you need help with this (there's quite a bit of code) feel free to email me.
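As a footnote on the question's second option: OpenGL ES does let you overwrite just a sub-rectangle of an existing texture with glTexSubImage2D, which avoids both the power-of-two issue and the full CGImage round trip. A hedged sketch in LibGDX-style Java (the class and method names are made up, and it assumes the canvas framebuffer/texture are bound at the right times):

import java.nio.ByteBuffer;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.utils.BufferUtils;

public final class RegionUndo {
    private final ByteBuffer saved;
    private final int x, y, w, h;

    /** Call while the canvas framebuffer is bound, before applying the stroke. */
    public RegionUndo(int x, int y, int w, int h) {
        this.x = x; this.y = y; this.w = w; this.h = h;
        saved = BufferUtils.newByteBuffer(w * h * 4);
        Gdx.gl.glReadPixels(x, y, w, h, GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, saved);
    }

    /** Call with the canvas texture bound to revert just that region. */
    public void restore() {
        // glTexSubImage2D has no power-of-two restriction on the region;
        // only the full texture is restricted (on ES 1.x without extensions).
        Gdx.gl.glTexSubImage2D(GL20.GL_TEXTURE_2D, 0, x, y, w, h,
                GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, saved);
    }
}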