SDL2 library, no alpha channel - format

I am using SDL2 and my pixel format is SDL_PIXELFORMAT_RGB888. I want an alpha channel in my textures, but this format doesn't support alpha. My window is created with
m_windowOwnPtr = SDL_CreateWindow(windowTitle,
SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
m_screenWidth * magnificationLevel,
m_screenHeight * magnificationLevel, 0);
and my surface is m_SurfaceNoOwnPtr = SDL_GetWindowSurface(m_optrWindow);
How can I change the pixel format to SDL_PIXELFORMAT_RGBA8888 (for example) or something else to enable alpha channel support?
Thanks.
I tried to set the fields of the SDL_PixelFormat struct manually, but this did not solve my problem.
I also tried to create a new surface from my surface with SDL_ConvertSurfaceFormat and the SDL_PIXELFORMAT_RGBA8888 enum member, free my old surface, and assign the new surface to the window, but this did not work either.
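For reference, this is roughly what the SDL_ConvertSurfaceFormat pattern looks like when applied to a source surface instead of the window surface (a minimal sketch, not the thread's fix; the loaded surface and the blit/blend calls are assumptions, and error checks are omitted):
// Convert a source surface to RGBA8888 and composite it onto the window surface.
SDL_Surface* loaded = SDL_LoadBMP("sprite.bmp");                 // any source surface
SDL_Surface* withAlpha = SDL_ConvertSurfaceFormat(loaded,        // returns a new surface
                                                  SDL_PIXELFORMAT_RGBA8888,
                                                  0);            // flags are unused, pass 0
SDL_FreeSurface(loaded);                                         // original no longer needed
SDL_SetSurfaceBlendMode(withAlpha, SDL_BLENDMODE_BLEND);         // blend using the alpha channel
SDL_BlitSurface(withAlpha, NULL, m_SurfaceNoOwnPtr, NULL);       // composite onto the window surface
SDL_UpdateWindowSurface(m_windowOwnPtr);
The window surface returned by SDL_GetWindowSurface keeps whatever format the display uses, so converting source surfaces (or switching to SDL_Renderer/SDL_Texture) is the usual way to get alpha blending on top of it.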

Related

OpenGL ES multiple color buffer

I am using a framebuffer with two color attachments. I want to render into both color attachments in one render call.
layout (location = 0) out vec3 _color;
layout (location = 1) out vec3 _depth;
_color = texture(_colorImage, coord).xyz;
_depth = texture(_depthImage, coord).xyz;
I tested my application on the computer, but now I want to do the same in a mobile application. How can I render into more than one color attachment in OpenGL ES?
The preferred version would be OpenGL ES 2.0, but that is not a hard requirement.
In OpenGL ES 2.0 you can't render to more than one color attachment; the core API doesn't support it.
From OpenGL ES 3.0 onwards it works exactly the same as desktop OpenGL Multiple Render Targets.
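For reference, a minimal sketch of the ES 3.0 setup the answer refers to (the texture handles colorTex and depthTex are assumed to already exist; error checks omitted):
// OpenGL ES 3.0: attach two color textures to an FBO and enable both draw
// buffers so a single draw call writes to both fragment shader outputs.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, depthTex, 0);
// Map fragment outputs: location 0 -> attachment 0, location 1 -> attachment 1.
const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);
// ... render; both _color and _depth are written in one pass.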

How do I set my palette to grayscale in C++?

I'm reading a live feed from a mono camera and need to take a snapshot when pressing a button.
When I convert the pointer the camera passes me into a bitmap (8 bpp) for further image processing,
the colors come out all wrong.
I'm guessing it's because I didn't set the palette of the bitmap correctly, so I searched and
came across some VB code that sets the color palette of a bitmap to grayscale.
I want to do the same in C++ under Visual Studio 2010, but in C++ the ColorPalette constructor is sealed, so I have no way of declaring a new grayscale palette and then assigning it to my bitmap.
Below is the example code I found, written in Visual Basic:
Dim bmpobj As Bitmap
Dim pal As System.Drawing.Imaging.ColorPalette
pal = bmpobj.Palette
For i = 0 To 255
pal.Entries(i) = Color.FromArgb(i, i, i)
Next
bmpobj.Palette = pal
So what I'm asking is this: is there any way to set my bitmap's palette to grayscale?
I think that just changing the color palette won't change your image to grayscale.
You will need to convert each color pixel in the image to grayscale. The algorithms are here.
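For completeness, the VB snippet above translates roughly as follows to native GDI+ in C++ (a sketch, assuming the snapshot bitmap is an 8 bpp indexed Gdiplus::Bitmap; the function name and the "bmp" pointer are hypothetical, and as the answer notes the palette alone only helps if the bitmap really is indexed):
#include <windows.h>
#include <gdiplus.h>
#include <vector>

void SetGrayscalePalette(Gdiplus::Bitmap* bmp)
{
    // ColorPalette is a variable-length struct: allocate room for 256 entries.
    std::vector<BYTE> buffer(sizeof(Gdiplus::ColorPalette) + 255 * sizeof(Gdiplus::ARGB));
    Gdiplus::ColorPalette* pal = reinterpret_cast<Gdiplus::ColorPalette*>(buffer.data());
    pal->Flags = Gdiplus::PaletteFlagsGrayScale;
    pal->Count = 256;
    for (UINT i = 0; i < 256; ++i)
        pal->Entries[i] = Gdiplus::Color::MakeARGB(255, (BYTE)i, (BYTE)i, (BYTE)i);
    bmp->SetPalette(pal);   // mirrors "bmpobj.Palette = pal" in the VB code
}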

Drawing in a Windows memory device context - resolution?

I am trying to export a plot generated by my program in the form of a bitmap. No problem with creating a bitmap in memory (with CreateDIBSection) and saving it to disk (using GDI+). To draw I have to use a device context, and the only one that is easily available is compatible with the screen. So I create a compatible DC, select the bitmap I already created into this device context, and I am ready to draw and print into the bitmap. And it works, but it gives me no control over the size of the plot (note: size of the plot, not size of the bitmap). If I understand correctly what is happening, the mapping modes follow the DPI of the screen DC, which in turn means the size of the plot (and the text I put on it) differs between computers.
Is there any way of changing the DPI resolution for the device context? Or perhaps there exists a better way of doing what I am trying to do? The perfect solution would be to ask the user for the bitmap size in pixels and be able to draw a plot that nicely fits the bitmap.
You don't have to use a device context to draw, since you already use GDI+ over GDI. Just associate your Gdiplus::Graphics object with a Gdiplus::Bitmap instead of an HDC. Units, transformations, and the bitmap size are then all independent of the device. Hope that helps.
Gdiplus::Bitmap bitmap( L"miranda_kerr.png" ); // draw over existing
Gdiplus::Graphics graphics( &bitmap );
Gdiplus::Pen pen( Gdiplus::Color(255,0,0));
Gdiplus::Status status = graphics.DrawLine( &pen, 20, 20, 100, 500 );
//...
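A sketch of the same idea with a bitmap of a user-chosen pixel size instead of an existing file (GDI+ startup/shutdown and the Save() encoder lookup are omitted; the size values are placeholders):
// Draw into an in-memory 32-bit bitmap whose size the user picked.
int widthPx = 800, heightPx = 600;                      // e.g. asked from the user
Gdiplus::Bitmap bitmap(widthPx, heightPx, PixelFormat32bppARGB);
Gdiplus::Graphics graphics(&bitmap);
graphics.Clear(Gdiplus::Color(255, 255, 255, 255));     // white background
Gdiplus::Pen pen(Gdiplus::Color(255, 0, 0, 255));       // blue plot line
graphics.DrawLine(&pen, 20, heightPx - 20, widthPx - 20, 20);
// bitmap.Save(L"plot.png", ...) with a PNG encoder CLSID would then write it to disk.
Because the Graphics object is bound to the bitmap rather than a screen-compatible DC, the drawing is laid out in the bitmap's own pixel coordinates regardless of the monitor's DPI.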

How to read the original alpha channel from PNG in J2ME?

I'm writing a simple J2ME game that uses PNG images with an 8-bit alpha channel. Problem: not all hardware supports full alpha transparency rendering. However, since my game is pretty static in nature (at the beginning, "sprites" are laid out onto a background image, based on the current screen size, and that's about it), I thought it would be possible to prerender those transparent images directly onto the background during game initialization and use that later in the game. I can't prerender them in Photoshop as their positions are not known in advance.
But, it seems there is no way to read the original alpha channel on devices that do not support semi-transparency as it gets resampled during PNG loading. Is there some library that can help with that? Or is it a good idea to store alpha channels separately (e.g. as separate 8-bit PNG images) and manually apply them?
Thanks!
PNG images also have transparency support. If you want to create a transparent image, you have to read the RGB data along with the alpha channel and process the alpha:
Image transPNG = Image.createImage("/trans.png"); // load the transparent image
int[] rgbData = new int[transPNG.getWidth() * transPNG.getHeight()];
transPNG.getRGB(rgbData, 0, transPNG.getWidth(), 0, 0, transPNG.getWidth(), transPNG.getHeight());
Image transparentImage = Image.createRGBImage(rgbData, transPNG.getWidth(), transPNG.getHeight(), true); // process alpha
transPNG = null;
The code above shows how to create and use the transparent image.
I can't promise this will help, but you can try this way of reading the alpha channel using the standard Java imaging classes.
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

BufferedImage image = ImageIO.read(new File(name));
int[] alpha = new int[1]; // will hold the alpha value for one pixel
image.getAlphaRaster().getPixel(x, y, alpha);
System.out.println(alpha[0]); // prints the alpha value at (x, y)

How to draw ARGB bitmap using GDI+?

I have a valid HBITMAP handle of ARGB type. How do I draw it using GDI+?
I've tried this method:
graphics.DrawImage(Bitmap::FromHBITMAP(m_hBitmap, NULL), 0, 0);
But it doesn't use the alpha channel.
I've got a working sample:
Get the info using the bitmap handle: image size and bits
BITMAP bmpInfo;
::GetObject(m_hBitmap, sizeof(BITMAP), &bmpInfo);
int cxBitmap = bmpInfo.bmWidth;
int cyBitmap = bmpInfo.bmHeight;
void* bits = bmpInfo.bmBits;
Create and draw a new GDI+ bitmap from those bits with pixel format PixelFormat32bppARGB
Gdiplus::Graphics graphics(dcMemory);
Gdiplus::Bitmap bitmap(cxBitmap, cyBitmap, cxBitmap*4, PixelFormat32bppARGB, (BYTE*)bits);
graphics.DrawImage(&bitmap, 0, 0);
I had similar issues getting my transparency channel to work. In my case, I knew what background color should be used for the transparent area (it was solid). I used the Bitmap.GetHBITMAP(..) method and passed in the background color to be used for the transparent area. This was a much easier solution than other attempts I was trying using LockBits and re-creating the Bitmap with PixelFormat32bppARGB, as well as cloning. In my case, the Bitmap was ignoring the alpha channel since it was created from Bitmap.FromStream.
I also had some very strange problems with the background area of my image being changed slightly. For example, instead of pure white, it was off-white, like 0xfff7f7. This was the case whether I was using JPG (with blended colors) or PNG with transparent colors.
See my question and solution at
GDI+ DrawImage of a JPG with white background is not white
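For illustration, the GetHBITMAP approach described above looks roughly like this (a sketch; "source" is a hypothetical Gdiplus::Bitmap loaded elsewhere, e.g. via Bitmap::FromStream):
// Flatten the alpha channel onto a known solid background while converting to HBITMAP.
HBITMAP hBmp = NULL;
Gdiplus::Status st = source->GetHBITMAP(Gdiplus::Color(255, 255, 255), &hBmp);  // white background
if (st == Gdiplus::Ok)
{
    // hBmp can now be used with plain GDI; the transparent area is filled with white.
    // Remember to DeleteObject(hBmp) when done.
}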
Ah... but .Net doesn't use HBITMAP and GDI+ is a C++ library atop the basic Windows GDI, so I'm assuming you're using non-.Net C++.
GDI+ has a Bitmap class, which has a FromHBITMAP() method.
Once you have the GDI+ Bitmap instance, you can use it with the GDI+ library.
Of course, if you can write your program in C# using .Net it will be a lot easier.
