I want to draw some text on a memory buffer, but don't know how to. please help, many thanks!
Code:
width = 640;
height = 480;
pBuf = malloc( width * 3 * height );
memset( pBuf, 0, width * 3 * height );
//DrawText is the function that needs to be implemented, and I don't know how to do it.
DrawText( "this is the text that i want to draw on the buffer", pBuf, width, height, 3 );
//now there is text on the memory buffer pointed to by pBuf.
I have to fill the ffmpeg AVFrame->data from a cairo surface pixel data. I have this code:
/* Image info and pixel data */
width = cairo_image_surface_get_width( surface );
height = cairo_image_surface_get_height( surface );
stride = cairo_image_surface_get_stride( surface );
pix = cairo_image_surface_get_data( surface );
for( row = 0; row < height; row++ )
{
    data = pix + row * stride;
    for( col = 0; col < width; col++ )
    {
        img->video_frame->data[0][row * img->video_frame->linesize[0] + col] = data[0];
        img->video_frame->data[1][row * img->video_frame->linesize[1] + col] = data[1];
        //img->video_frame->data[2][row * img->video_frame->linesize[2] + col] = data[2];
        data += 4;
    }
    img->video_frame->pts++;
}
But the colors in the exported video are wrong. The original heart is red. Can someone point me in the right direction? The encode.c example is sadly useless, and on the Internet there is a lot of confusion about Y, Cb, and Cr, which I really don't understand. Please feel free to ask for more details. Many thanks.
You need to use libswscale to convert the source image data from RGB24 to YUV420P.
Something like:
int width = cairo_image_surface_get_width( surface );
int height = cairo_image_surface_get_height( surface );
int stride = cairo_image_surface_get_stride( surface );
uint8_t *pix = cairo_image_surface_get_data( surface );

const uint8_t *data[1] = { pix };
int linesize[1] = { stride };

struct SwsContext *sws_ctx = sws_getContext( width, height, AV_PIX_FMT_RGB24,
                                             width, height, AV_PIX_FMT_YUV420P,
                                             SWS_BILINEAR, NULL, NULL, NULL );
sws_scale( sws_ctx, data, linesize, 0, height,
           img->video_frame->data, img->video_frame->linesize );
sws_freeContext( sws_ctx );
See the example here: scaling_video
I use glReadPixels to read the pixel data into a bitmap, but I get a wrong bitmap.
The main code is below:
JNI code:
jint size = width * height * 4;
GLubyte *pixels = static_cast<GLubyte *>(malloc(size));
glReadPixels( 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels );
Kotlin code:
val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
var dataBuf = ByteBuffer.wrap(pixels)
dataBuf.rewind()
bitmap.copyPixelsFromBuffer(dataBuf)
And I get the wrong bitmap, shown below.
The correct one should look like this:
Can anyone tell me what is wrong?
The reason is that glReadPixels returns rows bottom-up (the OpenGL framebuffer origin is the lower-left corner), while Bitmap.copyPixelsFromBuffer expects the first row to be the top one, so the copied image comes out vertically flipped. You need to reverse the order of the rows before copying the pixels into the bitmap.
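For what it's worth, the row reversal can be sketched in plain C (a minimal illustration, not the asker's JNI code; it assumes a tightly packed RGBA buffer of width × height pixels, bottom row first, as glReadPixels returns it):

```c
#include <stdlib.h>
#include <string.h>

/* Reverse the row order of a tightly packed RGBA buffer in place,
 * converting between OpenGL's bottom-up layout and the top-down
 * layout that Bitmap.copyPixelsFromBuffer expects. */
static void flip_rows(unsigned char *pixels, int width, int height)
{
    size_t row_bytes = (size_t)width * 4;   /* 4 bytes per RGBA pixel */
    unsigned char *tmp = malloc(row_bytes);
    for (int top = 0, bottom = height - 1; top < bottom; top++, bottom--) {
        memcpy(tmp, pixels + (size_t)top * row_bytes, row_bytes);
        memcpy(pixels + (size_t)top * row_bytes,
               pixels + (size_t)bottom * row_bytes, row_bytes);
        memcpy(pixels + (size_t)bottom * row_bytes, tmp, row_bytes);
    }
    free(tmp);
}
```

Doing the flip in native code before wrapping the buffer avoids a second copy on the Kotlin side.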
I am trying to create an HBITMAP from a pixel buffer and display it. Here is my code to create the HBITMAP:
char buffer[640 * 480 * 3];
memset(buffer, 255, 640 * 480 * 3);
BITMAPINFO bm = { sizeof(BITMAPINFOHEADER),
640,
480, 1, 24,
BI_RGB, 640 * 480 * 3, 0, 0, 0, 0 };
HBITMAP imageBmp = CreateDIBSection(hdc, &bm, DIB_RGB_COLORS, (void**)buffer, 0, 0);
if (imageBmp == NULL) {
DWORD lastError = GetLastError();
return;
}
Here is the code to display it:
HDC imageDC = CreateCompatibleDC(NULL); // create an offscreen DC
SelectObject(imageDC, imageBmp); // put the loaded image into our DC
RECT rect;
GetClientRect(hDlg, &rect);
BitBlt(
hdc, // tell it we want to draw to the screen
0, 0, // as position 0,0 (upper-left corner)
rect.right - rect.left, // width of the rect to draw
rect.bottom - rect.top, // height of the rect
imageDC, // the DC to get the rect from (our image DC)
0, 0, // take it from position 0,0 in the image DC
SRCCOPY // tell it to do a pixel-by-pixel copy
);
I am expecting to see a white image, but what I get is a black window. I am pretty sure my display code is correct, but I do not know what is wrong with the code that creates the HBITMAP.
CreateDIBSection already returns an allocated buffer through the ppvBits argument to you, so it overwrites your buffer variable. From the docs (emphasis mine):
ppvBits A pointer to a variable that receives a pointer to the
location of the DIB bit values.
Fixes required to your code:
Remove code to create an array.
Pass the address of a pointer for the ppvBits parameter.
Set the pixels only after a successful call to CreateDIBSection.
char* buffer = NULL;
BITMAPINFO bm = { sizeof(BITMAPINFOHEADER),
640,
480, 1, 24,
BI_RGB, 640 * 480 * 3, 0, 0, 0, 0 };
HBITMAP imageBmp = CreateDIBSection(hdc, &bm, DIB_RGB_COLORS, (void**) &buffer, 0, 0);
if (imageBmp == NULL) {
DWORD lastError = GetLastError();
return;
}
memset(buffer, 255, 640 * 480 * 3);
Note:
Make sure that in production code, you properly calculate the size by aligning the bitmap width to the next DWORD boundary, as described by the article "DIBs and Their Use":
Calculating the size of a bitmap is not difficult:
biSizeImage = ((((biWidth * biBitCount) + 31) & ~31) >> 3) * biHeight
The crazy roundoffs and shifts account for the bitmap being
DWORD-aligned at the end of every scanline.
In your sample, 640 * 480 * 3 gives the correct result only because a row of 640 * 3 = 1920 bytes is already DWORD-aligned. For a width of 641, your formula would fail, while the formula cited from the article would still give the correct result.
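The cited formula is easy to sanity-check; here is a small stand-alone illustration for 24-bit bitmaps (the helper name dib_size is made up for this sketch):

```c
/* DWORD-aligned image size in bytes for a bottom-up DIB,
 * using the formula cited from "DIBs and Their Use". */
static long dib_size(long width, long height, long bit_count)
{
    return ((((width * bit_count) + 31) & ~31) >> 3) * height;
}
/* dib_size(640, 480, 24) == 921600  -- same as 640 * 480 * 3        */
/* dib_size(641, 480, 24) == 923520  -- each row padded to 1924 bytes */
```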
I need to import a PNG and display it on screen in a Motif application. For reasons best known to myself, I don't want to use any more libraries than I need to, and I'd like to stick with just Motif and pnglib.
I've been battling with this for a couple of days now, and I'd like to put aside my pride and ask for some help. This screenshot shows the problem:
https://s3.amazonaws.com/gtrebol264929/pnglib_fail.png
The window on the right shows what the image should look like, the window on the left is my Motif application showing what it looks like in my app. Clearly I've got the image data OK, as the basic concept of the picture can be seen. But also clearly I've messed up how I get the pixel data from pnglib into an XImage. Below is my code:
char * xdata = malloc(width * height * (channels + 1));
memset(xdata, 100, width * height * channels);

int colc = 0;
int bytec = 0;
while (colc < width) {
    int rowc = 0;
    while (rowc < height) {
        png_byte * row = png.row_pointers[rowc];
        memcpy(&xdata[bytec], &row[colc], 1);
        bytec += 4;
        rowc += 1;
    }
    colc += 1;
}
XImage * img = XCreateImage(display, CopyFromParent, depth * channels, ZPixmap, 0, xdata, width, height, 32, bytes_per_line);
printf("PNG %ix%i (depth: %i x %i) img: %p\n",width,height,depth,channels,img);
XPutImage (display, win, gc, img, 0, 0, 0, 0, width, height); // 0, 0, 0, 0 are src x,y and dst x,y
png.row_pointers is the pixel data from pnglib.
I'm pretty sure I've just misunderstood how the pixel data is stored, but I can't quite work out what I've done wrong. Any help is very much appreciated.
All the best
Garry
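For what it's worth, one likely issue in the loop above is that it copies a single byte per pixel and walks the buffer column-by-column, while XImage expects row-major data with all bytes of each pixel. A minimal sketch of a row-major copy in plain C, assuming 4-byte-per-pixel rows from pnglib (the channel order may still need swapping to match the server's 32-bit ZPixmap layout):

```c
#include <string.h>

/* Copy libpng row pointers (4 bytes per pixel) into one contiguous
 * row-major buffer suitable for a 32-bit ZPixmap XImage.
 * Note: many X servers expect BGRX order, so a per-pixel channel
 * swap may still be needed on top of this. */
static void copy_rows(unsigned char *xdata, unsigned char **row_pointers,
                      int width, int height)
{
    for (int row = 0; row < height; row++)
        memcpy(xdata + (size_t)row * width * 4, row_pointers[row],
               (size_t)width * 4);
}
```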
I am creating a monochrome image with the following code:
CGColorSpaceRef cgColorSpace = CGColorSpaceCreateDeviceGray();
CGImageRef cgImage = CGImageCreate( width, height, 1, 1, rowBytes, cgColorSpace, 0, dataProvider, decodeValues, NO, kCGRenderingIntentDefault );
where decodeValues is an array of two CGFloats, equal to {0, 1}. This gives me a fine image, but apparently my data (which comes from a PDF image mask) is black-on-white instead of white-on-black. To invert the image, I tried setting decodeValues to {1, 0}, but this did not change anything at all. In fact, whatever nonsensical values I put into decodeValues, I get the same image.
Why is decodeValues ignored here? How do I invert black and white?
Here's some code for creating and drawing a mono image. It's the same as yours but with more context (and without the necessary cleanup):
size_t width = 200;
size_t height = 200;
size_t bitsPerComponent = 1;
size_t componentsPerPixel = 1;
size_t bitsPerPixel = bitsPerComponent * componentsPerPixel;
size_t bytesPerRow = (width * bitsPerPixel + 7)/8;
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
CGBitmapInfo bitmapInfo = kCGImageAlphaNone;
CGFloat decode[] = {0.0, 1.0};
size_t dataLength = bytesPerRow * height;
uint8_t *bitmap = malloc( dataLength );
memset( bitmap, 255, dataLength );
CGDataProviderRef dataProvider = CGDataProviderCreateWithData( NULL, bitmap, dataLength, NULL);
CGImageRef cgImage = CGImageCreate (
width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorspace,
bitmapInfo,
dataProvider,
decode,
false,
kCGRenderingIntentDefault
);
CGRect destRect = CGRectMake(0, 0, width, height);
CGContextDrawImage( context, destRect, cgImage );
If I change the decode array to CGFloat decode[] = {0.0, 0.0}; I always get a black image.
If you have tried that and it didn't have any effect (you say you get the same image whatever values you use), then either you aren't actually passing in the values you think you are, or you aren't actually examining the output of CGImageCreate.