Compose a new 32-bit bitmap with alpha channel from other 32-bit bitmaps - winapi

I have several 32-bit bitmaps with an alpha channel.
I need to compose a new bitmap that again has an alpha channel, so the final bitmap can later be used with AlphaBlend.
There is no need for stretching. If there were no alpha channel, I would just use BitBlt to create the new bitmap.
I am not using managed code; I just want to do this with standard GDI / WinAPI functions. I am also interested in a solution that does not require any special libraries.
TIA
Note: I know that I could use several AlphaBlend calls to do the same composition in the final output. But for ease of use in my program I would prefer to compose such a bitmap once.

You can go through every pixel and compose them manually:
void ComposeBitmaps(BITMAP* bitmaps, int bitmapCount, BITMAP& outputBitmap)
{
    for(int y = 0; y < outputBitmap.bmHeight; ++y)
    {
        for(int x = 0; x < outputBitmap.bmWidth; ++x)
        {
            // Accumulate alpha-weighted colour sums across all input bitmaps
            int b = 0, g = 0, r = 0, a = 0;
            for(int i = 0; i < bitmapCount; ++i)
            {
                unsigned char* samplePtr = (unsigned char*)bitmaps[i].bmBits + (y*outputBitmap.bmWidth + x)*4;
                b += samplePtr[0]*samplePtr[3];
                g += samplePtr[1]*samplePtr[3];
                r += samplePtr[2]*samplePtr[3];
                a += samplePtr[3];
            }
            unsigned char* outputSamplePtr = (unsigned char*)outputBitmap.bmBits + (y*outputBitmap.bmWidth + x)*4;
            if(a > 0)
            {
                // Alpha-weighted average of the colours, plain average of the alphas
                outputSamplePtr[0] = b/a;
                outputSamplePtr[1] = g/a;
                outputSamplePtr[2] = r/a;
                outputSamplePtr[3] = a/bitmapCount;
            }
            else
                outputSamplePtr[3] = 0; // fully transparent; colour is irrelevant
        }
    }
}
(Assuming all bitmaps are 32-bit and have the same width and height)
Or, if you want to draw bitmaps one on top of another, rather than mix them in equal proportions:
unsigned char* outputSamplePtr = (unsigned char*)outputBitmap.bmBits + (y*outputBitmap.bmWidth + x)*4;
outputSamplePtr[0] = outputSamplePtr[1] = outputSamplePtr[2] = 0;
outputSamplePtr[3] = 0;
for(int i = 0; i < bitmapCount; ++i)
{
    unsigned char* samplePtr = (unsigned char*)bitmaps[i].bmBits + (y*outputBitmap.bmWidth + x)*4;
    int srcA = samplePtr[3];
    int dstA = outputSamplePtr[3];
    // Standard "over" operator with straight (non-premultiplied) alpha:
    // outA = srcA + dstA*(1-srcA), outC = (srcC*srcA + dstC*dstA*(1-srcA)) / outA
    int outA = srcA + dstA*(255 - srcA)/255;
    if(outA > 0)
    {
        outputSamplePtr[0] = (samplePtr[0]*srcA + outputSamplePtr[0]*dstA*(255 - srcA)/255)/outA;
        outputSamplePtr[1] = (samplePtr[1]*srcA + outputSamplePtr[1]*dstA*(255 - srcA)/255)/outA;
        outputSamplePtr[2] = (samplePtr[2]*srcA + outputSamplePtr[2]*dstA*(255 - srcA)/255)/outA;
    }
    outputSamplePtr[3] = outA;
}
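Either way, the composed bitmap is then drawn with AlphaBlend. A minimal usage sketch; note that with AC_SRC_ALPHA the function expects the colour channels premultiplied by alpha, so premultiply the straight-alpha output above first (hdcDest, hdcSrc, width and height are placeholders for your DCs and sizes):
BLENDFUNCTION bf;
bf.BlendOp = AC_SRC_OVER;
bf.BlendFlags = 0;
bf.SourceConstantAlpha = 255;  // use per-pixel alpha only
bf.AlphaFormat = AC_SRC_ALPHA; // source carries premultiplied per-pixel alpha
AlphaBlend(hdcDest, 0, 0, width, height,
           hdcSrc, 0, 0, width, height, bf);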

I found the following solution that fits best for me.
1. I create a new target bitmap with CreateDIBSection.
2. I prefill the new bitmap with fully transparent pixels (FillMemory/ZeroMemory).
3. I retrieve the pixels that need to be copied with GetDIBits. If the widths allow it, I copy whole rows directly into the buffer created in step 1; otherwise I copy the data row by row into that buffer.
The resulting bitmap can be used with AlphaBlend and in CImageList objects.
Because the bitmaps don't overlap, I don't need to take care of the target data.
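A minimal sketch of those steps, assuming 32-bit sources of the same width stacked one below the other; hbmSrc, srcHeight, yOffset and the sizes are placeholders, and error handling is omitted:
// Step 1: create the target DIB section (32bpp, top-down)
BITMAPINFO bi = {};
bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bi.bmiHeader.biWidth       = width;
bi.bmiHeader.biHeight      = -height;        // negative height = top-down layout
bi.bmiHeader.biPlanes      = 1;
bi.bmiHeader.biBitCount    = 32;
bi.bmiHeader.biCompression = BI_RGB;
void* bits = NULL;
HBITMAP hbmTarget = CreateDIBSection(NULL, &bi, DIB_RGB_COLORS, &bits, NULL, 0);

// Step 2: prefill with fully transparent pixels
ZeroMemory(bits, (size_t)width * height * 4);

// Step 3: fetch the source pixels; when the source has the same width,
// whole rows can be written directly at the right vertical offset.
bi.bmiHeader.biHeight = -srcHeight;
HDC hdc = GetDC(NULL);
GetDIBits(hdc, hbmSrc, 0, srcHeight,
          (BYTE*)bits + (size_t)yOffset * width * 4, &bi, DIB_RGB_COLORS);
ReleaseDC(NULL, hdc);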

Related

Segmentation Fault accessing qpscale_table in AVFrame

I'm modifying this file slightly: https://gist.github.com/yohhoy/f0444d3fc47f2bb2d0e2
This code decodes a video and makes OpenCV Mats out of the frame pixels as it goes.
In particular, I only want to grab frames that have specific macroblock-related data. I'm attempting to get that data like this:
total_qp = get_total_qp(decframe->qscale_table, mb_width, mb_height, mb_stride);
However, whenever I try to access the data by iterating over that array, I get a segmentation fault:
static float get_total_qp(int8_t *qscale_table, int mb_width, int mb_height, int mb_stride)
{
    int mb_count = mb_height * mb_width;
    int y, x;
    float qp_total = 0.0f;
    for (y = 0; y < mb_height; y++) {
        for (x = 0; x < mb_width; x++) {
            qp_total += qscale_table[x + y * mb_stride]; // <-- SEGFAULT here
        }
    }
    return qp_total;
}
I've also tried sending in:
frame->qscale_table
and I've tried populating it, but this won't compile because it can't find that function:
int8_t *qscale_table = av_frame_get_qp_table(decframe->qscale_table, &mb_stride, &qscale_type);
So my question is this:
Given an AVFrame* how do I ensure that the qscale_table is populated and access it?
It turns out that the qscale_table doesn't get exported onto the decoded frame after decoding happens in h264dec.c.
In order to retrieve the values, I had to modify the finalize_frame method in h264dec to export the qscale_table onto the frame, like so:
static int h264_export_qp_table(H264Context *h, AVFrame *f, H264Picture *p, int qp_type)
{
    AVBufferRef *ref = av_buffer_ref(p->qscale_table_buf);
    int offset = 2*h->mb_stride + 1;
    if (!ref)
        return AVERROR(ENOMEM);
    av_assert0(ref->size >= offset + h->mb_stride * ((f->height+15)/16));
    ref->size -= offset;
    ref->data += offset;
    return av_frame_set_qp_table(f, ref, h->mb_stride, qp_type); // pass the supplied qp_type
}
and add in the call into finalize_frame:
...
if (CONFIG_MPEGVIDEO) {
    ff_print_debug_info2(h->avctx, dst, NULL,
                         out->mb_type,
                         out->qscale_table,
                         out->motion_val,
                         NULL,
                         h->mb_width, h->mb_height, h->mb_stride, 1);
    // NT: make the qscale_table accessible!
    h264_export_qp_table(h, dst, out, FF_QSCALE_TYPE_H264);
}
...
And then recompile FFmpeg using these instructions: https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
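With that patch in place and FFmpeg recompiled, the table can be read back on the application side. A short sketch, assuming decframe is the decoded AVFrame (this uses the av_frame_get_qp_table accessor the question mentions, called with the frame itself rather than the table pointer):
// Sketch: after decoding, fetch the exported QP table from the frame.
int mb_stride, qscale_type;
int8_t *qscale_table = av_frame_get_qp_table(decframe, &mb_stride, &qscale_type);
if (qscale_table)
    total_qp = get_total_qp(qscale_table, mb_width, mb_height, mb_stride);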

Adaptive background subtraction with motionless objects in processing

I would like a background subtraction method for outdoor conditions that gradually adjusts itself to light variations in the environment, but is still capable of revealing a presence even when it is not moving.
The problem with the adaptive OpenCV background subtraction methods is that they can only detect a presence while it is moving. On the other hand, the old static background subtraction methods do not work when the lighting conditions change.
To get this, I've modified Golan Levin's method from the Processing video library (current frames are compared against an initial reference frame), setting a fairly low difference threshold.
I then assume that all changes above that threshold are due to a presence (persons, animals, etc.), while changes below it are due to gradual light variations, so I write those pixels into the background's pixel array.
/* auto-updating background part*/
diferencia = diffR+diffG+diffB;
if (diferencia<minDif) backgroundPixels[i]=video.pixels[i];
That's not working satisfactorily: the image gets dirty, far from homogeneous. Any idea of how to achieve this would be extremely welcome.
I post the whole code below, in case it helps. Thanks a lot for your time.
import processing.video.*;

int numPixels;
int[] backgroundPixels;
Capture video;
int camSel=0;
int topDiff=763;
int unbralDif=120;
int mindDif=20;
boolean subtraction, lowSubtr;
PGraphics _tempPG;

void setup() {
  size(640, 480);
  _tempPG=createGraphics(width, height);
  if (camSel==0) video = new Capture(this, width, height);
  else video = new Capture(this, width, height, Capture.list()[1]);
  video.start();
  numPixels = video.width * video.height;
  backgroundPixels = new int[numPixels];
  loadPixels();
}

void draw() {
  if (video.available()) {
    video.read();
    video.loadPixels();
    int presenceSum = 0;
    for (int i = 0; i < numPixels; i++) {
      color currColor = video.pixels[i];
      color bkgdColor = backgroundPixels[i];
      int currR = (currColor >> 16) & 0xFF;
      int currG = (currColor >> 8) & 0xFF;
      int currB = currColor & 0xFF;
      int bkgdR = (bkgdColor >> 16) & 0xFF;
      int bkgdG = (bkgdColor >> 8) & 0xFF;
      int bkgdB = bkgdColor & 0xFF;
      int diffR = abs(currR - bkgdR);
      int diffG = abs(currG - bkgdG);
      int diffB = abs(currB - bkgdB);
      presenceSum += diffR + diffG + diffB;
      pixels[i] = 0xFF000000 | (diffR << 16) | (diffG << 8) | diffB;
      /* auto-updating background part */
      int diferencia = diffR+diffG+diffB;
      // detect pixels that changed below the threshold
      if (lowSubtr && diferencia<mindDif) {
        /* substitute them into the background image array */
        backgroundPixels[i]=video.pixels[i];
      }
      /* end auto-updating background part */
    }
    updatePixels();
  }
  subtraction=false;
}

void keyPressed() {
  if (keyPressed) startSubtr();
}

void startSubtr() {
  arraycopy(video.pixels, backgroundPixels);
  lowSubtr=true;
}

void actualizacion(int[] _srcArr, int[] _inputArr, int _ind) {
  for (int i=0; i<_srcArr.length; i++) {
    _srcArr[_ind]=_inputArr[i];
  }
}
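In case an OpenCV route is acceptable, here is a minimal C++ sketch of the same selective-update scheme described above; blending with a small factor instead of overwriting pixels outright may help with the "dirty", non-homogeneous background. The threshold and alpha values are arbitrary placeholders, and this is only a sketch, not a tested fix:
#include <opencv2/opencv.hpp>

// Update the background only where the change is small, and blend instead of
// overwriting outright so sensor noise doesn't pollute the background model.
void updateBackground(const cv::Mat& frame, cv::Mat& background,
                      int threshold /* e.g. 20 */, double alpha /* e.g. 0.05 */)
{
    cv::Mat diff, gray, mask;
    cv::absdiff(frame, background, diff);
    cv::cvtColor(diff, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, mask, threshold, 255, cv::THRESH_BINARY_INV); // small changes only
    cv::Mat blended;
    cv::addWeighted(frame, alpha, background, 1.0 - alpha, 0.0, blended);
    blended.copyTo(background, mask); // selective, gradual update
}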

How to access the intensity of all the pixels of an image in OpenCV C++

For accessing a single point, I am using this line of code, and it works:
int intensity = gray_image.at<uchar>(Point(100, 100));
However, when I use this code to access all the pixels in the image, it gives a memory error:
for (int i = 0; i < gray_image.rows; i++)
{
    for (int j = 0; j < gray_image.cols; j++) {
        intensity += gray_image.at<uchar>(Point(i, j));
    }
}
When I run the above code it does not give a compile-time error, but it throws a memory exception at runtime. Where am I going wrong?
You can just skip the use of Point and do the following.
for (int i = 0; i < gray_image.rows; i++)
{
    for (int j = 0; j < gray_image.cols; j++) {
        intensity += gray_image.at<uchar>(i, j);
    }
}
You're requesting a pixel (j,i) that doesn't exist. This wouldn't have been an error in a square image (where the number of rows = number of columns), but you're using a rectangular image.
The Mat::at function has multiple prototypes, the two that you're concerned with are:
C++: template<typename T> T& Mat::at(int i, int j)
C++: template<typename T> T& Mat::at(Point pt)
The documentation for Mat::at states that Point pt is defined as the Element position specified as Point(j,i), so you've effectively swapped your rows and columns.
The reason this happens is that the image is stored as a single 1D array of pixels, and the element at row r, column c is fetched from index p = r * image.cols + c;
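If you are summing every pixel anyway, the row-pointer form is also worth knowing; a short sketch using cv::Mat::ptr (note the wider accumulator, since a plain int can overflow on large images):
// Sum all intensities using row pointers instead of at<>()
long long intensity = 0;
for (int r = 0; r < gray_image.rows; r++)
{
    const uchar* row = gray_image.ptr<uchar>(r); // pointer to the start of row r
    for (int c = 0; c < gray_image.cols; c++)
        intensity += row[c];
}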

Simple image processing algorithm causes Processing to freeze

I've written an algorithm in Processing to do the following:
1. Instantiate a 94 x 2 int array
2. Load a jpg image of dimensions 500 x 500 pixels
3. Iterate over every pixel in the image and determine whether it is black or white then change a variable related to the array
4. Print the contents of the array
For some reason this algorithm freezes immediately. Print statements show that it freezes before even attempting to load the image. This is especially confusing because I have written another very similar algorithm that runs without complications: it reads an image, averages the color of each tile of a specified size, and then draws rectangles over each averaged region in its average color, effectively pixelating the image. Both algorithms load an image and examine each of its pixels; the one in question mainly differs in that it doesn't draw anything. I would have blamed the int array, but the pixelation algorithm holds all of its colors in a color array, which should take up far more space.
From looking in my Mac's Console.app I saw this error at first: "java.lang.OutOfMemoryError: GC overhead limit exceeded". Following suggestions from the web I bumped the memory allocation from 256 MB to 4000 MB (this felt meaningless, since my analysis said the two algorithms should have the same complexity, but I tried anyway). That did not stop the freezing but changed the error to a combination of "JavaNativeFoundation error occurred obtaining Java exception description" and "java.lang.OutOfMemoryError: Java heap space".
Then I tried pointing processing to my local jdk with the hope of utilizing the 64 bit jdk over processing's built in 32 bit jdk. From within Processing.app/Contents I executed the following commands:
mv Java java-old
ln -s /Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk Java
Processing would not start after this attempt with the following error populating my console:
"com.apple.xpc.launchd[1]: (org.processing.app.160672[13559]) Service exited with abnormal code: 1"
Below is my code:
First, the non-working algorithm:
int squareSize=50;
int numRows = 10;
int numCols = 10;
PFont myFont;
PImage img;
//33-126

void setup(){
  size(500,500);
  count();
}

void count(){
  ellipseMode(RADIUS);
  int[][] asciiArea = new int[94][2];
  println("hello?");
  img=loadImage("countingPicture.jpg");
  println("image loaded");
  for(int i=0; i<(500/squareSize); i++){
    for(int j=0; j<(500/squareSize); j++){
      int currentValue=i+j*numCols;
      if(currentValue+33>126){
        break;
      }
      println(i+", "+j);
      asciiArea[currentValue][0]=currentValue+33;
      asciiArea[currentValue][1]=determineTextArea(i,j,squareSize);
      //fill(color(255,0,0));
      //ellipse(i*squareSize,j*squareSize,3,3);
    }
  }
  println("done calculating");
  displayArrayContents(asciiArea);
}

int determineTextArea(int i, int j, int squareSize){
  int textArea = 0;
  double n=0.0;
  while(n < squareSize*squareSize){
    n+=1.0;
    int xOffset = (int)(n%((double)squareSize));
    int yOffset = (int)(n/((double)squareSize));
    color c = img.get(i*squareSize+xOffset, j*squareSize+yOffset);
    if(red(c)!=255 || green(c)!=255 || blue(c)!=255){
      println(red(c)+" "+green(c)+" "+blue(c));
      textArea++;
    }
  }
  return textArea;
}

void displayArrayContents(int[][] arr){
  int i=0;
  println("\n now arrays");
  while(i<94){
    println(arr[i][0]+" "+arr[i][1]);
  }
}
The pixelation algorithm that works:
PImage img;
int direction = 1;
float signal;
int squareSize = 5;
int wideness = 500;
int highness = 420;
int xDimension = wideness/squareSize;
int yDimension = highness/squareSize;

void setup() {
  size(1500, 420);
  noFill();
  stroke(255);
  frameRate(30);
  img = loadImage("imageIn.jpg");
  color[][] colors = new color[xDimension][yDimension];
  for(int drawingNo=0; drawingNo < 3; drawingNo++){
    for(int i=0; i<xDimension; i++){
      for(int j=0; j<yDimension; j++){
        double average = 0;
        double n=0.0;
        while(n < squareSize*squareSize){
          n+=1.0;
          int xOffset = (int)(n%((double)squareSize));
          int yOffset = (int)(n/((double)squareSize));
          color c = img.get(i*squareSize+xOffset, j*squareSize+yOffset);
          float cube = red(c)*red(c) + green(c)*green(c) + blue(c)*blue(c);
          double grayValue = (int)(sqrt(cube)*(255.0/441.0));
          double nAsDouble = (double)n;
          average=(grayValue + (n-1.0)*average)/n;
          average=(grayValue/n)+((n-1.0)/(n))*average;
        }
        //average=discretize(average);
        println(i+" "+j+" "+average);
        colors[i][j]=color((int)average);
        fill(colors[i][j]);
        if(drawingNo==0){ //stroke(colors[i][j]); }
          stroke(210);}
        if(drawingNo==1){ stroke(150); }
        if(drawingNo==2){ stroke(90); }
        //stroke(colors[i][j]);
        rect(drawingNo*wideness+i*squareSize,j*squareSize,squareSize,squareSize);
      }
    }
  }
  save("imageOut.jpg");
}
You're entering an infinite loop, which makes the println() statements unreliable. Fix the infinite loop, and your print statements will work again.
Look at this while loop:
while(i<94){
  println(arr[i][0]+" "+arr[i][1]);
}
When will i ever become >= 94?
You never increment i, so its value is always 0. You can prove this by adding a println() statement inside the while loop:
while(i<94){
  println("i: " + i);
  println(arr[i][0]+" "+arr[i][1]);
}
You probably wanted to increment i inside the while loop. Or just use a for loop instead.

how to create a new QImage from an array of floats

I have an array of floats that represents an image (column-first).
I want to show the image on a QGraphicsScene as a QPixmap. To do that, I tried to create a new image from my array with the QImage constructor QImage(const uchar * data, int width, int height, Format format).
I first created a new unsigned char array, cast every value from my original array to unsigned char, and then tried to create a new image with the following code:
unsigned char * data = new unsigned char[fres.length()];
for (int i = 0; i < fres.length(); i++)
    data[i] = char(fres.dataPtr()[i]);
bcg = new QImage(data, fres.cols(), fres.rows(), 1, QImage::Format_Mono);
The problem is when I try to access the information in the following way:
bcg->pixel(i,j);
I get only the value 12345.
How can I create a viewable image from my array?
Thanks
There are two problems here.
One, casting a float to a char simply truncates the float, so 0.3 and 0.9 both become 0. For a range of 0..1, the char will only ever contain 0 or 1.
To give the char the full range, use a multiply:
data[i] = (unsigned char)(fres.dataPtr()[i] * 255);
(Also, your cast was incorrect.)
The other problem is that your QImage::Format is incorrect; Format_Mono expects 1BPP bit-packed data, not the 8BPP you're providing. There are two ways to fix this issue:
// Build a grayscale colour table and use 8-bit indexed data
QByteArray data(fres.length(), 0);
for (int i = 0; i < fres.length(); ++i) {
    data[i] = (unsigned char)(fres.dataPtr()[i] * 255);
}
QVector<QRgb> grayscale;
for (int i = 0; i < 256; ++i) {
    grayscale.append(qRgb(i, i, i));
}
// Note: QImage does not copy the buffer, so 'data' must outlive the image.
QImage image((const uchar*)data.constData(), fres.cols(), fres.rows(),
             fres.cols(), QImage::Format_Indexed8); // bytesPerLine passed explicitly
image.setColorTable(grayscale);
// Use RGBA directly
QByteArray data(fres.length() * 4, 0);
for (int i = 0, j = 0; i < fres.length(); ++i, j += 4) {
    data[j] = data[j + 1] = data[j + 2] =   // B, G, R (all equal for gray)
        (unsigned char)(fres.dataPtr()[i] * 255);
    data[j + 3] = (char)0xFF;               // alpha byte, fully opaque
}
QImage image((const uchar*)data.constData(), fres.cols(), fres.rows(),
             QImage::Format_ARGB32_Premultiplied);
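Either way, the image can then be shown on the scene. A minimal usage sketch, assuming scene points to your QGraphicsScene:
// QPixmap::fromImage() makes a deep copy, so the data buffer
// only needs to stay alive until this call.
QGraphicsPixmapItem *item = scene->addPixmap(QPixmap::fromImage(image));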
