Image memory usage

I have an image that is 26.4 KB. It is loaded by the Frame class below. Why does the Flex profiler show a usage of 1388 KB for this instance of Frame?
public class Frame extends Group
{
    public function Frame(source:Object) // image of 26.4 KB
    {
        var image:BitmapImage = new BitmapImage();
        image.smooth = true;
        image.source = source;
        this.addElement(image);
    }
}

A BitmapImage is essentially an uncompressed rectangular array of the bytes that determine the pixel colors.
I imagine your input file is a JPG/JPEG, PNG, or GIF (in other words, it's compressed).
Imagine an image 100 px by 100 px with 32-bit RGBA colors (Red/Green/Blue/Alpha).
The memory requirement for this BitmapImage would be in the neighborhood of 100 * 100 * (32 / 8) bytes (X * Y * bytesPerPixel) = 40,000 bytes, roughly 40 KB. But that same image as a JPG might compress down to 3 KB or so (likewise for GIF, PNG, etc.).
It has to be stored as a bitmap at some point so that it can be copied (blitted) to video memory for display. Perhaps Flex has alternative image storage types you could try?
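To make the arithmetic concrete, here is a minimal C++ sketch of the same back-of-the-envelope calculation (the 4 bytes per pixel and the rough 596x596 estimate are assumptions about the decoded bitmap, not something the profiler reports):

#include <cstdio>

int main() {
    // Uncompressed 32-bit RGBA: 4 bytes per pixel. Object overhead of the
    // surrounding Frame/BitmapImage instances is not counted here.
    const long width = 100, height = 100, bytesPerPixel = 4;
    const long uncompressed = width * height * bytesPerPixel;
    std::printf("100x100 RGBA: %ld bytes (~%ld KB)\n", uncompressed, uncompressed / 1024);

    // Working backwards: 1388 KB at 4 bytes per pixel is roughly 355,000 pixels,
    // i.e. a decoded image somewhere around 596x596.
    const long profiledBytes = 1388L * 1024;
    std::printf("1388 KB holds about %ld RGBA pixels\n", profiledBytes / bytesPerPixel);
    return 0;
}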

Related

How do I read an image from RAM into a consistent format?

In C++ I am used to using stb_image to load image data to and from RAM.
I am writing a rust program where I have loaded some PNGs and JPEGs as raw binary data.
I am trying to use the image crate to read and decompress the raw byte data into pixel data and metadata (i.e. image dimensions and raw pixel bytes). I then need to re-compress the data as a PNG and write it to disk, to make sure the data is OK (I will use the raw buffers later on).
To that effect I have this
let image_data = image::load_from_memory(bytes).unwrap();
where bytes is the raw image data.
Problem #1: this seems to create a 3-channel image for JPEGs, but I need a 4-channel image for both PNGs and JPEGs, so for JPEGs I need the image crate to add padding. But if I try to cast it using as_rgba8, I can no longer get the width and height of the image.
Then I am trying to read the data into a custom struct, like this:
let mut raw_data = Vec::<u8>::new();
let width = image_data.width() as usize;
let height = image_data.height() as usize;
raw_data.extend_from_slice(image_data.as_bytes());
println!("{}", image_data.width());
println!("{}", image_data.height());
println!("{}", image_data.height() * image_data.width() * 3);
println!("{}", raw_data.len());
let texture = Texture {
    width,
    height,
    channel_num: 4,
    format: ImageFormat::RGBA8,
    data: raw_data,
};
This part seems to work. Next, I am trying to re-compress the data and write it to disk:
let tmp = RgbaImage::from_raw(
texture.width as u32,
texture.height as u32,
texture.data.as_bytes().to_vec()).unwrap();
tmp.save("tmp.png");
In this case I am getting a None error when attempting the unwrap. I don't understand why, since the byte buffer does have enough data to contain the full image; it was literally created from that image.
I am somewhat lost.
[...] this seems to create a 3 channel image for jpegs, I need a 4 channel image for both pngs and jpegs. [...] But if I try to cast it using as_rgba8 I can no longer get the width and the height of the image.
You want to convert the underlying image buffer, rather than cast the image. This is done with the to_* family of methods in DynamicImage. This works regardless of whether the dynamic image was obtained from a file or from memory (both open and load_from_memory return a DynamicImage).
use image::{RgbaImage, open}; // 0.24.3

let img = open("example.jpg")?;
println!(
    "Before: {}x{} {:?} ({} channels)",
    img.width(),
    img.height(),
    img.color(),
    img.color().channel_count()
);

let img: RgbaImage = img.to_rgba8();
println!(
    "After: {}x{} ({} channels)",
    img.width(),
    img.height(),
    img.sample_layout().channels
);
Note how the second img is already known at compile time to be an RGBA image. As an image buffer, you can freely retrieve any property or pixel data that you wish.
Possible output:
Before: 2864x2480 Rgb8 (3 channels)
After: 2864x2480 (4 channels)

Emgu CV Image width not multiple of 4

I am trying to do some image processing with Emgu CV.
I have the raw pixel data of a gray image in a byte[]. I know the size and type of the image.
I first create an image with the known size and type, and then I want to load the data into the image (I have often done this with OpenCV in C++).
Image<Gray, Byte> image = new Image<Gray, byte>(width, height);
image.Bytes = data;
But the image always ends up kind of "cut through and puzzled together". It works with an image whose width % 4 == 0. That is why I assume it is some kind of memory alignment issue.
Has anyone run into this problem and know how to fix it?
Thanks,
Sebastian
Both your raw data and Image.Bytes are contiguous pieces of memory. But every line of your array takes Width bytes, while every line of the Image takes Image.widthStep bytes (I am talking about the IplImage structure; I don't know about Emgu peculiarities), and that value is aligned to 4 bytes (99 => 100, etc.).
So you can either copy Width bytes of the Y-th line into Bytes[Y * widthStep] (height times), or create a special 4-byte-aligned array to collect the raw data and then assign it to Image.Bytes, as sketched below.
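A minimal sketch of that row-by-row copy, written in C++ for illustration (the Emgu types are replaced by plain buffers; the function name and the 8-bit gray assumption are mine, the 4-byte row alignment is the point):

#include <cstddef>
#include <cstring>
#include <vector>

// Copy a tightly packed 8-bit gray image (width bytes per row) into a
// buffer whose rows are padded to a 4-byte boundary (widthStep bytes per row).
std::vector<unsigned char> padRowsTo4Bytes(const unsigned char* src,
                                           std::size_t width,
                                           std::size_t height)
{
    const std::size_t widthStep = (width + 3) & ~std::size_t(3); // round up to a multiple of 4
    std::vector<unsigned char> dst(widthStep * height, 0);
    for (std::size_t y = 0; y < height; ++y) {
        // Only the first `width` bytes of each destination row carry pixels;
        // the remaining widthStep - width bytes are alignment padding.
        std::memcpy(&dst[y * widthStep], &src[y * width], width);
    }
    return dst;
}

The resulting buffer has the 4-byte-aligned row layout described above.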
Try using another constructor:
Image<Gray, Byte> image = new Image<Gray, byte>(width, height, bytesNum, data);
See also http://www.emgu.com/wiki/files/2.1.0.0/html/1d51c1ed-6f5e-65f9-206c-ef2d59c5c117.htm

GraphicsView fitInView() very pixelated result when downshrinking

I have searched everywhere and cannot find any solution after two days of trying.
The Problem:
I'm building an image viewer with a "Fit Image to View" feature. I load a picture of, say, 3000+ pixels into my GraphicsView (which is a lot smaller, of course); scrollbars appear, which is good. When I click my btnFitView, this is executed:
ui->graphicsView->fitInView(scene->sceneRect(),Qt::KeepAspectRatio);
This is downscaling, right? After fitInView(), all lines are pixelated. It looks like a saw went over the lines in the image.
For example: an image of a car has jagged lines, and the letters in an image of a textbook end up in very bad quality.
My code sample:
// select file, load image in view
QString strFilePath = QFileDialog::getOpenFileName(
    this,
    tr("Open File"),
    "/home",
    tr("Images (*.png *.jpg)"));
imageObject = new QImage();
imageObject->load(strFilePath);
image = QPixmap::fromImage(*imageObject);
scene = new QGraphicsScene(this);
scene->addPixmap(image);
scene->setSceneRect(image.rect());
ui->graphicsView->setScene(scene);
// on_btnFitView_Clicked():
ui->graphicsView->fitInView(scene->sceneRect(), Qt::KeepAspectRatio);
Just before fitInView(), sizes are:
qDebug()<<"sceneRect = "<< scene->sceneRect();
qDebug()<<"viewRect = " << ui->graphicsView->rect();
sceneRect = QRectF(0,0 1000x750)
viewRect = QRect(0,0 733x415)
If necessary, I can upload screenshots of the original loaded image and of the image fitted in the view.
Am I doing this right? It seems all examples on the web use fitInView for auto-fitting. Should I perhaps use some other operations on the pixmap?
SOLUTION
// LOAD IMAGE
bool ImgViewer::loadImage(const QString &strImagePath)
{
    m_image = new QImage(strImagePath);
    if(m_image->isNull()){
        return false;
    }
    clearView();
    m_pixmap = QPixmap::fromImage(*m_image);
    m_pixmapItem = m_scene->addPixmap(m_pixmap);
    m_scene->setSceneRect(m_pixmap.rect());
    this->centerOn(m_pixmapItem);
    // preserve fit-to-view if it is active
    if(m_IsFitInView)
        fitView();
    return true;
}
// TOGGLED FUNCTIONS
void ImgViewer::fitView()
{
    if(m_image->isNull())
        return;
    this->resetTransform();
    // scale a local copy (not the original pixmap), otherwise the image
    // gets blurred after scaling the same pixmap multiple times
    QPixmap px = m_pixmap;
    px = px.scaled(QSize(this->width(), this->height()),
                   Qt::KeepAspectRatio, Qt::SmoothTransformation);
    m_pixmapItem->setPixmap(px);
    m_scene->setSceneRect(px.rect());
}
void ImgViewer::originalSize()
{
    if(m_image->isNull())
        return;
    this->resetTransform();
    m_pixmap = m_pixmap.scaled(QSize(m_image->width(), m_image->height()), // m_image is a pointer, so use ->
                               Qt::KeepAspectRatio, Qt::SmoothTransformation);
    m_pixmapItem->setPixmap(m_pixmap);
    m_scene->setSceneRect(m_pixmap.rect());
    this->centerOn(m_pixmapItem); // ensure the item is centered in the view
}
On downscaling this produces good quality. Here are some stats after calling these two functions:
// "originalSize()" : IMAGE SIZE = (1152, 2048)
// "originalSize()" : PIXMAP SIZE = (1152, 2048)
// "originalSize()" : VIEW SIZE = (698, 499)
// "originalSize()" : SCENE SIZE = (1152, 2048)
// "fitView()" : IMAGE SIZE = (1152, 2048)
// "fitView()" : PIXMAP SIZE = (1152, 2048)
// "fitView()" : VIEW SIZE = (698, 499)
// "fitView()" : SCENE SIZE = (280, 499)
There is a problem now: after the call to fitView(), look at the size of the scene. It is much smaller.
And if fitView() is active and I then scale the image on the wheelEvent (zoom in/out) with the view's scale function, scale(factor, factor), it produces a terrible result.
This doesn't happen with originalSize(), where the scene size is equal to the image size.
Think of the view as a window into the scene.
Moving the view large amounts, either zooming in or out, will likely create images that don't look great. Rather than the image being scaled as you would expect, the view is just moving away from the scene and doing its best to render the image, but the image has not been scaled, just transformed in the scene.
Rather than using QGraphicsView::fitInView, keep the main image in memory and create a scaled version of the image with QPixmap::scaled each time FitInView is selected or the user zooms in or out. Then set this QPixmap on the QGraphicsPixmapItem with setPixmap.
You may also want to think about dropping the scroll bars and allowing the user to drag the image around the screen, which provides a better user interface in my opinion; though of course it depends on your requirements.
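Following that advice, a minimal sketch of a wheel-zoom handler that rescales the pixmap instead of calling QGraphicsView::scale() might look like this (the member names come from the solution code above; m_zoom is an assumed double member initialised to 1.0):

void ImgViewer::wheelEvent(QWheelEvent *event)
{
    // Zoom in or out by a fixed step depending on the wheel direction.
    const double step = (event->angleDelta().y() > 0) ? 1.25 : 0.8;
    m_zoom *= step;

    // Rescale a copy of the original pixmap so repeated zooms don't degrade it.
    QPixmap px = m_pixmap.scaled(m_pixmap.size() * m_zoom,
                                 Qt::KeepAspectRatio,
                                 Qt::SmoothTransformation);
    m_pixmapItem->setPixmap(px);
    m_scene->setSceneRect(px.rect());
    this->centerOn(m_pixmapItem);
}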

Reassembling a fragmented image

I have an image that has been broken into parts, 64 rows by 64 columns. Each part is 256x256 px. The images are all PNGs. They are named with the two indices in the filename, for example "Image-3-57.png". The row and column numbering starts from 0 rather than 1.
How can I assemble this back into one image? Ideally using BASH and standard tools (I'm a sysadmin), though PHP would be acceptable as well.
Well, it is not very complicated if you want to use PHP. What you need is just a few image functions - imagecreate and imagecopy. If your PNGs are semi-transparent, you will also need imagefilledrectangle to create a transparent background.
In the code below, I rely on the fact that all chunks are the same size, so the pixel size must be divisible by the number of chunks.
<?php
$width = 256*64;  // width of the big image, in pixels
$height = 256*64; // height of the big image, in pixels
$chunks_X = 64;   // number of chunks along X
$chunks_Y = 64;   // same for Y
$chunk_size_X = $width/$chunks_X; // compute the size of one chunk, needed for copying
$chunk_size_Y = $height/$chunks_Y;
$big = imagecreate($width, $height); // create the big one
for($y=0; $y<$chunks_Y; $y++) {
    for($x=0; $x<$chunks_X; $x++) {
        $chunk = imagecreatefrompng("Image-$x-$y.png");
        imagecopy($big, $chunk,
            $x*$chunk_size_X, // position where to place the little image
            $y*$chunk_size_Y,
            0,                // where to copy from on the little image
            0,
            $chunk_size_X,    // size of the copied area - the whole little image here
            $chunk_size_Y
        );
        imagedestroy($chunk); // don't forget to free memory
    }
}
imagepng($big, "Image-full.png"); // write the assembled image to disk (output name is arbitrary)
?>
This is just a draft. I'm not sure about all these x's and y's, as well as other details. It is late and I'm tired.

Prevent GDI+ PNG Encoder from adding Gamma information to a 1-bit PNG

I wonder if it is possible to instruct the Imaging PNG encoder not to add any gamma and chroma information to a 1-bit PNG.
I am creating a 2-color palette for the image:
ColorPalette* pal = (ColorPalette*)CoTaskMemAlloc(sizeof(ColorPalette) + 2 * sizeof(ARGB));
pal->Count = 2;
pal->Flags = 0;
pal->Entries[0] = MAKEARGB(0,0,0,0);
pal->Entries[1] = MAKEARGB(0,255,255,255);
if (FAILED(res = sink->SetPalette(pal))) {
    return res;
}
CoTaskMemFree(pal);
and then just
BitmapData bmData;
bmData.Height = bm.bmHeight;
bmData.Width = bm.bmWidth;
bmData.Scan0 = bm.bmBits;
bmData.PixelFormat = PixelFormat1bppIndexed;
UINT bitsPerLine = imageInfo.Width * bm.bmBitsPixel;
UINT bitAlignment = sizeof(LONG) * 8;
UINT bitStride = bitAlignment * (bitsPerLine / bitAlignment); // The image buffer is always padded to LONG boundaries
if ((bitsPerLine % bitAlignment) != 0) bitStride += bitAlignment; // Add a bit more for the leftover values
bmData.Stride = bitStride / 8;
if (FAILED(res = sink->PushPixelData(&rect, &bmData, TRUE))) {
    return res;
}
The resulting PNG image is way too large and contains the following unnecessary chunks:
sRGB, gAMA, cHRM
I was actually only expecting PLTE, not sRGB. How do I have to set up the encoder to skip the gamma and chroma calculations?
I'm also interested to know whether this is possible. I use GDI+ in a C++ program to generate PNGs for a website, and the PNGs have different colors than the CSS although I put in the exact same values. Removing the sRGB information could solve the gamma problem in most browsers.
I hope there is a solution for this!
I have resolved this by using the FreeImage library (http://freeimage.sourceforge.net/).
I create the bitmap with GDI+, lock its pixel data, create a FreeImage bitmap, lock it too, and copy the pixels.
Then I have FreeImage save it to a PNG and voilà: correct gamma information, good in every browser.
It's a little more overhead (although I have a feeling that FreeImage saves images much faster than GDI+, making the overall process even faster). But of course, you will need to ship an extra library and DLL with your project.
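For reference, a minimal sketch of that hand-off, assuming a 32-bpp ARGB GDI+ bitmap (the function name and the error-free flow are illustrative, not the poster's actual code):

#include <windows.h>
#include <gdiplus.h>
#include <FreeImage.h>
#include <cstring>

void SavePngViaFreeImage(Gdiplus::Bitmap& bmp, const char* path)
{
    const UINT w = bmp.GetWidth();
    const UINT h = bmp.GetHeight();

    // Lock the GDI+ pixel data for reading.
    Gdiplus::Rect rect(0, 0, w, h);
    Gdiplus::BitmapData data;
    bmp.LockBits(&rect, Gdiplus::ImageLockModeRead, PixelFormat32bppARGB, &data);

    // Create a matching 32-bpp FreeImage bitmap and copy it row by row.
    // FreeImage stores scanlines bottom-up, while LockBits hands them out top-down.
    FIBITMAP* dib = FreeImage_Allocate(w, h, 32);
    for (UINT y = 0; y < h; ++y) {
        const BYTE* srcRow = static_cast<const BYTE*>(data.Scan0) + y * data.Stride;
        BYTE* dstRow = FreeImage_GetScanLine(dib, h - 1 - y);
        std::memcpy(dstRow, srcRow, w * 4);
    }
    bmp.UnlockBits(&data);

    // Have FreeImage write the PNG; per the report above this avoids the
    // gamma/chroma issue seen with the GDI+ encoder.
    FreeImage_Save(FIF_PNG, dib, path, PNG_DEFAULT);
    FreeImage_Unload(dib);
}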
