Loading images takes my direct memory up | AS3 - performance

I'm only loading bitmaps, without even adding them to the stage, yet each image drives my direct memory up. Large images take even more memory, so I'm wondering how to keep direct memory low even after loading those bitmaps. Or maybe I'm doing something wrong here or missing something?
var myBitmapHolder:Bitmap;
var bitmapLoader:Loader = new Loader();
bitmapLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, bitmapLoaded);
bitmapLoader.load(new URLRequest("myBitmap.png"));

private function bitmapLoaded(e:Event):void {
    myBitmapHolder = e.currentTarget.content as Bitmap;
}
After loading the bitmap, I'm storing it in myBitmapHolder so I can access it on request. I'm using more than 30 bitmaps; each image is loaded separately, the same way as in the example above.

So ... there is no free resource: either you keep everything loaded in memory, or you load and then unload each bitmap (or a few at a time). That will 'eat' some other resource instead, like CPU and network traffic, etc.
First, remove the bitmapLoader's Event.COMPLETE listener inside the bitmapLoaded function:
bitmapLoader.contentLoaderInfo.removeEventListener(Event.COMPLETE, bitmapLoaded);
Make sure you load each bitmap only once. Look at these questions: AS3 - Memory management and What are good memory management techniques in Flash/AS3. You can also look at imageDecodingPolicy.
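A minimal sketch of the load-once / unload-when-done pattern described above, reusing the names from the question (calling BitmapData.dispose() assumes the bitmap is no longer on the display list):

private function bitmapLoaded(e:Event):void {
    // listen once: remove the listener as soon as the bitmap has loaded
    e.currentTarget.removeEventListener(Event.COMPLETE, bitmapLoaded);
    myBitmapHolder = e.currentTarget.content as Bitmap;
}

private function unloadBitmap():void {
    if (myBitmapHolder) {
        myBitmapHolder.bitmapData.dispose(); // frees the pixel buffer immediately
        myBitmapHolder = null;               // drop the reference so the GC can reclaim the rest
    }
}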

Related

How to avoid "InsufficientMemory" decoding error using Rust Image crate?

I am trying to read an 8K 32bit OpenEXR HDR file with Rust.
Using the Image crate to read the file:
use image::io::Reader as ImageReader;
let img = ImageReader::open(r"C:\Users\Marko\Desktop\HDR_Big.exr")
.expect("File Error")
.decode()
.expect("Decode ERROR");
This results in a decode error: Limits(LimitError { kind: InsufficientMemory })
Reading a 4K file or smaller works fine.
I thought buffering would help so I tried:
use image::io::Reader as ImageReader;
use std::io::BufReader;
use std::fs::File;
let f = File::open(r"C:\Users\Marko\Desktop\HDR_Big.exr").expect("File Error");
let reader = BufReader::new(f);
let img_reader = ImageReader::new(reader)
.with_guessed_format()
.expect("Reader Error");
let img = img_reader.decode().expect("Decode ERROR");
But the same error results.
Is this a problem with the image crate itself? Can it be avoided?
If it makes any difference for the solution after decoding the image I use the raw data like this:
let data: Vec<f32> = img.to_rgb32f().into_raw();
Thanks!
But the same error results. Is this a problem with the image crate itself? Can it be avoided?
No because it's not a problem and yes it can be avoided.
When an image library faces the open web, it's relatively easy to DoS the entire service or exhaust its memory cheaply, as it's usually possible to request huge images at a very low cost (for instance, a 44 KB PNG can decompress to a 1 GB full-color buffer, and a megabyte-scale JPEG can reach GB-scale buffer sizes).
As a result, modern image libraries tend to set limits by default in order to limit the "default" liability of their users.
That is the case for image-rs: by default it does not set any width or height limits, but it does request that the decoder limit its allocations to 512 MB.
If you wish for higher or no limitations, you can configure the decoder to match.
All of this is surfaced by simply searching for the error name and the library (both "InsufficientMemory image-rs" and "LimitError image-rs" surfaced the information).
By default, image::io::Reader asks the decoder to fit the decoding process in 512 MiB of memory, according to the documentation. It's possible to disable this limitation, using, e.g., Reader::no_limits.
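A minimal sketch of lifting that limit via the Reader::no_limits call mentioned above (the path and the f32 conversion are taken from the question):

use image::io::Reader as ImageReader;

fn main() {
    let mut reader = ImageReader::open(r"C:\Users\Marko\Desktop\HDR_Big.exr")
        .expect("File Error");
    // Remove the default 512 MiB allocation limit; the decode is then only
    // bounded by the memory actually available to the process.
    reader.no_limits();
    let img = reader.decode().expect("Decode ERROR");
    let data: Vec<f32> = img.to_rgb32f().into_raw();
    println!("decoded {} f32 samples", data.len());
}

Alternatively, reader.limits(...) lets you raise the limit to a custom but still bounded value instead of disabling it entirely.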

Memory keeps growing when I load a new DICOM file

Every time I upload a DICOM file, the memory grows. How can I free the memory used by the old ones?
You can see various examples of how to free the memory in the example code; for instance, on the loader:
let loader = new LoadersVolume();
loader.free(); // Free memory
loader = null;
Another one:
let stackHelper = new HelpersStack();
stackHelper.dispose(); // Free memory
stackHelper = null;
I suggest reading the following document to learn how garbage collection works in most browsers.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Memory_Management
The garbage collector deletes from the memory everything that is not referenced somewhere.
High memory usage even when you no longer use an object means there is still a reference to it somewhere. Look for any variable that can still reach your old data, including the 3D scene, the AMI stackHelper, the AMI loader...
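Putting the two examples together, a minimal cleanup sketch before loading the next file (the scene variable is an assumption; use whatever holds your three.js scene):

// drop every reference to the previous stack so the GC can reclaim it
scene.remove(stackHelper);   // detach the helper from the 3D scene
stackHelper.dispose();       // free AMI's internal resources
stackHelper = null;

loader.free();               // free the loader's memory
loader = null;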

Out of Memory exception in Windows Mobile project

Dear programmers, I wrote a program which targets the Windows Mobile platform (.NET CF 3.5).
My program has an answer-checking method, and this method shows dynamically created PictureBoxes, TextBoxes, and images in a new form. Here is the method logic:
private void ShowAnswer()
{
    PictureBox = new PictureBox();
    PictureBox.BackColor = Color.Red;
    PictureBox.Location = new Point(x, y);
    PictureBox.Name = "Name";
    PictureBox.Size = new Size(w, h);
    PictureBox.Image = new Bitmap("\\Image01.jpg");
}
My problem is memory leaks or something like that. If the user works with the program for approximately 30 minutes and runs the ShowAnswer() method several times, an Out of memory exception appears. I know the reason may be the memory allocation of the bitmaps, but I even handle the ShowAnswers form's closing event, manually trying to release all control resources and force the garbage collector:
foreach (Control cntrl in this.Controls)
{
    cntrl.Dispose();
    GC.Collect();
}
It seems like everything collects and disposes well: every time I check the task manager on my Windows Mobile device during tests, I see that memory was released and the child form was closed properly. But on every ShowAnswer() call and close I see a different memory amount in the device task manager (sometimes it uses 7.5 MB, sometimes 11.5, sometimes 9.5); it's different every time. And it seems like sometimes, when the method starts to run as usual, memory cannot be allocated and the Out of memory exception appears. Please advise me how to solve my problem. Maybe I should use other Dispose methods, or set the bitmap another way. Thank you in advance!
Depending on how you're handling the form generation, you might need to dispose of the old Image before loading a new one.
private void ShowAnswer()
{
    PictureBox = new PictureBox();
    PictureBox.BackColor = Color.Red;
    PictureBox.Location = new Point(x, y);
    PictureBox.Name = "Name";
    PictureBox.Size = new Size(w, h);
    if (PictureBox.Image != null) // depending on how you construct the form
        PictureBox.Image.Dispose();
    PictureBox.Image = new Bitmap("\\Image01.jpg");
}
However, you should also check before you load the image that it's not so obscenely large that it munches up all of your device's memory.
Edit: I don't just mean the size of the compressed image in memory - I also mean the physical size of the image (height & width). The Bitmap will create an uncompressed image that takes up much, much more memory than the file does in storage (height * width * 4 bytes). For a more in-depth explanation, check out the following SO question:
OutOfMemoryException loading big image to Bitmap object with the Compact Framework
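As a rough worked example of that height * width * 4 rule (the 1600 x 1200 figure is just an illustration, not from the question):

// A 1600 x 1200 JPEG that is only a few hundred KB on storage still needs
// roughly 1600 * 1200 * 4 bytes (about 7.3 MB) once decoded into a 32-bit Bitmap.
long bytesInMemory = 1600L * 1200L * 4;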

API to get the graphics or video memory

I want to get the adapter RAM or graphics RAM that you can see in Display settings or Device Manager, using an API. I am working in a C++ application.
I have tried searching the net, and as per my R&D I have come to the conclusion that we can get the graphics memory info from:
1. The DirectX SDK structure DXGI_ADAPTER_DESC. But what if I don't want to use the DirectX API?
2. Win32_VideoController: but this class does not always give you AdapterRAM info if the availability of the video controller is offline. I have checked this on Vista.
Is there any other way to get the graphics RAM?
There is NO way to directly get at graphics RAM on Windows; Windows prevents you from doing this, as it maintains control over what is displayed.
You CAN, however, create a DirectX device. Get the back buffer surface and then lock it. After locking you can fill it with whatever you want and then unlock and call present. This is slow, though, as you have to copy the video memory back across the bus into main memory. Some cards also use "swizzled" formats that it has to un-swizzle as it copies. This adds further time to doing it and some cards will even ban you from doing it.
In general you want to avoid directly accessing the video card and let Windows/DirectX do the drawing for you. Under D3D1x I'm pretty sure you can do it via an IDXGIOutput, though. It really is something to try and avoid though ...
You can write to a linear array via standard Win32 (this example assumes C), but it's quite involved.
First you need the linear array.
unsigned int* pBits = malloc( width * height * sizeof(unsigned int) );
Then you need to create a bitmap and select it to the DC.
HBITMAP hBitmap = ::CreateBitmap( width, height, 1, 32, NULL );
HGDIOBJ hOldBitmap = SelectObject( hDC, (HGDIOBJ)hBitmap );
You can then fill the pBits array as you please. When you've finished you can then set the bitmap's bits.
::SetBitmapBits( hBitmap, width * height * 4, (void*)pBits );
When you've finished using your bitmap don't forget to delete it (Using DeleteObject) AND free your linear array!
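For instance (hOldBitmap is the handle returned by SelectObject above):

// restore the DC's original bitmap, then release everything
SelectObject( hDC, hOldBitmap );
DeleteObject( (HGDIOBJ)hBitmap );
free( pBits );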
Edit: There is only one way to reliably get the video ram and that is to go through the DX Diag interfaces. Have a look at IDxDiagProvider and IDxDiagContainer in the DX SDK.
Win32_VideoController is your best course for getting the amount of gfx memory. That's how it's done in the Doom 3 source.
You say "..availability of video controller is offline. I have checked it on vista." Under what circumstances would the video controller be offline?
Incidentally, you can find the Doom 3 source here. The function you're looking for is called Sys_GetVideoRam and it's in a file called win_shared.cpp, although if you do a solution-wide search it'll turn it up for you.
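For reference, a hedged sketch of that Win32_VideoController route in C++ (WMI over COM; error handling omitted for brevity; note that AdapterRAM is reported in bytes as a 32-bit value, so it caps out at 4 GB):

#define _WIN32_DCOM
#include <comdef.h>
#include <Wbemidl.h>
#include <iostream>
#pragma comment(lib, "wbemuuid.lib")

int main()
{
    CoInitializeEx(0, COINIT_MULTITHREADED);
    CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_DEFAULT,
                         RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE, NULL);

    IWbemLocator* locator = NULL;
    CoCreateInstance(CLSID_WbemLocator, 0, CLSCTX_INPROC_SERVER,
                     IID_IWbemLocator, (LPVOID*)&locator);

    IWbemServices* services = NULL;
    locator->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), NULL, NULL, 0, NULL, 0, 0, &services);
    CoSetProxyBlanket(services, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, NULL,
                      RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE);

    // Ask WMI for the video controller's reported adapter RAM
    IEnumWbemClassObject* enumerator = NULL;
    services->ExecQuery(_bstr_t("WQL"),
                        _bstr_t("SELECT AdapterRAM FROM Win32_VideoController"),
                        WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                        NULL, &enumerator);

    IWbemClassObject* controller = NULL;
    ULONG returned = 0;
    while (enumerator &&
           SUCCEEDED(enumerator->Next(WBEM_INFINITE, 1, &controller, &returned)) &&
           returned != 0)
    {
        VARIANT ram;
        VariantInit(&ram);
        controller->Get(L"AdapterRAM", 0, &ram, 0, 0);    // bytes; may be VT_NULL if unknown
        if (ram.vt != VT_NULL && ram.vt != VT_EMPTY)
            std::cout << "AdapterRAM: " << ram.uintVal / (1024 * 1024) << " MB\n";
        VariantClear(&ram);
        controller->Release();
    }

    if (enumerator) enumerator->Release();
    if (services)   services->Release();
    if (locator)    locator->Release();
    CoUninitialize();
    return 0;
}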
User mode threads cannot access memory regions and I/O mapped from hardware devices, including the framebuffer. Anyway, why would you want to do that? Suppose you could access the framebuffer directly: now you must handle a LOT of possible pixel formats in the framebuffer. You can assume a 32-bit RGBA or ARGB organization, but there is also the possibility of 15/16/24-bit displays (RGB555, RGBA5551, RGBA4444, RGB565, RGB888...). That's if you don't also want to support the video-surface formats (overlays) such as YUV-based ones.
So let the display driver and/or the underlying APIs do that work.
If you want to write to a display surface (which is not exactly the same as framebuffer memory, although conceptually it's almost the same), there are a lot of options: DX, Win32, or you may try the SDL library (libsdl).

IronPython memory usage

I'm hosting IronPython in a C#-based web service to be able to provide custom extension scripts. However, I'm finding that memory usage increases sharply when I do simple load testing by executing the web service repeatedly in a loop.
IronPython-1.1 implemented IDisposable on its objects so that you can dispose of them when they are done. The new IronPython-2 engine based on the DLR has no such concept.
From what I understood, every time you execute a script in the ScriptEngine, a new assembly is injected into the AppDomain and can't be unloaded.
Is there any way around this?
You could try creating a new AppDomain every time you run one of your IronPython scripts. Although assemblies cannot be unloaded from memory, you can unload an AppDomain, and this will allow you to get the injected assembly out of memory.
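A minimal sketch of that idea, assuming the script execution is wrapped behind a MarshalByRefObject proxy (the ScriptRunner type and its Run method are placeholders, not part of IronPython's API):

// Run the scripts in a throwaway AppDomain; unloading the domain also unloads
// the assemblies IronPython injected into it.
AppDomain sandbox = AppDomain.CreateDomain("IronPythonSandbox");
try
{
    var runner = (ScriptRunner)sandbox.CreateInstanceAndUnwrap(
        typeof(ScriptRunner).Assembly.FullName,
        typeof(ScriptRunner).FullName);      // ScriptRunner : MarshalByRefObject (placeholder)
    runner.Run("extension.py");              // hosts the engine inside the sandbox domain
}
finally
{
    AppDomain.Unload(sandbox);               // frees the generated assemblies
}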
You need to disable the optimized code generation:
var runtime = Python.CreateRuntime();
var engine = runtime.GetEngine("py");
PythonCompilerOptions pco = (PythonCompilerOptions)engine.GetCompilerOptions();
pco.Module &= ~ModuleOptions.Optimized;

// this shouldn't leak now
while (true) {
    var code = engine.CreateScriptSourceFromString("1.0+2.0").Compile(pco);
    code.Execute();
}
Turns out, after aspnet_wp grows to about 500 MB, the garbage collector kicks in and cleans out the mess. Memory usage then drops to about 20 MB and steadily starts increasing again during load testing.
So there's no memory 'leak' as such.
