How to avoid "InsufficientMemory" decoding error using Rust Image crate? - image

I am trying to read an 8K 32-bit OpenEXR HDR file with Rust.
Using the Image crate to read the file:
use image::io::Reader as ImageReader;
let img = ImageReader::open(r"C:\Users\Marko\Desktop\HDR_Big.exr")
    .expect("File Error")
    .decode()
    .expect("Decode ERROR");
This results in a Decode ERROR: Limits(LimitError { kind: InsufficientMemory })
Reading a 4K file or smaller works fine.
I thought buffering would help so I tried:
use image::io::Reader as ImageReader;
use std::io::BufReader;
use std::fs::File;
let f = File::open(r"C:\Users\Marko\Desktop\HDR_Big.exr").expect("File Error");
let reader = BufReader::new(f);
let img_reader = ImageReader::new(reader)
    .with_guessed_format()
    .expect("Reader Error");
let img = img_reader.decode().expect("Decode ERROR");
But the same error results.
Is this a problem with the image crate itself? Can it be avoided?
If it makes any difference for the solution: after decoding the image, I use the raw data like this:
let data: Vec<f32> = img.to_rgb32f().into_raw();
Thanks!

But the same error results. Is this a problem with the image crate itself? Can it be avoided?
No, because it's not a problem, and yes, it can be avoided.
When an image library faces the open web, it's relatively easy to DoS the entire service or exhaust its memory cheaply, because it's usually possible to request huge images at very low cost (for instance, a 44 KB PNG can decompress to a 1 GB full-color buffer, and a megabyte-scale JPEG can reach GB-scale buffer sizes).
As a result, modern image libraries tend to set limits by default in order to limit the "default" liability of their users.
That is the case for image-rs: by default it does not set any width or height limits, but it does request that total allocations stay under 512 MiB.
If you wish for higher limits, or none at all, you can configure the decoder to match.
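For instance, a minimal sketch (assuming a recent 0.24.x release of the crate, where the Limits type and the reader's limits method are available) that keeps limit checking on but raises the allocation cap:

use image::io::Reader as ImageReader;
use image::Limits;

let mut reader = ImageReader::open(r"C:\Users\Marko\Desktop\HDR_Big.exr")
    .expect("File Error");
// Keep the width/height checks at their defaults, but allow up to 2 GiB of allocations.
let mut limits = Limits::default();
limits.max_alloc = Some(2 * 1024 * 1024 * 1024);
reader.limits(limits);
let img = reader.decode().expect("Decode ERROR");

For scale: an 8K (7680 x 4320) RGBA f32 pixel buffer alone is about 530 MB, so together with the decoder's working buffers the default 512 MiB cap is easily exceeded.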
All of this is surfaced by simply searching for the error name together with the library name (both "InsufficientMemory image-rs" and "LimitError image-rs" surfaced the information).

By default, image::io::Reader asks the decoder to fit the decoding process in 512 MiB of memory, according to the documentation. It's possible to disable this limitation using, e.g., Reader::no_limits.
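A minimal sketch of that route (again assuming a recent 0.24.x release; no_limits removes the allocation cap entirely, so only use it on files you trust):

use image::io::Reader as ImageReader;

let mut reader = ImageReader::open(r"C:\Users\Marko\Desktop\HDR_Big.exr")
    .expect("File Error");
// Remove the default 512 MiB decoding limit before calling decode().
reader.no_limits();
let img = reader.decode().expect("Decode ERROR");
let data: Vec<f32> = img.to_rgb32f().into_raw();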

Related

ESP32 websocket client frame fragmentation configuration for binary data

I am attempting to send camera frames to a WebSocket server from an ESP32 with a camera. I am using the ESP-IDF to implement the client WebSocket on the device. Specifically, I am using: https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-reference/protocols/esp_websocket_client.html.
I have noticed an issue where the server does not receive full frames from the device. I did some digging, read through RFC 6455, and became aware of the concept of frame fragmentation. My thinking now is that the camera frames (640 x 480 pixels) need to be fragmented before being sent. I read through the ESP-IDF docs and only saw one mention of fragmentation.
I feel comfortable enough to implement it myself, but it seems that there is no way to set the FIN bit, opcode, or any other parameters of the header described by RFC 6455 using the ESP-IDF WebSocket client library.
Does anyone have any idea how these parameters can be set, or if there is some way to enable native frame fragmentation on the device?
Looking at the source code, it appears that the message is already being fragmented automatically whenever it exceeds buffer_size.
The default buffer_size is 1024 bytes. You can change it in the config.
const esp_websocket_client_config_t websocket_cfg = {
    ...
    .buffer_size = 512,
};

Loading images takes my direct memory up | AS3

I'm only loading bitmaps, without even adding them to the stage, and each image takes my direct memory up. Large images take even more memory, so I'm wondering how to keep direct memory low even after loading those bitmaps. Or maybe I'm doing something wrong here or missing something?
var myBitmapHolder:Bitmap;
var bitmapLoader:Loader = new Loader();
// Event.COMPLETE is dispatched by the loader's contentLoaderInfo, not by the Loader itself.
bitmapLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, bitmapLoaded);
bitmapLoader.load(new URLRequest("myBitmap.png"));

private function bitmapLoaded(e:Event):void {
    myBitmapHolder = e.currentTarget.content;
}
After loading a bitmap, I store it using myBitmapHolder to access it upon request. I'm using more than 30 bitmaps; each one works the same way as the example above.
So... there is no free resource: you either load everything into memory, or you load and then unload each bitmap (or a few at a time). The latter will 'eat' some other resource instead, like CPU and network traffic.
First you have to remove the Event.COMPLETE listener inside the bitmapLoaded function: bitmapLoader.contentLoaderInfo.removeEventListener(Event.COMPLETE, bitmapLoaded); You also have to be sure that you load each bitmap only once. Look at these: AS3 - Memory management and What are good memory management techniques in Flash/AS3. You can also look at imageDecodingPolicy.

DirectX Texture interface to existing memory

I'm writing a rendering app that communicates with an image processor as a sort of virtual camera, and I'm trying to figure out the fastest way to write the texture data from one process to the awaiting image buffer in the other.
Theoretically I think it should be possible with one DirectX copy from VRAM directly to the area of memory I want it in, but I can't figure out how to specify a region of memory for a texture to occupy, and thus must perform an additional memcpy. DX9 or DX11 solutions would be welcome.
So far, the docs here: http://msdn.microsoft.com/en-us/library/windows/desktop/bb174363(v=vs.85).aspx have held the most promise.
"In Windows Vista CreateTexture can create a texture from a system memory pointer allowing the application more flexibility over the use, allocation and deletion of the system memory"
I'm running on Windows 7 with the June 2010 DirectX SDK. However, whenever I try to use the function in the way it specifies, the call fails with an invalid-arguments error code. Here is the call I tried as a test:
static char s_TextureBuffer[640*480*4]; //larger than needed
void* p = (void*)s_TextureBuffer;
HRESULT res = g_D3D9Device->CreateTexture(640,480,1,0, D3DFORMAT::D3DFMT_L8, D3DPOOL::D3DPOOL_SYSTEMMEM, &g_ReadTexture, (void**)p);
I tried with several different texture formats, but with no luck. I've begun looking into DX11 solutions, but it's going slowly since I'm used to DX9. Thanks!

How to use AudioConverterFillComplexBuffer and its callback?

I need a step-by-step walkthrough on how to use AudioConverterFillComplexBuffer and its callback. No, don't tell me to read the Apple docs. I do everything they say and the conversion always fails. No, don't tell me to go look for examples of AudioConverterFillComplexBuffer and its callback in use - I've duplicated about a dozen such examples, both line for line and modified, and the conversion always fails. No, there isn't any problem with the input data. No, it isn't an endian issue. No, the problem isn't my version of OS X.
The problem is that I don't understand how AudioConverterFillComplexBuffer works, so I don't know what I'm doing wrong. And nothing out there is helping me understand, because it seems like nobody on Earth really understands how AudioConverterFillComplexBuffer works, either. From the people who actually use it (I spy cargo-cult programming in their code) to even the authors of Learning Core Audio and/or Apple itself (http://stackoverflow.com/questions/13604612/core-audio-how-can-one-packet-one-byte-when-clearly-one-packet-4-bytes).
This isn't just a problem for me, it's a problem for anybody who wants to program high-performance audio on the Mac platform. Threadbare documentation that's apparently wrong and examples that don't work are no fun.
Once again, to be clear: I NEED A STEP-BY-STEP WALKTHROUGH ON HOW TO USE AudioConverterFillComplexBuffer plus its callback, and so does the entire Mac developer community.
This is a very old question but I think it is still relevant. I've spent a few days fighting this and have finally achieved a successful conversion. I'm certainly no expert but I'll outline my understanding of how it works. Note I'm using Swift, which I'm also just learning.
Here are the main function arguments:
inAudioConverter: AudioConverterRef: This one is simple enough, just pass in a previously created AudioConverterRef.
inInputDataProc: AudioConverterComplexInputDataProc: The very complex callback. We'll come back to this.
inInputDataProcUserData: UnsafeMutableRawPointer?: This is a reference to whatever data you may need to provide to the callback function. Important because even in Swift the callback can't inherit context. E.g. you may need to access an AudioFileID or keep track of the number of packets read so far.
ioOutputDataPacketSize: UnsafeMutablePointer<UInt32>: This one is a little misleading. The name implies it's the packet size but reading the documentation we learn it's the total number of packets expected for the output format. You can calculate this as outPacketCount = frameCount / outStreamDescription.mFramesPerPacket.
outOutputData: UnsafeMutablePointer<AudioBufferList>: This is an audio buffer list which you need to have already initialized with enough space to hold the expected output data. The size can be calculated as byteSize = outPacketCount * outMaxPacketSize.
outPacketDescription: UnsafeMutablePointer<AudioStreamPacketDescription>?: This is optional. If you need packet descriptions, pass in a block of memory the size of outPacketCount * sizeof(AudioStreamPacketDescription).
As the converter runs it will repeatedly call the callback function to request more data to convert. The main job of the callback is simply to read the requested number of packets from the source data. The converter will then convert the packets to the output format and fill the output buffer. Here are the arguments for the callback:
inAudioConverter: AudioConverterRef: The audio converter again. You probably won't need to use this.
ioNumberDataPackets: UnsafeMutablePointer<UInt32>: The number of packets to read. After reading, you must set this to the number of packets actually read (which may be less than the number requested if we reached the end).
ioData: UnsafeMutablePointer<AudioBufferList>: An AudioBufferList which is already configured except for the actual data. You need to initialise ioData.mBuffers.mData with enough capacity to hold the expected number of packets, i.e. ioNumberDataPackets * inMaxPacketSize. Set the value of ioData.mBuffers.mDataByteSize to match.
outDataPacketDescription: UnsafeMutablePointer<UnsafeMutablePointer<AudioStreamPacketDescription>?>?: Depending on the formats used, the converter may need to keep track of packet descriptions. You need to initialise this with enough capacity to hold the expected number of packet descriptions.
inUserData: UnsafeMutableRawPointer?: The user data that you provided to the converter.
So, to start you need to:
Have sufficient information about your input and output data, namely the number of frames and maximum packet sizes.
Initialise an AudioBufferList with sufficient capacity to hold the output data.
Call AudioConverterFillComplexBuffer.
And on each run of the callback you need to:
Initialise ioData with sufficient capacity to store ioNumberDataPackets of source data.
Initialise outDataPacketDescription with sufficient capacity to store ioNumberDataPackets of AudioStreamPacketDescriptions.
Fill the buffer with source packets.
Write the packet descriptions.
Set ioNumberDataPackets to the number of packets actually read.
Return noErr if successful.
Here's an example where I read the data from an AudioFileID:
var converter: AudioConverterRef?
// User data holds an AudioFileID, the input max packet size, and a count of packets read
var uData = (fRef, maxPacketSize, UnsafeMutablePointer<Int64>.allocate(capacity: 1))
err = AudioConverterNew(&inStreamDesc, &outStreamDesc, &converter)
err = AudioConverterFillComplexBuffer(converter!, { _, ioNumberDataPackets, ioData, outDataPacketDescription, inUserData in
    let uData = inUserData!.load(as: (AudioFileID, UInt32, UnsafeMutablePointer<Int64>).self)
    // Size the input buffer for the number of packets requested this round
    // (ioNumberDataPackets * inMaxPacketSize, as described above).
    ioData.pointee.mBuffers.mDataByteSize = ioNumberDataPackets.pointee * uData.1
    ioData.pointee.mBuffers.mData = UnsafeMutableRawPointer.allocate(byteCount: Int(ioData.pointee.mBuffers.mDataByteSize), alignment: 1)
    outDataPacketDescription?.pointee = UnsafeMutablePointer<AudioStreamPacketDescription>.allocate(capacity: Int(ioNumberDataPackets.pointee))
    // Read the packets; this updates the byte size and packet count to what was actually read.
    let err = AudioFileReadPacketData(uData.0, false, &ioData.pointee.mBuffers.mDataByteSize, outDataPacketDescription?.pointee, uData.2.pointee, ioNumberDataPackets, ioData.pointee.mBuffers.mData)
    uData.2.pointee += Int64(ioNumberDataPackets.pointee)
    return err
}, &uData, &numPackets, &bufferList, nil)
Again, I'm no expert; this is just what I've learned by trial and error.

API to get the graphics or video memory

I want to get the adapter RAM or graphics RAM which you can see in Display settings or Device Manager, using an API. I am working in a C++ application.
I have tried searching on the net, and as per my R&D I have come to the conclusion that we can get the graphics memory info from:
1. A DirectX SDK structure called DXGI_ADAPTER_DESC. But what if I don't want to use the DirectX API?
2. Win32_VideoController: but this class does not always give you AdapterRAM info if the availability of the video controller is offline. I have checked it on Vista.
Is there any other way to get the graphics RAM?
There is NO way to directly get at graphics RAM on Windows; Windows prevents you from doing this, as it maintains control over what is displayed.
You CAN, however, create a DirectX device, get the back buffer surface, and then lock it. After locking you can fill it with whatever you want, then unlock and call present. This is slow, though, as you have to copy the video memory back across the bus into main memory. Some cards also use "swizzled" formats that they have to un-swizzle as they copy, which adds further time, and some cards will even ban you from doing it.
In general you want to avoid directly accessing the video card and let Windows/DirectX do the drawing for you. Under D3D1x I'm pretty sure you can do it via an IDXGIOutput, but it really is something to try and avoid.
You can write to a linear array via standard Win32 (this example assumes C), but it's quite involved.
First you need the linear array.
unsigned int* pBits = malloc( width * height * sizeof(unsigned int) );
Then you need to create a bitmap and select it to the DC.
HBITMAP hBitmap = CreateBitmap( width, height, 1, 32, NULL );
SelectObject( hDC, (HGDIOBJ)hBitmap );
You can then fill the pBits array as you please. When you've finished you can then set the bitmap's bits.
SetBitmapBits( hBitmap, width * height * 4, (void*)pBits );
When you've finished using your bitmap, don't forget to delete it (using DeleteObject) AND free your linear array!
Edit: there is only one way to reliably get the video RAM, and that is to go through the DxDiag interfaces. Have a look at IDxDiagProvider and IDxDiagContainer in the DX SDK.
Win32_VideoController is your best course to get the amount of graphics memory. That's how it's done in the Doom 3 source.
You say "...the availability of the video controller is offline. I have checked it on Vista." Under what circumstances would the video controller be offline?
Incidentally, you can find the Doom 3 source here. The function you're looking for is called Sys_GetVideoRam and it's in a file called win_shared.cpp, although if you do a solution-wide search it'll turn it up for you.
User-mode threads cannot access memory regions and I/O mapped from hardware devices, including the framebuffer. Anyway, why would you want to do that? Suppose you could access the framebuffer directly: now you must handle a LOT of possible pixel formats. You can't simply assume a 32-bit RGBA or ARGB organization; there is the possibility of 15/16/24-bit displays (RGBA555, RGBA5551, RGBA4444, RGBA565, RGBA888...), and that's before you even consider the video-surface formats (overlays) such as YUV-based ones.
So let the display driver and/or the underlying APIs do that work.
If you want to write to a display surface (which is not exactly the same thing as framebuffer memory, although it's conceptually almost the same), there are a lot of options: DX, Win32, or you may try the SDL library (libsdl).
