Halide function failing with an error `Input buffer b0 is accessed at 2, which is beyond the max (-1) in dimension 2` - halide

I'm converting the COCO2017 dataset to RAW Bayer format using the Approx-Vision library. It works just fine for most of the images, but fails for others.
I am using pipeline_V2.cpp, which is run by this Python script. For some images it fails with the following error:
root@167545c2c5e4:/approx-vision/pipelines/common# ./pipeline_V2.o /datasets/000000431848.png /datasets/
Error at ./pipeline_V2.cpp:153:
Input buffer b0 is accessed at 2, which is beyond the max (-1) in dimension 2
Aborted (core dumped)
Does anyone know why that is happening or how to fix it?
Normally it is supposed to output a RAW Bayer image in .png format.

The problem originates from the image channels. The input image must have 3 channels, i.e. shape [HxWxC], where C is the number of channels. Some COCO images are grayscale, so the channel dimension (dimension 2) has extent 0 and its max index is -1; the pipeline's access at index 2 is then out of range, which is exactly what the error says.
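A minimal preprocessing sketch, assuming OpenCV is available (the file path is just the one from the error above), that forces every image into a 3-channel HxWxC layout before running the pipeline:

import cv2

# Force a 3-channel HxWxC layout before handing the image to pipeline_V2.o.
path = "/datasets/000000431848.png"
img = cv2.imread(path, cv2.IMREAD_UNCHANGED)

if img.ndim == 2:                               # grayscale: HxW, no channel axis
    img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
elif img.shape[2] == 4:                         # BGRA: drop the alpha channel
    img = cv2.cvtColor(img, cv2.COLOR_BGRA2BGR)

cv2.imwrite(path, img)                          # now guaranteed HxWx3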

Related

How to avoid "InsufficientMemory" decoding error using Rust Image crate?

I am trying to read an 8K 32bit OpenEXR HDR file with Rust.
Using the Image crate to read the file:
use image::io::Reader as ImageReader;
let img = ImageReader::open(r"C:\Users\Marko\Desktop\HDR_Big.exr")
    .expect("File Error")
    .decode()
    .expect("Decode ERROR");
This results in a Decode ERROR: Limits(LimitError { kind: InsufficientMemory })
Reading a 4K file or smaller works fine.
I thought buffering would help so I tried:
use image::io::Reader as ImageReader;
use std::io::BufReader;
use std::fs::File;
let f = File::open(r"C:\Users\Marko\Desktop\HDR_Big.exr").expect("File Error");
let reader = BufReader::new(f);
let img_reader = ImageReader::new(reader)
    .with_guessed_format()
    .expect("Reader Error");
let img = img_reader.decode().expect("Decode ERROR");
But the same error results.
Is this a problem with the image crate itself? Can it be avoided?
If it makes any difference for the solution after decoding the image I use the raw data like this:
let data: Vec<f32> = img.to_rgb32f().into_raw();
Thanks!
But the same error results. Is this a problem with the image crate itself? Can it be avoided?
No, it's not a problem with the crate itself, and yes, it can be avoided.
When an image library faces the open web, it's relatively easy to DoS the entire service or exhaust its memory cheaply, as it's usually possible to request huge images at a very low cost (for instance, a 44KB PNG can decompress to a 1GB full-color buffer, and a megabyte-scale JPEG can reach GB-scale buffer sizes).
As a result, modern image libraries tend to set limits by default in order to bound the "default" liability of their users.
That is the case for image-rs: by default it does not set any width or height limits, but it does request that the allocator limit itself to 512MB.
If you wish for higher or no limits, you can configure the decoder to match.
All of this is surfaced by simply searching for the error name and the library (both "InsufficientMemory image-rs" and "LimitError image-rs" surface the information).
By default, image::io::Reader asks the decoder to fit the decoding process in 512 MiB of memory, according to the documentation. It's possible to disable this limitation using, e.g., Reader::no_limits.
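A minimal sketch of that, assuming the 0.24-era API of the image crate (where no_limits takes &mut self):

use image::io::Reader as ImageReader;

fn main() {
    // Open the reader, then lift the default 512 MiB allocation limit
    // before decoding.
    let mut reader = ImageReader::open(r"C:\Users\Marko\Desktop\HDR_Big.exr")
        .expect("File Error");
    reader.no_limits(); // removes the default decoding limits entirely

    let img = reader.decode().expect("Decode ERROR");
    let data: Vec<f32> = img.to_rgb32f().into_raw();
    println!("decoded {} f32 samples", data.len());
}

Alternatively, Reader::limits lets you raise the allocation cap instead of removing it altogether.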

Why does Trackpy give me an error when I try to compute the overall drift speed?

I'm going through the Trackpy walkthrough (http://soft-matter.github.io/trackpy/v0.3.0/tutorial/walkthrough.html) but using my own pictures. When I get to calculating the overall drift velocity, I get this error and I don't know what it means: [screenshot of the error traceback]
I don't have a ton of coding experience so I'm not even sure how to look at the source code to figure out what's happening.
Your screenshot shows the traceback of the error: you called a function, tp.compute_drift(), but this function called another function, pandas_sort(), which called another function, and so on, until raise ValueError(msg) was reached, which interrupts the chain. The last line is the actual error message:
ValueError: 'frame' is both an index level and a column label, which is ambiguous.
To understand it, you have to know that Trackpy stores data in DataFrame objects from the pandas library. The tracking data you want to extract drift motion from is stored in such an object, t2. If you print t2 it will probably look like this:
                y            x      mass  ...        ep  frame  particle
frame
0       46.695711  3043.562648  3.881068  ...  0.007859      0         0
3979  3041.628299  1460.402493  1.787834  ...  0.037744      0         1
3978  3041.344043  4041.002275  4.609833  ...  0.010825      0         2
The word "frame" is the title of two columns, which confuses the sorting algorithm. As the error message says, it is ambiguous to sort the table by frame.
Solution
The index (leftmost) column does not need a name here, so remove it with
t2.index.name = None
and try again. Also check that you have the newest Trackpy and pandas versions.
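For illustration, a minimal pandas-only reproduction of the ambiguity and the fix (the DataFrame here is made up):

import pandas as pd

# A column and the index both named 'frame'
df = pd.DataFrame({'frame': [0, 0, 1], 'x': [1.0, 2.0, 3.0]})
df.index.name = 'frame'

# df.sort_values('frame')   # raises:
# ValueError: 'frame' is both an index level and a column label, which is ambiguous.

df.index.name = None          # the fix: drop the index name
df = df.sort_values('frame')  # now unambiguous
print(df)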

Sampling using new VideoReader readFrame() function in MATLAB [duplicate]

I am trying to process a video in MATLAB that I read in using VideoReader. I can process the frames without a problem, but I only want to process every fifth frame. I tried using the step function, but this doesn't work on my VideoReader object. Right now I can call readFrame five times, but this obviously slows down the whole process (it's a lot of video material). How can I efficiently skip five frames, process one frame, skip another five, and so on, in MATLAB?
Error message:
Undefined function 'step' for input arguments of type 'VideoReader'.
However, calling the help function on step gets me this example:
WORKED=step(VR,DELTA)
Moves the frame counter by DELTA frames for video VR. This is a
generalization of NEXT. Returns 0 on an unsuccessful step. Note that
not all plugins support stepping, especially with negative numbers. In
the following example, both IM1 and IM2 should be the same for most
plugins.
vr = videoReader(...myurl...);
if (~next(vr)), error('couldn''t read first frame'); end
im1 = getframe(vr);
if (~step(vr,-1)), error('could not step back to frame 0'); end
im2 = getframe(vr);
if (any(im1 ~= im2)),
    error('first frame and frame 0 are not the same');
end
vr = close(vr);
FNUM should be an integer.
After the videoReader constructor is called, NEXT, SEEK, or step should
be called at least once before GETFRAME is called.
Here, step is clearly called on a VideoReader object, is it not? Help would be greatly appreciated.
I've had this issue too. Without using deprecated code, the only way to do what you are trying is to call readFrame five times for every output frame. This is slow and very inefficient. However, if you use the deprecated read method (and assuming your video is a file rather than a stream), you can specify a frame number as well. I don't know why The MathWorks have gone backwards on this. I suggest that you file a service request to ask about it and say why this functionality is important to you.
In the meantime, you can try out my frame2jpg function that extracts particular frames from a video file. It tries to use the deprecated read method and falls back to readFrame if that fails. I've found the read method to be ten times faster in my own application with 1080p 60 fps MPEG-4 video. Feel free to modify the code to suit your needs.
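For instance, a rough sketch of that indexed approach (the file name is illustrative; read(vr, k) and NumberOfFrames are the deprecated parts):

% Grab every fifth frame via the deprecated indexed read.
vr = VideoReader('myvid.mp4');
nFrames = vr.NumberOfFrames;   % accessing this property uses the deprecated interface
for k = 1:5:nFrames
    frame = read(vr, k);       % deprecated, but allows direct frame indexing
    % ... process frame here ...
end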
Don't know if this is still of use, but I've found a way to work around the issue.
Since readFrame reads the frame at the position given by the vid.CurrentTime property, you can simply advance that property by the number of frames you want to skip.
vid = VideoReader('myvid.mpeg');
vidFig = figure();
currAxes = axes;
n = 10;                        % number of frames to skip per iteration
while hasFrame(vid)
    vidFrame = readFrame(vid); % readFrame also advances CurrentTime by one frame
    vid.CurrentTime = vid.CurrentTime + n/vid.FrameRate; % jump ahead n frames
    image(vidFrame, 'Parent', currAxes);
    currAxes.Visible = 'off';
end
Changing the value of n changes how many frames the video skips on every loop iteration. I hope this helps.

Matlab cannot natively read multipage tiff files beyond 2^16 frames

I am trying to read a multipage tiff which is 128 pixels x 128 pixels x 122000 frames. Reading the file with the following code:
InfoImage = imfinfo(fname);
mImage = InfoImage(1).Width;
nImage = InfoImage(1).Height;
NumberImages = length(InfoImage);
image = zeros(nImage, mImage, NumberImages, 'uint16');
TifLink = Tiff(fname, 'r');
for i = 1:NumberImages
    TifLink.setDirectory(i);
    image(:,:,i) = TifLink.read();
end
TifLink.close();
produces the following error:
Error using tifflib
Input argument out of range.
Error in Tiff/setDirectory (line 1277)
tifflib('setDirectory',obj.FileID,dirNum-1);
Error in TiffReader (line 18)
TifLink.setDirectory(i);
at exactly i = 65537, or 2^16 + 1.
It seems that MATLAB thinks a TIFF cannot possibly contain more than 65536 frames, which is clearly not the case, because I have one that opens just fine in ImageJ.
Does anyone know what the problem might be?
The TIFFStack library for Matlab is able to import these files.
https://github.com/DylanMuir/TIFFStack
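A brief usage sketch, assuming TIFFStack is on the MATLAB path (the frame index is illustrative):

% TIFFStack maps the file lazily and is indexed like a normal array,
% so frame indices past 2^16 work.
tsStack = TIFFStack(fname);
frame = tsStack(:, :, 70000);  % a frame index beyond 65536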

How to use output of CIFilter recursively as new input?

I've written my own CIFilter kernel which does some image processing on the camera signal. It takes two arguments:
Argument one is "inputImage" (the current camera image); argument two is "backgroundImage", which is initialized with the first camera image.
The filter is supposed to work recursively: the result of the filter should be used as the new "backgroundImage" in the next iteration. I am calculating a background image and some variances, and therefore need the result from the previous render.
Unfortunately I cannot use the output CIImage of the CIFilter in the next iteration, because the memory load goes up and up. After 10 seconds of processing it ends up with 1.4GB of RAM usage. Using the filter in a standard manner (without recursion), memory management is fine.
How can I reuse the output of a filter as input in the next iteration?
I've done an NSLog on the result image, and it told me:
background {
CISampler:0x1002e0360 image {
FEPromise: 0x1013aa230 extent [0 0 1280 720]; DOD [0 0 1280 720]; filter MyFeatureDetectFilter: 0x101388dd0; kernel coreImageKernel; image {
CISampler:0x10139e200 image {
FEBufferImage: 0x10139bee0 metadata.colorspace: HDTV; extent: [0 0 1280 720]; format: BGRA_8; uid 5
}
After some seconds the log becomes something like
}
}
}
}
}
This tells me that CIImages are 'always' just recipes (prototypes) for the desired operation, and using them recursively just nests the resulting 'prototype' as input into the new one.
Over time, the 'rule' for rendering blows up into a huge structure of nested prototypes.
Is there any way to force CIImages to flatten the structure inside memory?
I would be happy if I could do recursive processing, because this would blow up the power of QuartzCore to the extreme.
I tried the same in QuartzComposer. Connecting the output with the input works, but takes a lot of memory, too. After some time it crashes. Then I tried to use the Queue from QC and everything worked fine. What is the "xcode" equivalent of the QC Queue? Or is there any mechanism to rewrite my kernel to keep "results" in memory for the next iteration?
It seems like what you're looking for is the CIImageAccumulator class. This allows you to use the output of a filter as its input on the next iteration.
Edit:
For an example of how to use it, you can check out this Apple sample code.
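A rough sketch of the accumulator pattern in Objective-C (myFilter, firstCameraImage, and currentCameraImage are placeholder names; the extent and pixel format are taken from the log above):

// Create an accumulator matching the camera extent and format from the log.
CIImageAccumulator *accumulator =
    [[CIImageAccumulator alloc] initWithExtent:CGRectMake(0, 0, 1280, 720)
                                        format:kCIFormatBGRA8];

// Seed it once with the first camera frame.
[accumulator setImage:firstCameraImage];

// Then, for every new camera frame:
[myFilter setValue:currentCameraImage forKey:@"inputImage"];
[myFilter setValue:accumulator.image forKey:@"backgroundImage"];

// Writing the filter output back into the accumulator renders it into a
// real buffer, so the recipe graph does not keep nesting.
[accumulator setImage:myFilter.outputImage];
CIImage *result = accumulator.image;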
