I'm using OpenGL to implement some screen-space filters. For debugging purposes, I would like to save a bunch of textures so I can compare individual pixel values. The problem is that these 16-bit float textures have negative values. Do you know of any image file formats that support negative values? How could I export them?
Yes, there are such formats ... What you need are non-clamped floating-point formats. This is what I am using:
GL_LUMINANCE32F_ARB
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, size, size, 0, GL_LUMINANCE, GL_FLOAT, data);
Here is an example:
raytrace through 3D mesh
In there I am passing geometry in a float texture that is not clamped, so it has the full range, not just <0,1>. As you can see, the negative range is included too ...
There are a few other such formats, but once I found one that worked there was no point looking for more of them ...
You can verify the clamping using GLSL debug prints.
Also, IIRC there is some function to set the clamping behavior in OpenGL, but I have never used it (if my memory serves well).
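The function in question is most likely glClampColor (core since OpenGL 3.0). A one-line sketch of disabling the clamping of colors read back from the framebuffer:
glClampColor(GL_CLAMP_READ_COLOR, GL_FALSE); // affects glReadPixels; the GL_CLAMP_VERTEX_COLOR / GL_CLAMP_FRAGMENT_COLOR targets also exist but are deprecated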
[Edit1] exporting to file
This is a problem; unless you are exporting to ASCII or math matrix files, I do not know of any image format supporting negative pixel values...
So you need a workaround: simply convert the range to a non-negative one, like this:
float max = ???; // some value that is safely bigger than the biggest absolute value in your images
for (each pixel x,y)
{
    pixel(x,y) = 0.5*(max + pixel(x,y));
}
and then convert back if needed:
for (each pixel x,y)
{
    pixel(x,y) = 2.0*pixel(x,y) - max;
}
Another common way is to normalize to the range <0,1>, which has the advantage that from the RGB colors you can infer a 3D direction... (like on normal/bump maps):
pixel(x,y) = 0.5 + 0.5*pixel(x,y)/max;
and back
pixel(x,y) = 2.0*max*pixel(x,y)-max;
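For completeness, here is a minimal C++ sketch of the whole export path: read the float texture back with glGetTexImage, remap it with the normalization above, and dump an 8-bit PGM. The function name is mine and a single-channel LUMINANCE texture (compatibility profile, as in the snippet above) is assumed:
#include <GL/gl.h>
#include <vector>
#include <cstdio>
#include <cmath>

// read back a single-channel float texture, remap <-max,+max> -> <0,1>
// and save it as an 8-bit PGM so it opens in common viewers
void save_float_texture_pgm(GLuint tex, int size, float maxAbs, const char *name)
{
    std::vector<float> data(size * size);
    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_LUMINANCE, GL_FLOAT, data.data());

    FILE *f = std::fopen(name, "wb");
    if (!f) return;
    std::fprintf(f, "P5\n%d %d\n255\n", size, size);
    for (float v : data)
    {
        float n = 0.5f + 0.5f * v / maxAbs;   // <-max,+max> -> <0,1>
        if (n < 0.0f) n = 0.0f;
        if (n > 1.0f) n = 1.0f;
        unsigned char b = (unsigned char)std::lround(n * 255.0f);
        std::fwrite(&b, 1, 1, f);
    }
    std::fclose(f);
}
PGM is used here only because almost any viewer can open it; if you need exact per-pixel comparisons, dump the raw floats (or the remapped values at full precision) instead.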
Related
I am wanting to scale grayscale images (input masks, really) with discrete values up smoothly. The values in these images are indexes that represent arbitrary concepts (e.g. "terrain types"; they are usually indices into a table), rather than values on a continuous scale, so they can't be averaged or blended in any way.
Do there exist algorithms that can do this with a more pleasing result than nearest-neighbour, which results in a very blocky, pixelated result? I am looking for something that will at least produce more rounded, more fluid results. The kind of thing that would be ideal would be a whitepaper, or a library (preferably in Java).
I've researched the subject, but I can't find anything. There is plenty about linear or cubic interpolation, etc., but that won't work for indexed values. The only algorithm I ever see mentioned that does not try to average values is nearest-neighbour. But there must be more?
Using colour here for clarity. I do of course understand that the preferred result here is impossible; I'm not asking for something that reconstitutes destroyed information, just hoping for something that will at least guesstimate something smoother than the first result.
Scan the destination image and for every corresponding source pixel (non-integer coordinates) check if the colors of the four surrounding pixels are the same. If yes, assign that color.
If not, perform as many bilinear interpolations as there are different colors. For this assign the weight 1 for a given color (each in turn) and 0 for the others, and interpolate the weight. Finally, keep the color with the largest weight.
By analytical geometry, one can show that in bilinear interpolation, the iso-weight curves are arcs of hyperbola. If your magnification is large, you will see them. G1 continuity is not guaranteed. If this is an annoyance, you can work with G1 bicubic interpolation instead.
If this still does not satisfy you, you can try smooth approximating surfaces rather than interpolating ones. But the principle of keeping the color of maximum weight remains.
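To make that max-weight decision concrete, here is a small C++ sketch of the test at a single fractional source coordinate (the helper name and the flat int-label layout are mine, not anything standard):
#include <algorithm>
#include <cmath>
#include <map>

// For each label appearing among the 4 neighbours, bilinearly interpolate an
// indicator weight (1 for that label, 0 for the others) and keep the label
// whose interpolated weight is largest.
int dominant_label(const int *src, int w, int h, double x, double y)
{
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
    double fx = x - x0, fy = y - y0;

    int    lab[4] = { src[y0*w + x0], src[y0*w + x1], src[y1*w + x0], src[y1*w + x1] };
    double wgt[4] = { (1-fx)*(1-fy), fx*(1-fy), (1-fx)*fy, fx*fy };

    std::map<int, double> score;                 // accumulated weight per label
    for (int i = 0; i < 4; ++i) score[lab[i]] += wgt[i];

    int best = lab[0]; double bestW = -1.0;
    for (const auto &kv : score)
        if (kv.second > bestW) { bestW = kv.second; best = kv.first; }
    return best;
}
In the common case where all four neighbours share one label, the map holds a single entry and the function degenerates to the "assign that color" shortcut described above.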
If there aren't many distinct colors and you want to use ready-made functions, you can work this out as follows:
split the image in several binary images (white for a chosen color, black for background);
magnify all images (to grayscale) using your favorite method;
now implement yourself a function that assigns every pixel the color that has the largest value among the magnified images.
You can also apply a smoothing filter to the binary images before or after magnification (a code sketch of this recipe follows the illustration below).
For the sake of illustration, here is what you would get with two colors at a time (but this easily generalizes).
Color source image:
Smoothing applied to the binary equivalents:
Magnified:
Maximum weight decision:
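Here is a hedged OpenCV/C++ sketch of that split/magnify/max-weight recipe (the function and variable names are mine, and the label image is assumed to be single-channel 8-bit):
#include <opencv2/opencv.hpp>
#include <vector>

// Upscale an index/label image without blending labels:
// 1) split it into one binary mask per label,
// 2) magnify each mask with a smooth interpolator,
// 3) per output pixel keep the label whose magnified mask has the largest value.
cv::Mat upscale_labels(const cv::Mat &labels, int factor, const std::vector<int> &values)
{
    cv::Size outSize(labels.cols * factor, labels.rows * factor);
    std::vector<cv::Mat> mags;
    for (int v : values)
    {
        cv::Mat mask = (labels == v);   // 0/255 binary mask for this label
        cv::Mat mag;
        cv::resize(mask, mag, outSize, 0, 0, cv::INTER_CUBIC);
        mags.push_back(mag);
    }
    cv::Mat out(outSize, CV_8UC1, cv::Scalar(values[0]));
    for (int y = 0; y < outSize.height; ++y)
        for (int x = 0; x < outSize.width; ++x)
        {
            size_t best = 0;
            for (size_t i = 1; i < mags.size(); ++i)
                if (mags[i].at<uchar>(y, x) > mags[best].at<uchar>(y, x))
                    best = i;
            out.at<uchar>(y, x) = (uchar)values[best];
        }
    return out;
}
A Gaussian blur on the masks before the per-pixel maximum (as in the illustration) smooths the boundaries further, at the cost of eroding thin features.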
One thing you could try is to extract a polygon for the boundary of each uniformly-colored region, then upscale and draw the polygon in the output image. You won’t create neatly rounded edges, but you will avoid the stair-case effect of the nearest neighbor interpolation. Upscaling polygons should avoid gaps between the regions too.
I guess that smoothing the shape for each value individually is a way to avoid undesired mixed values.
To handle the values individually, I started from your nearest-neighbour image and created 3 images { A.bmp, B.bmp, C.bmp } by hand.
(Each image has only one color region and the background is black; e.g. A.bmp is shown below.)
After smoothing the shape in each image, draw these shapes into one result image buffer, each with a different color.
// I use C++ and OpenCV
#include <opencv2/opencv.hpp>
#include <string>

int main()
{
    const std::string FileNames[3] = { "A.bmp", "B.bmp", "C.bmp" };
    const cv::Scalar ResultShowColor[3] = { cv::Scalar(0,255,255), cv::Scalar(0,255,0), cv::Scalar(0,0,255) };
    cv::Mat Imgs[3];
    const int KernelSize = 15;
    for( int i=0; i<3; ++i )
    {   // binarize each mask, smooth it, then re-binarize
        Imgs[i] = cv::imread( FileNames[i], cv::IMREAD_GRAYSCALE );
        if( Imgs[i].empty() )return 0;
        cv::threshold( Imgs[i], Imgs[i], 32, 255, cv::THRESH_BINARY );
        cv::GaussianBlur( Imgs[i], Imgs[i], cv::Size(KernelSize,KernelSize), 0 );
        cv::threshold( Imgs[i], Imgs[i], 255*0.5, 255, cv::THRESH_BINARY );
        cv::imshow( FileNames[i], Imgs[i] );
    }
    // paint the smoothed shapes into one result image, one color per mask
    cv::Mat ResultImg = cv::Mat::zeros( Imgs[0].size(), CV_8UC3 );
    for( int i=0; i<3; ++i )
    {
        ResultImg.setTo( ResultShowColor[i], Imgs[i] );
    }
    cv::imshow( "ResultImg", ResultImg );
    if( cv::waitKey() == 's' ){ cv::imwrite( "ResultImg.png", ResultImg ); }
    return 0;
}
This is the result:
Yes, this result is not good enough; gaps exist at the boundaries of the shapes.
Therefore some more ingenuity is required... but I post this because it might give you a hint.
Background: I'm trying to solve this issue in iTerm2: https://gitlab.com/gnachman/iterm2/issues/6703#note_71933498
iTerm2 uses the default pixel format of MTLPixelFormatBGRA8Unorm for all its textures, including the MTKView's drawable. On my machine, if my texture function returns a color C then the Digital Color Meter app in "Display in sRGB" will show that the onscreen color is C. I don't know why this works! It should convert the raw color values from my texture function to sRGB, causing them to change, right? For example, here's what I see when my fragment function returns (0, 0.16, 0.20, 1) for the window's background color:
That's weird, but I could live with it until someone complained that it didn't work on their machine (see the issue linked above). For them, the value my fragment function returns goes through the Generic -> sRGB conversion and looks wrong. It's a hackintosh, so it could be a driver bug or something, but it prompted me to look into this more.
If I change the pixel format everywhere to MTLPixelFormatBGRA8Unorm_sRGB then I get colors I can't explain:
The fragment function remains unchanged, returning (0, 0.16, 0.2, 1). It's rendering to a texture I created with the MTLPixelFormatBGRA8Unorm_sRGB pixel format (not the drawable, if it matters, although the drawable has the same pixel format) and I can see in the GPU debugger that this texture is assigned the too-bright values you see above. In this screenshot the digital color meter is showing the color of the "Intermediate Texture" thumbnail:
Here's the fragment function (modified for debugging purposes):
fragment float4
iTermBackgroundColorFragmentShader(iTermBackgroundColorVertexFunctionOutput in [[stage_in]]) {
    return float4(0, 0.16, 0.20, 1);
}
I also tried hacking Apple's sample app (MetalTexturedMesh) to use the sRGB pixel format and return the same color from its fragment shader, and I got the same result. Here is the change I made to it. I can only conclude that I don't understand how Metal defines pixel formats, and I can't find any reasonable interpretation that would give the behavior that I see.
The raw bytes are 7C 6E 00 FF in BGRA. That's certainly not what digital color meter reported, but it's also super far from what I expect.
That is what the color meter reported. That byte sequence corresponds to (0, 0.43, 0.49).
I would have expected 33 29 00 ff. Since the texture function is hardcoded to return float4(0, 0.16, 0.20, 1); and the pixel format is sRGB, I'd expect 0.2 in the blue channel to produce 0.2*255=51 (decimal) = 0x33, not 0x7c.
Your expectation is incorrect. Within shaders, colors are always linear, never sRGB. If a texture is sRGB, then reading from it converts from sRGB to linear and writing to it converts from linear to sRGB.
Given the linear (0, 0.16, 0.2) in your shader, the conversion to sRGB described in section 7.7.7 of the Metal Shading Language spec (PDF) would produce (0, 0.44, 0.48) for floating-point and (0, 111, 124) a.k.a. (0, 0x6F, 0x7C) for RGBA8Unorm. That's very close to what you're getting, so the result seems correct for sRGB.
The conversion algorithm from linear to sRGB as given in the spec is:
if (isnan(c)) c = 0.0;
if (c > 1.0)
    c = 1.0;
else if (c < 0.0)
    c = 0.0;
else if (c < 0.0031308)
    c = 12.92 * c;
else
    c = 1.055 * powr(c, 1.0/2.4) - 0.055;
The conversion algorithm from sRGB to linear is:
if (c <= 0.04045)
    result = c / 12.92;
else
    result = powr((c + 0.055) / 1.055, 2.4);
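You can reproduce those numbers by running the linear-to-sRGB formula above over the hard-coded color; a small C++ check (std::pow standing in for Metal's powr):
#include <cmath>
#include <cstdio>

// linear -> sRGB per the formula above
static float linear_to_srgb(float c)
{
    if (std::isnan(c)) c = 0.0f;
    if (c > 1.0f) c = 1.0f;
    else if (c < 0.0f) c = 0.0f;
    else if (c < 0.0031308f) c = 12.92f * c;
    else c = 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
    return c;
}

int main()
{
    const float rgb[3] = { 0.0f, 0.16f, 0.20f };   // the shader's hard-coded color
    for (float c : rgb)
    {
        float s = linear_to_srgb(c);
        std::printf("%.3f -> %.3f (0x%02X)\n", c, s, (unsigned)std::lround(s * 255.0f));
    }
    // prints roughly: 0.000 -> 0.000 (0x00), 0.160 -> 0.437 (0x6F), 0.200 -> 0.485 (0x7C)
    return 0;
}
which matches the 7C 6E 00 bytes you observed, within a rounding step on the green channel.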
I can also see that it's clearly much brighter than the same color drawn without Metal (e.g., in -drawRect:).
How did you construct the color to draw for that case? Note that the generic (a.k.a. calibrated) RGB color space is not linear. It has a gamma of ~1.8. Core Graphics has kCGColorSpaceGenericRGBLinear which is linear.
The mystery is why you see a much darker color when you use a non-sRGB texture. The hard-coded color in your shader should always represent the same color. The pixel format of the texture shouldn't affect how it ultimately shows up (within rounding error). That is, it's converted from linear to the texture's pixel format and, on display, from the texture's pixel format to the display color profile. That's transitive, so the result should be the same as going directly from linear to the display color profile.
Are you sure you aren't pulling the bytes from that texture and then interpreting them as though they were sRGB? Or maybe using -newTextureViewWithPixelFormat:...?
I have this Python code:
cv2.addWeighted(src1, 4, cv2.GaussianBlur(src1, (0, 0), 10), -4, 128)
How can I convert it to Matlab? So far I got this:
f = imread0('X.jpg');
g = imfilter(f, fspecial('gaussian',[size(f,1),size(f,2)],10));
alpha = 4;
beta = -4;
f1 = f*alpha+g*beta+128;
I want to subtract the local mean color image.
Input image:
Blending output from OpenCV:
The documentation for cv2.addWeighted defines the call as follows:
cv2.addWeighted(src1, alpha, src2, beta, gamma[, dst[, dtype]]) → dst
Also, the operation performed to produce the output image is:
dst(I) = saturate(src1(I)*alpha + src2(I)*beta + gamma)
(source: opencv.org)
Therefore, what your code is doing is exactly correct... at least for cv2.addWeighted. You take alpha, multiply it by the first image, take beta, multiply it by the second image, then add gamma on top of that. The only intricacy left to deal with is saturate, which means that any values beyond the dynamic range of the data type you are dealing with are capped at that range. Because negatives can occur in the result, the saturate option simply means making any negative values 0 and any values greater than the expected maximum equal to that maximum. In this case, you'll want to make any values larger than 1 equal to 1. As such, it's a good idea to convert your image to double through im2double, because you want the additions and subtractions that go beyond the dynamic range to happen first, and only saturate afterwards. If you stay with the image's default precision (uint8), clipping happens during the intermediate arithmetic, before you ever get to the saturate step, and that gives you the wrong results. Because you're doing this double conversion, you'll also want to change the addition of 128 for your gamma to 0.5 to compensate.
Now, the only slight problem is your Gaussian blur. Looking at the documentation, by doing cv2.GaussianBlur(src1, (0, 0), 10) you are telling OpenCV to infer the mask size while the standard deviation is 10. MATLAB does not infer the size of the mask for you, so you need to do this yourself. A common practice is to simply take six times the standard deviation, take the floor and add 1. This applies to both the horizontal and vertical dimensions of the mask. You can see my post here on the justification as to why this is common practice: By which measures should I set the size of my Gaussian filter in MATLAB?
Therefore, in MATLAB, you would do this with your Gaussian blur instead. BTW, it's simply imread, not imread0:
f = im2double(imread('http://i.stack.imgur.com/kl3Md.jpg')); %// Change - Reading image directly from StackOverflow
sigma = 10; %// Change
sz = 1 + floor(6*sigma); %// Change
g = imfilter(f, fspecial('gaussian', sz, sigma)); %// Change
%// Rest of the code is the same
alpha = 4;
beta = -4;
f1 = f*alpha+g*beta+0.5; %// Change
%// Saturate
f1(f1 > 1) = 1;
f1(f1 < 0) = 0;
I get this image:
Take note that there is a slight difference in the way this appears between OpenCV and MATLAB... especially the haloing around the eye. This is because OpenCV infers the mask size for the Gaussian blur differently. I'm not sure exactly what it does, but specifying the mask size from the standard deviation, as above, is one of the most common heuristics for it. Play around with the standard deviation until you get something you like.
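For what it's worth, if you later want the OpenCV side to use the same explicit kernel size instead of the inferred one, a rough C++ sketch along the lines of the MATLAB code above (the file name is just a placeholder) would be:
#include <opencv2/opencv.hpp>
#include <cmath>

int main()
{
    cv::Mat f = cv::imread("kl3Md.jpg");              // placeholder file name
    if (f.empty()) return -1;
    f.convertTo(f, CV_32FC3, 1.0 / 255.0);            // work in floating point, like im2double

    double sigma = 10.0;
    int ksize = 1 + (int)std::floor(6.0 * sigma);     // 61 for sigma = 10, same heuristic as above
    cv::Mat g;
    cv::GaussianBlur(f, g, cv::Size(ksize, ksize), sigma);

    cv::Mat f1 = 4.0 * f - 4.0 * g + cv::Scalar::all(0.5);   // alpha*f + beta*g + gamma
    cv::min(f1, 1.0, f1);                                    // saturate to [0,1]
    cv::max(f1, 0.0, f1);

    cv::imshow("result", f1);
    cv::waitKey();
    return 0;
}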
I am a mechanical engineering student working on a project to automatically detect the weld seam (the seam is the edge that is to be welded) on workpieces in a workshop. This image shows the basic terminology involved in welding (http://i.imgur.com/Hfwjq0w.jpg).
To separate the weldment from the other objects, I have subtracted the foreground image containing the weldment from the background image to obtain only the weldment (http://i.imgur.com/v7yBWs1.jpg). After the image subtraction, the shadow, glare and remnant noise of the subtracted background are still present.
As I want to automatically identify only the weld seam without the outer boundary of the weldment, I have tried to detect the edges in the weldment image using the Canny algorithm and to eliminate the isolated noise using the function bwareaopen. I have somehow obtained the approximate boundary of the weldment and the weld seam. The thresholds I have used were found purely by trial and error, as I don't know a way to set a threshold automatically to detect them.
The problem I am now facing is that I can't specify a definite threshold, as this algorithm should be able to identify the seam of any material regardless of its surface texture, glare and shadow. I need some assistance to remove the glare, shadow and isolated points from the background-subtracted image.
Also, I need help to get rid of the outer boundary and obtain only a smooth weld seam from starting point to end point.
I have tried the following code:
a=imread('imageofworkpiece.jpg'); %http://i.imgur.com/3ngu235.jpg
b=imread('background.jpg'); %http://i.imgur.com/DrF6wC2.jpg
Ip = imsubtract(b,a);
imshow(Ip) % weldment separated %http://i.imgur.com/v7yBWs1.jpg
BW = rgb2gray(Ip);
c=edge(BW,'canny',0.05); % by trial and error
figure;imshow(c) % %http://i.imgur.com/1UQ8E3D.jpg
bw = bwareaopen(c, 100); % by trial and error
figure;imshow(bw) %http://i.imgur.com/Gnjy2aS.jpg
Can anybody please suggest an adaptive way to set a threshold and remove the outer boundary so as to detect only the seam? Thank you.
Well, this doesn't solve your problem of finding an automatic thresholding algorithm, but I can help with isolating the seam. The seam is along the y axis (will this always be the case?), so I used the Hough transform to isolate only near-vertical lines. Normally it finds all lines, but I restricted the theta search parameter. The code I'm using (which I got directly from the MATLAB website) happens to highlight the longest line segment, and that is, purely by coincidence, the weld seam. Still, using your bwareaopen'd image as input, the Hough line detector is able to find the seam. Of course it required a bit of playing around to work, so you are still stuck with your original problem of finding optimal settings somehow.
Maybe this can be a springboard for someone else.
a=imread('weldment.jpg'); %http://i.imgur.com/3ngu235.jpg
b=imread('weld_bg.jpg'); %http://i.imgur.com/DrF6wC2.jpg
Ip = imsubtract(b,a);
imshow(Ip) % weldment separated %http://i.imgur.com/v7yBWs1.jpg
BW = rgb2gray(Ip);
c=edge(BW,'canny',0.05); % by trial and error
bw = bwareaopen(c, 100); % by trial and error
figure(1);imshow(c) ;title('canny') % %http://i.imgur.com/1UQ8E3D.jpg
figure(2);imshow(bw);title('bw area open') %http://i.imgur.com/Gnjy2aS.jpg
[H,T,R] = hough(bw,'RhoResolution',1,'Theta',-15:5:15);
figure(3)
imshow(H,[],'XData',T,'YData',R,...
'InitialMagnification','fit');
xlabel('\theta'), ylabel('\rho');
axis on, axis normal, hold on;
P = houghpeaks(H,5,'threshold',ceil(0.5*max(H(:))));
x = T(P(:,2)); y = R(P(:,1));
plot(x,y,'s','color','white');
% Find lines and plot them
lines = houghlines(BW,T,R,P,'FillGap',2,'MinLength',30);
figure(4), imshow(BW), hold on
max_len = 0;
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
    % Plot beginnings and ends of lines
    plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow');
    plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');
    % Determine the endpoints of the longest line segment
    len = norm(lines(k).point1 - lines(k).point2);
    if (len > max_len)
        max_len = len;
        xy_long = xy;
    end
end
% highlight the longest line segment
plot(xy_long(:,1),xy_long(:,2),'LineWidth',2,'Color','blue');
From your image it looks like the weld seam will usually be very dark with a sharp intensity edge, so why don't you use that?
do not use background
create derivation image
dx[y][x]=pixel[y][x]-pixel[y][x-1]
do this for the whole image (if done in place then x must decrease in the loop!!!)
filter out all derivations lower than the threshold
if (|dx[y][x]|<threshold) dx[y][x]=0; else pixel[y][x]=255; // or whatever values you use
how to obtain the threshold value?
compute the min and max intensity and set the threshold as (max-min)*scale, where scale is a value lower than 1.0 (start with 0.02 or 0.1, for example) ...
do this also for the y axis
so compute dy[][] ... and combine dx[][] and dy[][] together, either with the OR or the AND logical function
filter out artifacts
you can use morphological filters or a smooth threshold for this. After all this you will have a mask of the weld seam pixels
if you need a bounding box then just loop through all pixels and remember the min,max x,y coords ...
[Notes]
if your images will have good lighting then you can ignore the derivation and threshold the intensity directly with something like:
threshold = 0.5*(average_intensity+lowest_intensity)
if you really want to fully automate this then you have to use adaptive thresholds. So try several thresholds in a loop and remember the result closest to the desired output based on geometry (size, position, etc.) ...
[edit1] I finally have some time/mood for this, so:
Intensity image threshold
You provided just a single image, which is far from enough to make a reliable algorithm. This is the result:
As you can see, without further processing this is not a good approach.
Derivation image threshold
threshold derivation by x (10%)
threshold derivation by y (5%)
AND combination of both 10% di/dx and 1.5% di/dy
The code in C++ looks like this (sorry, I do not use Matlab):
int x,y,i,i0,i1,tr2,tr3;
pic1=pic0; // copy input image pic0 to pic1
pic2=pic0; // copy input image pic0 to pic2 (just to resize to desired size for derivation)
pic3=pic0; // copy input image pic0 to pic3 (just to resize to desired size for derivation)
pic1.rgb2i(); // RGB -> grayscale
// abs derivate by x
for (y=pic1.ys-1;y>0;y--)
for (x=pic1.xs-1;x>0;x--)
{
i0=pic1.p[y][x ].dd;
i1=pic1.p[y][x-1].dd;
i=i0-i1; if (i<0) i=-i;
pic2.p[y][x].dd=i;
}
// compute min,max derivation
i0=pic2.p[1][1].dd; i1=i0;
for (y=1;y<pic1.ys;y++)
for (x=1;x<pic1.xs;x++)
{
i=pic2.p[y][x].dd;
if (i0>i) i0=i;
if (i1<i) i1=i;
}
tr2=i0+((i1-i0)*100/1000);
// abs derivate by y
for (y=pic1.ys-1;y>0;y--)
for (x=pic1.xs-1;x>0;x--)
{
i0=pic1.p[y ][x].dd;
i1=pic1.p[y-1][x].dd;
i=i0-i1; if (i<0) i=-i;
pic3.p[y][x].dd=i;
}
// compute min,max derivation
i0=pic3.p[1][1].dd; i1=i0;
for (y=1;y<pic1.ys;y++)
for (x=1;x<pic1.xs;x++)
{
i=pic3.p[y][x].dd;
if (i0>i) i0=i;
if (i1<i) i1=i;
}
tr3=i0+((i1-i0)*15/1000);
// threshold the derivation images and combine them
for (y=1;y<pic1.ys;y++)
for (x=1;x<pic1.xs;x++)
{
// copy original (pic0) pixel for non-thresholded areas, fill the rest with green color
if ((pic2.p[y][x].dd>=tr2)&&(pic3.p[y][x].dd>=tr3)) i=0x00FF00;
else i=pic0.p[y][x].dd;
pic1.p[y][x].dd=i;
}
pic0 is input image
pic1 is output image
pic2,pic3 are just temporary storage for derivations
pic?.xs,pic?.ys is the size of pic?
pic?.p[y][x].dd is pixel access (dd means access the pixel as a DWORD ...)
As you can see there is a lot of stuff around (not visible in the first image you provided), so you need to process this further:
segment and separate ...
use the Hough transform ...
filter out small artifacts ...
identify the object by expected geometry properties (aspect ratio, position, size)
Adaptive thresholds:
For this you need to know the desired output image properties (not possible to reliably deduce from a single input image). Then create a function that does the above processing with variable tr2,tr3. Try more options of tr2,tr3 in a loop (loop through all values, or iterate towards better results) and remember the best output; for that you also need some function that rates the quality of the output. For example:
quality=0.0; param=0.0;
for (a=0.2;a<=0.8;a+=0.1)
{
pic1=process_image(pic0,a);
q=detect_quality(pic1);
if (q>quality) { quality=q; param=a; pico=pic1; }
}
After this, pico should hold the relatively best thresholded image ... You should handle each threshold like this separately inside process_image; the targeted threshold must be scaled by a, for example tr2=i0+((i1-i0)*a);
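As an illustration only, a hypothetical detect_quality could score the thresholded mask by how close its fill ratio and bounding-box aspect ratio are to what you expect for a weld seam; the expected values below are placeholders you would have to tune for your setup (written against plain containers rather than my picture class so it is self-contained):
#include <cmath>
#include <vector>

// score in (0,1]; higher = closer to the expected fill ratio and aspect ratio
double detect_quality(const std::vector<std::vector<bool>> &mask,
                      double expectedFill = 0.02, double expectedAspect = 8.0)
{
    int ys = (int)mask.size(), xs = (int)mask[0].size();
    int count = 0, minx = xs, maxx = -1, miny = ys, maxy = -1;
    for (int y = 0; y < ys; ++y)
        for (int x = 0; x < xs; ++x)
            if (mask[y][x])
            {
                ++count;
                if (x < minx) minx = x;
                if (x > maxx) maxx = x;
                if (y < miny) miny = y;
                if (y > maxy) maxy = y;
            }
    if (count == 0) return 0.0;
    double fill   = (double)count / ((double)xs * ys);              // fraction of masked pixels
    double aspect = (double)(maxy - miny + 1) / (maxx - minx + 1);  // tall & thin -> large value
    double qf = 1.0 / (1.0 + std::fabs(fill   - expectedFill)   / expectedFill);
    double qa = 1.0 / (1.0 + std::fabs(aspect - expectedAspect) / expectedAspect);
    return qf * qa;
}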
I'm currently running code on the CPU that sums the columns and rows of a greyscale NSImage (i.e. only 1 sample per pixel). I thought I would try to move the code to the GPU (if possible). I found the Core Image filters CIRowAverage and CIColumnAverage, which seem similar.
In the Apple Docs on writing custom Core Image filters they state,
Keep in mind that your code can’t accumulate knowledge from pixel to pixel. A good strategy when writing your code is to move as much invariant calculation as possible from the actual kernel and place it in the Objective-C portion of the filter.
This hints that one may not be able to accumulate a sum of pixels using a filter kernel. If so, how do the above filters manage to compute an average over a region?
So my question is: what is the best way to implement summing the rows or columns of an image to get the total value of the pixels? Should I stick to the CPU?
The Core Image filters perform this averaging through a series of reductions. A former engineer on the team describes how this was done for the CIAreaAverage filter within this GPU Gems chapter (under section 26.2.2 "Finding the Centroid").
I talk about a similar averaging by reduction in my answer here. I needed this capability on iOS, so I wrote a fragment shader that reduced the image by a factor of four in both horizontal and vertical dimensions, sampling between pixels in order to average sixteen pixels into one at each step. Once the image was reduced to a small enough size, the remaining pixels were read out and averaged to produce a single final value.
This kind of reduction is still very fast to perform on the GPU, and I was able to extract an average color from a 640x480 video frame in ~6 ms on an iPhone 4. You'll of course have a lot more horsepower to play with on a Mac.
You could take a similar approach to this by reducing in only one direction or the other at each step. If you are interested in obtaining a sum of the pixel values, you'll need to watch out for precision limits in the pixel formats used on the GPU. By default, RGBA color values are stored as 8-bit values, but OpenGL (ES) extensions on certain GPUs can give you the ability to render into 16-bit or even 32-bit floating point textures, which extends your dynamic range. I'm not sure, but I believe that Core Image lets you use 32-bit float components on the Mac.
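To make the reduction scheme concrete, here is a CPU-side C++ sketch of the same idea; on the GPU each pass is one fragment-shader render into a quarter-sized texture, and edge rows/columns that don't fill a full 4x4 block are simply dropped here for brevity:
#include <vector>

// one reduction pass: average every 4x4 block of a single-channel float image into one value
std::vector<float> reduce4(const std::vector<float> &img, int w, int h)
{
    std::vector<float> out((w / 4) * (h / 4));
    for (int y = 0; y < h / 4; ++y)
        for (int x = 0; x < w / 4; ++x)
        {
            float sum = 0.0f;
            for (int dy = 0; dy < 4; ++dy)
                for (int dx = 0; dx < 4; ++dx)
                    sum += img[(4 * y + dy) * w + (4 * x + dx)];
            out[y * (w / 4) + x] = sum / 16.0f;   // average of 16 source pixels
        }
    return out;
}

// repeat the passes until the image is tiny, then read back and average what is left
float average(std::vector<float> img, int w, int h)
{
    while (w >= 4 && h >= 4) { img = reduce4(img, w, h); w /= 4; h /= 4; }
    float sum = 0.0f;
    for (float v : img) sum += v;
    return sum / img.size();
}
If you want a sum rather than an average, skip the division by 16 in each pass; that is exactly where the 8-bit precision limits mentioned above start to bite and float render targets become necessary.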
FYI on the CIAreaAverage filter—it's coded like this:
CGRect inputExtent = [self.inputImage extent];
CIVector *extent = [CIVector vectorWithX:inputExtent.origin.x
                                       Y:inputExtent.origin.y
                                       Z:inputExtent.size.width
                                       W:inputExtent.size.height];
CIImage* inputAverage = [CIFilter filterWithName:@"CIAreaAverage" keysAndValues:@"inputImage", self.inputImage, @"inputExtent", extent, nil].outputImage;

//CIImage* inputAverage = [self.inputImage imageByApplyingFilter:@"CIAreaMinimum" withInputParameters:@{@"inputImage" : inputImage, @"inputExtent" : extent}];
EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
NSDictionary *options = @{ kCIContextWorkingColorSpace : [NSNull null] };
CIContext *myContext = [CIContext contextWithEAGLContext:myEAGLContext options:options];

size_t rowBytes = 32; // ARGB has 4 components
uint8_t byteBuffer[rowBytes]; // Buffer to render into
[myContext render:inputAverage toBitmap:byteBuffer rowBytes:rowBytes bounds:[inputAverage extent] format:kCIFormatRGBA8 colorSpace:nil];

const uint8_t* pixel = &byteBuffer[0];
float red   = pixel[0] / 255.0;
float green = pixel[1] / 255.0;
float blue  = pixel[2] / 255.0;
NSLog(@"%f, %f, %f\n", red, green, blue);
Your output should look something like this:
2015-05-23 15:58:20.935 CIFunHouse[2400:489913] 0.752941, 0.858824, 0.890196
2015-05-23 15:58:20.981 CIFunHouse[2400:489913] 0.752941, 0.858824, 0.890196