Read RGBA values of pixel from surface/texture with SDL2? - pascal

How do I read the RGBA values of a specific pixel, at given coordinates x and y, from a surface or texture with SDL2 in Free Pascal?

var
  spriteSheetTexture: PSDL_Texture;
  spriteSheetSurface: PSDL_Surface;
  pixel: array[0..3] of UInt8;  // buffer for one RGBA pixel
  pixelCnt: byte;
  SDLRect: TSDL_Rect;
begin
  spriteSheetSurface := SDL_LoadBMP(PChar('spr4\sprite sheets\spr' + fVal(spriteSheetNum) + '.spr'));
  // read a 1x1 region at (0,0); format 0 means "use the rendering target's own format"
  SDLRect.x := 0; SDLRect.y := 0; SDLRect.w := 1; SDLRect.h := 1;
  SDL_RenderReadPixels(SDLRenderer, @SDLRect, 0, @pixel, spriteSheetSurface^.pitch);
  for pixelCnt := 0 to 3 do
    writeLn(pixel[pixelCnt]);
Actually it doesn't seem to work: pixel returns zero for every index, no matter which pixel I read.
I was right when I said SDL_RenderReadPixels involves a texture. The SDL2 documentation alludes to it: "Use this function to read pixels from the current rendering target". The current rendering target is either a texture or the SDL screen (https://wiki.libsdl.org/SDL_RenderReadPixels), not a surface loaded with SDL_LoadBMP, which is why the call above reads nothing useful.
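If the goal is to read a pixel from the loaded surface itself, one alternative is to bypass the renderer and index the surface's pixel buffer directly. Here is a minimal C++ sketch (the same SDL2 calls are available through the Free Pascal bindings); it assumes a surface with a 4-bytes-per-pixel format:

#include <SDL.h>
#include <cstdio>

// Print the RGBA value of the pixel at (x, y), read straight from a surface.
// Assumes the surface uses a 4-bytes-per-pixel format.
void ReadSurfacePixel(SDL_Surface *surf, int x, int y)
{
    if (SDL_MUSTLOCK(surf))
        SDL_LockSurface(surf);                        // direct pixel access may need a lock

    Uint8 *row = (Uint8 *)surf->pixels + y * surf->pitch;
    Uint32 value = *(Uint32 *)(row + x * surf->format->BytesPerPixel);

    Uint8 r, g, b, a;
    SDL_GetRGBA(value, surf->format, &r, &g, &b, &a); // decode using the surface's own format

    if (SDL_MUSTLOCK(surf))
        SDL_UnlockSurface(surf);

    printf("%u %u %u %u\n", r, g, b, a);
}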

Related

Transforming raw pixel using rescale slope and rescale intercept in DICOM

I used the solution in this post Window width and center calculation of Dicom Image to transform the raw pixel values. It works well for most images, but I ran into problems with some of them. Those images have a pixel value of 24, a rescale slope of 1.0 and a rescale intercept of -1024.
When I apply the solution mentioned above, I get a negative new pixel value (-1000).
I can't find this new pixel value in the lookup table built from window level and window width, because the lookup table only contains positive values (0 to 65536). Please help me solve this problem.
You are probably dealing with CT images. The RescaleIntercept tag for CTs is usually set to -1024. The negative value -1000 you obtain makes perfect sense: 24 * 1.0 + (-1024) = -1000, which corresponds to air in Hounsfield units (as Anders said). Now if you want to visualize the image, you have to apply a transfer function that maps the HU scale to RGB, for instance.
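To make that concrete, here is a minimal C++ sketch of the usual two-step pipeline: apply the modality transform first, then a linear window/level mapping to get a displayable value (the window center/width names and values are illustrative, not taken from the question):

#include <algorithm>
#include <cstdint>

// Step 1: modality transform (RescaleSlope / RescaleIntercept) gives HU.
// Step 2: linear window/level maps a HU range onto 0..255 for display.
uint8_t rawToDisplay(int raw, double slope, double intercept,
                     double windowCenter, double windowWidth)
{
    double hu  = raw * slope + intercept;           // e.g. 24 * 1.0 - 1024 = -1000 HU (air)
    double low = windowCenter - windowWidth / 2.0;  // HU value shown as black
    double t   = (hu - low) / windowWidth;          // position inside the window, 0..1
    t = std::clamp(t, 0.0, 1.0);                    // values outside the window saturate
    return static_cast<uint8_t>(t * 255.0 + 0.5);
}

// With a typical soft-tissue window (center 40, width 400),
// -1000 HU falls below the window and is simply rendered black.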

Pixel movement C++

This may or may not be a very stupid question so I do apologise, but I haven't come across this in any books or tutorials as yet. Also I guess it can apply to any language...
Assume you create a window of size: 640x480 and an object/shape inside it of size 32x32 and you're able to move the shape around the window with keyboard inputs.
Does it matter what type (int, float...) you use to control the movement of the shape? Obviously you cannot draw halfway through a pixel, but if you move the shape by 0.1f (for example with a glTranslation function), what happens, as opposed to moving it by an int of 1? Does it move the rendered shape by 1/10 of a pixel?
I hope I've explained that well enough not to be laughed at.
I only ask this because it can affect the precision of collision detection and other functions of a program or potential game.
glTranslate produces a translation by (x, y, z). The current matrix (see glMatrixMode) is multiplied by this translation matrix, with the product replacing the current matrix, as if glMultMatrix were called with the following matrix for its argument:
1 0 0 x
0 1 0 y
0 0 1 z
0 0 0 1
If the matrix mode is either GL_MODELVIEW or GL_PROJECTION, all objects drawn after a call to glTranslate are translated.
Use glPushMatrix and glPopMatrix to save and restore the untranslated coordinate system.
This means that glTranslate gives you a translation to combine with the current matrix; on screen you cannot draw half a pixel. glTranslate accepts either doubles or floats, so if you want to move by 1 in x, y or z, just give the function a float 1 or double 1 as the argument.
http://www.opengl.org/sdk/docs/man2/xhtml/glTranslate.xml
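For example, here is a minimal sketch of the save/translate/restore pattern described above (drawSprite is a hypothetical placeholder for your own drawing code):

glMatrixMode(GL_MODELVIEW);
glPushMatrix();                    // save the untranslated coordinate system
glTranslatef(posX, posY, 0.0f);    // posX/posY are floats and may be fractional
drawSprite();                      // hypothetical: draws the 32x32 shape at the origin
glPopMatrix();                     // restore the coordinate system afterwards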
The most important reason for using floats or doubles to represent position is the background calculation. If you keep calculating your position with ints, not only do you probably need conversion steps to get back to ints, you will also lose data every so many steps.
If you want to animate your sprite with less than 1 pixel of movement per update then YES, you need to use floating point, otherwise you will get no movement. Your drawing function will most likely round to the nearest integer, so it's probably not relevant there; however, you can of course draw to sub-pixel accuracy!
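A minimal sketch of that pattern, with hypothetical names: accumulate the position in floating point every update and round only when drawing.

float posX = 0.0f;                 // position kept in floating point
const float speed = 0.1f;          // movement per update: less than one pixel

void update()
{
    posX += speed;                 // fractional movement accumulates here
}

void draw()
{
    int screenX = (int)(posX + 0.5f);  // round to a whole pixel only at draw time
    drawShapeAt(screenX);              // hypothetical drawing call
}

// Had posX been an int, posX += 0.1f would truncate back to the same
// value every update and the shape would never move.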

Matlab - How to obtain values of pixels?

If I have an image, how can I obtain the values of each pixel in that image using MATLAB?
Thanks.
Images are matrices (2D if grayscale, 3D if colored) in MATLAB.
You can use x(i,j) to access a pixel at location (i,j) in a grayscale image.
If the image is colored, you can use x(i,j,:) to access the r, g, b values in a 3-vector, respectively. If you need individual channels, then, you can use x(i,j,1) for red for example.
You may read this page to learn more.
You can use reshape to extract all the pixel values of the image into a vector with pixel values:
frame = imread('picture.jpg');
frame_size = size(frame);
allpixels = reshape(frame, frame_size(1)*frame_size(2), frame_size(3))
This can be useful when you want to vectorize your MATLAB code (to avoid a for loop that visits every pixel). To get back the original image representation:
frame2 = reshape(allpixels, frame_size);
To get the values at pixel (1,1), simply write image(1,1).

Kinect: Converting from RGB Coordinates to Depth Coordinates

I am using the Windows Kinect SDK to obtain depth and RGB images from the sensor.
Since the depth image and the RGB images do not align, I would like to find a way of converting the coordinates of the RGB image to that of the depth image, since I want to use an image mask on the depth image I have obtained from some processing on the RGB image.
There is already a method for converting depth coordinates to the color space coordinates:
NuiImageGetColorPixelCoordinatesFromDepthPixel
unfortunately, the reverse does not exist. There is only an arcane call in INUICoordinateMapper:
HRESULT MapColorFrameToDepthFrame(
    NUI_IMAGE_TYPE eColorType,
    NUI_IMAGE_RESOLUTION eColorResolution,
    NUI_IMAGE_RESOLUTION eDepthResolution,
    DWORD cDepthPixels,
    NUI_DEPTH_IMAGE_PIXEL *pDepthPixels,
    DWORD cDepthPoints,
    NUI_DEPTH_IMAGE_POINT *pDepthPoints
)
How this method works is not very well documented. Has anyone used it before?
I'm on the verge of performing a manual calibration myself to calculate a transformation matrix, so I would be very happy for a solution.
Thanks to commenter horristic, I got a link to msdn with some useful information (thanks also to T. Chen over at msdn for helping out with the API). Extracted from T. Chen's post, here's the code that will perform the mapping from RGB to depth coordinate space:
INuiCoordinateMapper* pMapper;
mNuiSensor->NuiGetCoordinateMapper(&pMapper);    // get the sensor's coordinate mapper

pMapper->MapColorFrameToDepthFrame(
    NUI_IMAGE_TYPE_COLOR,                        // type of the color frame
    NUI_IMAGE_RESOLUTION_640x480,                // color frame resolution
    NUI_IMAGE_RESOLUTION_640x480,                // depth frame resolution
    640 * 480,                                   // number of depth pixels
    (NUI_DEPTH_IMAGE_PIXEL*)LockedRect.pBits,    // pixel data of the locked depth frame
    640 * 480,                                   // size of the output array
    depthPoints);                                // receives one depth point per color pixel
Note: the sensor needs to be initialized and a depth frame locked for this to work.
The transformed coordinates can, e.g., be queried as follows:
/// transform RGB coordinate point to a depth coordinate point
cv::Point TransformRGBtoDepthCoords(cv::Point rgb_coords, NUI_DEPTH_IMAGE_POINT * depthPoints)
{
    long index = rgb_coords.y * 640 + rgb_coords.x;
    NUI_DEPTH_IMAGE_POINT depthPointAtIndex = depthPoints[index];
    return cv::Point(depthPointAtIndex.x, depthPointAtIndex.y);
}
As far as I can tell, MapColorFrameToDepthFrame effectively runs the co-ordinate system conversion on every pixel of your RGB image, storing the depth image coordinates resulting from the conversion and the resultant depth value in the output NUI_DEPTH_IMAGE_POINT array. The definition of that structure is here: http://msdn.microsoft.com/en-us/library/nuiimagecamera.nui_depth_image_point.aspx
Possibly this is overkill for your needs however, and I've no idea how fast that method is. XBOX Kinect developers have a very fast implementation of that function that runs on the GPU at frame rate, Windows developers might not be quite so lucky!

Accessing the pixel values for a selected ROI in OpenCV

I am working on an image inpainting on video project: I select a portion of the image on screen (an ROI) as a rectangle and inpaint that portion of the image. Now I want to save the five previous frames from the live video (that I can do) and store the pixel values of that ROI from the five frames/images in five different arrays. I will use those arrays to generate the background and remove the foreground object.
Does anyone know how I can save the pixel values in an array for the selected area?
Thanks in advance.
The C++ interface of OpenCV uses cv::Mat for storing image pixels.
The following code shows how to declare a matrix B "pointing" to a ROI of matrix A.
Matrices are images. The ROI is a rectangle (x=0, y=0, width=100, height=100). Use an OpenCV highgui function to save your image.
cv::Mat A(640, 480, CV_8UC3);     // 640x480 image with 3 channels of 8-bit values
cv::Rect rect(0, 0, 100, 100);    // x, y, width, height
cv::Mat B = A(rect);              // B references A's data: no pixels are copied
cv::imwrite("my_roi.png", B);
If you need to read frames from a video, use a cv::VideoCapture cap and cap >> frame to grab and retrieve each image as a cv::Mat. If you want to jump to a different position in your video file, use cv::VideoCapture::set with the CV_CAP_PROP_POS_MSEC property; read the manual.
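Here is a minimal sketch of the five-frame buffer the question asks about, assuming rect is the selected ROI: clone() is needed because frame(rect) only references the frame's data, and a re-used frame would otherwise overwrite the saved pixels.

#include <opencv2/opencv.hpp>
#include <deque>

int main()
{
    cv::VideoCapture cap(0);                // live video from device 0
    cv::Rect rect(0, 0, 100, 100);          // the selected ROI
    std::deque<cv::Mat> roiHistory;         // ROI pixel values of the last five frames

    cv::Mat frame;
    while (cap.read(frame))
    {
        roiHistory.push_back(frame(rect).clone());  // deep copy of the ROI pixels
        if (roiHistory.size() > 5)
            roiHistory.pop_front();                 // keep only the five most recent

        // roiHistory now holds up to five arrays of ROI pixels,
        // e.g. for median-based background estimation.
    }
    return 0;
}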
