Who stores the image in zeros and ones?

I wanted to know what stores images, videos, or audio in zeros and ones. I know that an image is stored as zeros and ones by recording the color of each pixel in binary form, and that similar things happen for other types of data. But my question is: if, for example, I create an image using an image-editing application and save it on my computer, what or who is storing the color of each pixel in binary form?

There are two types of images:
acquired from a device (camera, scanner), which measures the amount of light in the RGB channels for every pixel and converts it to a binary value; writing the values to memory is handled by the device driver;
synthesized by pure computation from a geometric model with surface and lighting characteristics, so that every pixel value is produced "out of nothing"; this is done by a rendering program.
After the image has been written to RAM, it can be transferred to disk for long-term storage.
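In both cases, what finally reaches the disk is just a sequence of bytes. A minimal Python sketch of the synthesis case (the raw .rgb layout and file name are invented for illustration; real applications write a standard format such as PNG or BMP, with a header describing the dimensions):

    width, height = 4, 4
    pixels = bytearray()
    for y in range(height):
        for x in range(width):
            r, g, b = 255, 0, 0           # decide a colour for this pixel
            pixels += bytes([r, g, b])    # each channel becomes one byte (8 bits)
    with open("red.rgb", "wb") as f:      # the OS and filesystem then store
        f.write(pixels)                   # these bytes as zeros and ones on disk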

Related

The relationship of transformation matrix in image registration and image scale

I am currently doing image registration using the 'Registration estimator' application.
Basically, the application allows the user to register two images using multiple methods, and the output includes the transformation matrix.
The question is, now I want to register two large images; their sizes are 63744×36064 and 64704×35072. It's almost impossible to register the two images directly since they are too large.
The method I use is to first obtain scaled-down images, register those to derive a transformation matrix, and then apply that matrix to the original images.
However, I found that even for the same image, different transformation matrices are obtained at different levels.
For example, the transformation matrix for the images at 1/16 scale, 3984×2254 (63744/16 × 36064/16) and 4022×2192, is different from the one for the images at 1/32 scale, 1992×1127 and 2022×1096.
In that case, I am confused about the relationship between image size and the transformation matrix. Could anyone give me a hint so that I can precisely register the two original images based on the transformation matrix I obtained for the images at a lower level (smaller size)?
Downsampling an image has a direct effect on the translation part of the matrix. Suppose, for example, that there is a 2-pixel translation in the x direction; downsampling by a factor of 2 changes it to 1 pixel. While it's easy to compensate for this effect when registering the original images, you should avoid downsampling if memory is the constraint, since you may lose invaluable keypoints used for robust registration. Instead, you can slice your images up into several sub-images, extract the features in each sub-image, then combine the features and match them.
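If you do register at a lower level anyway, the compensation amounts to rescaling the translation column of the homogeneous matrix while leaving the linear part alone. A sketch with NumPy (assuming 3×3 matrices acting on column vectors of pixel coordinates; conventions vary between tools, so check which side your tool multiplies on):

    import numpy as np

    def upscale_transform(T_small, s):
        # Map a transform estimated on images downsampled by factor s back to
        # full-resolution coordinates: T_full = S @ T_small @ inv(S), which
        # multiplies only the translation terms by s.
        S = np.diag([s, s, 1.0])
        return S @ T_small @ np.linalg.inv(S)

    # A 1-pixel x-shift found at 1/2 resolution becomes a 2-pixel shift:
    T = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    print(upscale_transform(T, 2.0))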

Is there an efficient image format to display boolean values?

If there is a two-dimensional array of booleans, would there be an efficient way to represent it as an image and save it using minimal space? So just one bit for every pixel, with each pixel either black or white?
You could put a bitmap header on the buffer and display it that way. You won't save any memory, but you will be able to view it. If you are looking to save space, there are lots of lossless encoding techniques; Huffman coding and LZW are a couple of them, and some of those methods get grouped into formats like zip, bzip2, gzip, deflate, etc.
My guess is no, because in a typical image the color of each pixel is represented by at least 8 bits, so you are in effect using an 8-bit byte to store a value that can be represented by one solitary bit (0 or 1).
In addition, there are other attributes that describe each pixel in an image, such as the alpha (opacity) channel.
So, in short, although it may be visually pleasing to use images to store binary data, it would in fact use far more storage space.
Most programming languages have native support for binary data, which provides much more efficient storage.
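For instance, NumPy can pack a boolean array eight pixels to the byte before any further compression is applied (a sketch; the file name and dimensions are placeholders, and you would still need to record the width and height somewhere):

    import numpy as np

    mask = np.random.rand(64, 64) > 0.5        # a 2-D boolean array
    np.packbits(mask).tofile("mask.bin")       # 64*64/8 = 512 bytes on disk

    bits = np.unpackbits(np.fromfile("mask.bin", dtype=np.uint8))
    restored = bits[:64 * 64].reshape(64, 64).astype(bool)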

Random access to a huge image?

I'm looking for a way to store a huge image in a way that allows random access to small regions without the need to load the whole image into memory. The focus is on speed and on a low memory footprint.
Let's say the image is 50k×50k 32-bit pixels, amounting to ~9 GB (currently it is stored as a single TIFF file). One thing I found looks quite promising: HDF5 allows me to store and randomly access (integer) arrays of arbitrary size and dimension. I could put my image's pixels into such a database.
What (other) (simpler?) methods for efficient random pixel access are out there?
I would really like to store the image as a whole and avoid creating tiles, for several reasons: the requested region dimensions are not constant or predictable; I want to load only the pixels I really need into memory; and I would like to avoid the processing effort involved in tiling the image.
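For reference, the HDF5 route is only a few lines with h5py, and chunking is what makes arbitrary region reads cheap (a sketch; the file name, dataset name, and 256×256 chunk size are arbitrary choices, not requirements):

    import h5py

    # One-time conversion: create a chunked dataset and fill it from the TIFF.
    with h5py.File("image.h5", "w") as f:
        dset = f.create_dataset("img", shape=(50000, 50000),
                                dtype="uint32", chunks=(256, 256))
        # dset[y0:y1, x0:x1] = ...   # copy the source image in pieces here

    # Later: only the chunks overlapping the requested slice are read.
    with h5py.File("image.h5", "r") as f:
        region = f["img"][10200:10700, 31000:31500]   # ~1 MB read, not ~9 GB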

How is HDR data stored?

I am wondering what data structure is behind storing images with HDR data. I understand how regular (RGBA) images and cubemaps are stored. I doubt it's as simple as storing multiple images at different exposures inside the same file.
You've probably moved on long ago, but I thought it worth posting references for anyone else who happened upon this question.
Here is an old reference for the Radiance .pic (now .hdr) file format. The useful info starts at the bottom of page 29.
http://radsite.lbl.gov/radiance/refer/filefmts.pdf
excerpt:
The basic idea is to store a 1-byte mantissa for each of three
primaries, and a common 1-byte exponent. The accuracy of these values
will be on the order of 1% (+/-1 in 200) over a dynamic range from
10^-38 to 10^38.
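To illustrate the excerpt, here is a sketch of that shared-exponent encoding in Python (pixel math only; the actual file format also run-length encodes scanlines, per the reference above):

    import math

    def rgbe_encode(r, g, b):
        # Share one exponent across the three primaries.
        v = max(r, g, b)
        if v < 1e-38:
            return (0, 0, 0, 0)
        mant, exp = math.frexp(v)              # v = mant * 2**exp, mant in [0.5, 1)
        scale = mant * 256.0 / v
        return (int(r * scale), int(g * scale), int(b * scale), exp + 128)

    def rgbe_decode(rm, gm, bm, e):
        if e == 0:
            return (0.0, 0.0, 0.0)
        f = math.ldexp(1.0, e - (128 + 8))     # 2**(e - 128) / 256
        return (rm * f, gm * f, bm * f)

    print(rgbe_decode(*rgbe_encode(1.0, 0.5, 0.25)))   # ~(1.0, 0.5, 0.25)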
And here is a more recent reference for JPEG HDR format: http://www.anyhere.com/gward/papers/cic05.pdf
It's generally a matter of increasing the range of values (in an HSV sense) representable, so you can use e.g. RGB[A] where each element is a 16-bit int, 32-bit int, float, double etc. instead of a JPEG-type-quality 8-bit int. There's a trade-off between increasing the range represented, retaining fine gradations within that range, and whether some particular intensity levels are given priority via some non-linearity in the mapping (e.g. storing a log of the value).
The raw file from the camera normally stores the 12-14 bit values from the Bayer mask, so effectively a greyscale image. These are sometimes compressed losslessly (Canon, Nikon) or stored as 16-bit values (Olympus). The header also contains the white balance and gain calibrations for the red, green, and blue masked pixels so you can generate a color image.
Once you have a color image you can store it however you want; normally 16-bit RGB is the easiest.
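As a toy illustration of that pipeline, here is a half-resolution conversion of an RGGB Bayer mosaic to 16-bit RGB (the white-balance gains and the 12-bit range are made-up values; real raw converters demosaic at full resolution and take the calibration data from the file header):

    import numpy as np

    def simple_demosaic(bayer, wb=(2.0, 1.0, 1.5)):
        # Average the two greens, apply per-channel gains,
        # and rescale 12-bit sensor values to 16-bit output.
        r = bayer[0::2, 0::2] * wb[0]
        g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0 * wb[1]
        b = bayer[1::2, 1::2] * wb[2]
        rgb = np.stack([r, g, b], axis=-1)
        return np.clip(rgb / 4095.0 * 65535.0, 0, 65535).astype(np.uint16)

    raw = np.random.randint(0, 4096, (8, 8))   # fake 12-bit sensor data
    print(simple_demosaic(raw).shape)          # (4, 4, 3)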
Here is some information on the Radiance file format, used for HDR images. It encodes floating-point colour values in 32 bits per pixel using a shared exponent.
First, I am not sure there is a public format for storing multiple images at different exposures in one file, because such usage is rare. Those multiple exposures are used as one kind of HDR source, but they are not themselves HDR; they are just normal LDR (L for low) or SDR (S for standard) images, encoded like the JPEGs from digital cameras.
It is more common to store the result in an HDR format, and the key point, as everyone has mentioned, is floating-point values.
There are some HDR formats:
OpenEXR
TIF
Radiance
...
You can get more info from Wikipedia.

MySQL Algorithm for Determining Closest Colour Match

I'm attempting to create a true mosaic application. At the moment I have one mosaic image, i.e. the one the mosaic is based on, and about 4000 images from my iPhoto library that act as the image library. I have already done my research and analysed the mosaic image: I've converted it into 64×64 slices, each of 8 pixels, and calculated the average colour for each slice, ascertaining the r, g, b and brightness values (Luminance (perceived option 1) = 0.299*R + 0.587*G + 0.114*B). I have done the same for each of the image library photos.
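(For concreteness, the per-slice analysis can be sketched like this; Python with Pillow stands in for the question's PHP pipeline, and the path is a placeholder:)

    from PIL import Image

    def slice_stats(img, x, y, size=8):
        # Average R, G, B and perceived brightness of one size-by-size slice.
        box = img.crop((x, y, x + size, y + size)).convert("RGB")
        px = list(box.getdata())
        n = len(px)
        r = sum(p[0] for p in px) / n
        g = sum(p[1] for p in px) / n
        b = sum(p[2] for p in px) / n
        return r, g, b, 0.299 * r + 0.587 * g + 0.114 * b

    img = Image.open("mosaic.png")   # placeholder path
    print(slice_stats(img, 0, 0))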
The mosaic slices table looks like so.
slice_id, slice_image_id, slice_slice_id, slice_image_column, slice_image_row, slice_colour_hex, slice_rgb_red, slice_rgb_blue, slice_rgb_green, slice_rgb_brightness
The image library table looks like so.
upload_id, upload_file, upload_colour_hex, upload_rgb_red, upload_rgb_green, upload_rgb_blue, upload_rgb_brightness
So basically I'm reading the image slices from the slices table into PHP and then pulling out the appropriate images from the library table based on the colour hexes. My trouble is that I've been on this too long (and probably had too many energy drinks), so I'm not concentrating properly: I can't figure out how to pick the nearest colour neighbour when the exact hex code doesn't exist.
Any ideas on the perfect query?
NB: I know pulling out the slices one by one is not ideal; however, the mosaic is only rebuilt periodically, so a sudden burst in the MySQL load doesn't really bother me. That said, if there is a way to pull the images out all at once, that would also be a massive bonus.
Update: brightness comparison screenshots, with and without the brightness term (source: buggedcom.co.uk).
One way to minimize the difference between two colours (in terms of their RGB components) is to minimize the sum of the squared differences of the individual components. Thus you're looking for the entry with the lowest
(targetRed - rowRed)^2 + (targetGreen - rowGreen)^2 + (targetBlue - rowBlue)^2
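In MySQL this becomes a single ORDER BY clause. A sketch of running it from Python (the table name uploads is assumed from the column prefix in the question, and sqlite3 stands in for a MySQL connector, with ? placeholders instead of MySQL's %s):

    import sqlite3

    def closest_upload(conn, r, g, b):
        # Smallest squared RGB distance wins.
        sql = """
            SELECT upload_id,
                   (upload_rgb_red   - ?) * (upload_rgb_red   - ?) +
                   (upload_rgb_green - ?) * (upload_rgb_green - ?) +
                   (upload_rgb_blue  - ?) * (upload_rgb_blue  - ?) AS dist
            FROM uploads
            ORDER BY dist
            LIMIT 1
        """
        return conn.execute(sql, (r, r, g, g, b, b)).fetchone()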
I think that you may be better off using HSL instead of RGB as the colour space. Formulas to compute HSL from RGB are available on the internet (and in the linked Wikipedia article); they may give you what you need to compute the best match.
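If you go that route, note that Python's standard library already ships the conversion (colorsys calls the colour space HLS and expects floats in [0, 1]):

    import colorsys

    r, g, b = 200 / 255, 120 / 255, 40 / 255
    h, l, s = colorsys.rgb_to_hls(r, g, b)   # hue, lightness, saturation
    print(h, s, l)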
