QGIS: How do I set the spatial resolution of a raster?

I have some files that started out as .hdf files, and I have converted them to TIFFs after some destriping. These images have no spatial reference and have defaulted to 1 pixel = 1 degree, which comes out to 111,319.419 meters per pixel. Can I change the pixel size to something more manageable?
This imagery is non-georeferenced hyperspectral imagery. I can calculate the true pixel size given the elevation and sensor field of view (which I have).
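No answer is included for this question here, but as a hedged sketch: once the true pixel size has been computed from the elevation and sensor field of view, the geotransform of the converted TIFF can be overwritten in place with GDAL's Python bindings. The file name, origin, and 30 m pixel size below are placeholders:

```python
from osgeo import gdal

# Hypothetical values: replace with the pixel size computed from
# sensor elevation and field of view, and a real origin if one is known.
pixel_size = 30.0            # metres per pixel (placeholder)
origin_x, origin_y = 0.0, 0.0

ds = gdal.Open("destriped_scene.tif", gdal.GA_Update)
# Geotransform layout: (origin x, pixel width, 0, origin y, 0, -pixel height)
ds.SetGeoTransform((origin_x, pixel_size, 0.0, origin_y, 0.0, -pixel_size))
ds.FlushCache()
ds = None  # close the dataset and write the change to disk
```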

Related

Get real RAW values from pixels using OpenCV or any other software

In my understanding of spec sheets for digital cameras, each output color pixel is made out of 4 real pixels on the CCD. However, when reading from a camera, for example using OpenCV, one can get as many as NxMx3 pixels. The two green pixels get averaged.
From what I understood, OpenCV lets you transform RGB images to grayscale, but I couldn't find a way of getting the raw values from the CCD. Of course, it could be that there is a lower-level limitation (i.e. the transformation to color space happens in the electronics and not on the computer), or that there is some interpolation and hence there are in reality NxM pixels and not NxMx4 pixels in a camera.
Is there any way of getting RAW data from a camera with OpenCV, or is there any information stored in RAW files acquired with commercial cameras?
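OpenCV does not read a camera's RAW pipeline directly, but if the Bayer mosaic can be obtained some other way (for example from a RAW file via the rawpy library, an assumption here), OpenCV can demosaic it. The file name and Bayer pattern below are placeholders:

```python
import cv2
import rawpy  # wrapper around LibRaw; assumed to be installed

# Read a RAW file and grab the undemosaiced Bayer data (one value per photosite).
raw = rawpy.imread("photo.dng")          # placeholder file name
bayer = raw.raw_image_visible.copy()     # 2D array of raw sensor counts (uint16)

# Demosaic with OpenCV; the actual pattern (RG/GR/GB/BG) depends on the sensor.
rgb = cv2.cvtColor(bayer, cv2.COLOR_BayerRG2BGR)

print(bayer.shape, bayer.dtype, rgb.shape)
```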

The fastest algorithm for splitting an image into footprint and nodata values

I have a random satellite image that can be divided into 2 classes:
1) nodata values (all pixels share a single value, which varies randomly from image to image)
2) footprint (all pixel values are random)
Together, the nodata area and the footprint fill the image's bounding box.
What is the fastest algorithm for dividing a random satellite image into these 2 classes?
UPDATE:
Are the nodata areas always at the border of the image?
Nodata values cannot appear inside the footprint, and the nodata area may be absent entirely.
Are nodata values always black?
No, the value may vary from image to image, but it is always the same within a single image.
Does this nodata value/color appear within the footprint?
Most of the images are grayscale and may be in 8- or 16-bit data formats, but I need a general algorithm; a case-specific algorithm is not what I want.
UPDATE 2:
My current approach is:
1) Take every pixel value that lies on the bounding box border
2) Take the most frequent value and set it as the nodata value
3) Reclassify the image into 2 classes: pixels equal to the nodata value go to the nodata class, all other pixels get value 1 (footprint class)
4) Convert raster pixels with value 1 into vector format
For big images it takes more than 5 minutes to get the vector border of the footprint.
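As a rough illustration only (not the author's code), steps 1-3 could look like this in NumPy, assuming the image is already loaded as a 2D array:

```python
import numpy as np

def footprint_mask(img: np.ndarray) -> np.ndarray:
    """Return a boolean mask that is True for footprint pixels."""
    # 1) Collect every pixel value lying on the border of the image.
    border = np.concatenate([img[0, :], img[-1, :], img[:, 0], img[:, -1]])
    # 2) The most frequent border value is taken as the nodata value.
    values, counts = np.unique(border, return_counts=True)
    nodata = values[np.argmax(counts)]
    # 3) Reclassify: nodata pixels -> False, footprint pixels -> True.
    return img != nodata
```

Step 4 (vectorising the mask) is usually the slow part on large rasters, which matches the timing mentioned above.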
A simple way for you would be to multiply the pixel intensities. From the images you uploaded, the nodata values are essentially of 0 intensity. Instead of going for complex methods, simply multiply the image intensities by 1000.
I used OpenCV and could segment out the regions in under 4 lines of code.
Here's an example -
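The answer's original example isn't reproduced here; as a hedged sketch of that kind of OpenCV segmentation, assuming the nodata value is (near) zero as stated above and using a placeholder file name:

```python
import cv2
import numpy as np

img = cv2.imread("scene.tif", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# Scale intensities so that any non-zero footprint pixel saturates,
# then threshold: nodata (0) stays 0, footprint becomes 255.
boosted = np.clip(img.astype(np.int32) * 1000, 0, 255).astype(np.uint8)
_, mask = cv2.threshold(boosted, 0, 255, cv2.THRESH_BINARY)

# Extract the footprint outline as contours (vector-like borders).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
```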

Concept of Image pixels, resolution and magnification of image

I am trying to understand the relation between image pixels, their size, resolution, and how changing the resolution of an image results in the same image size but a slight amount of blurriness. I referred to these links but some doubts still remain:
1) So, what is the "commonly" accepted definition of "resolution"?
2) How(and Why) does changing the resolution of an image (say, a Desktop wallpaper) result in the same image size but a slight amount of blurriness? In this case, what definition of "resolution", "pixels" and "image size" are we talking about?
Thanks in advance!
The image that is displayed on the screen is composed of thousands (or millions) of small dots; these are called pixels.
The number of pixels that can be displayed on the screen is referred to as the resolution of the image.
Image resolution is the detail an image holds.
As the resolution goes up, the image becomes more clear. It becomes sharper, more defined, and more detailed as well.
In addition to image size, the quality of the image can also be manipulated. Here we use the word "compression". An uncompressed image is saved in a file format that doesn't compress the pixels in the image at all. Formats such as BMP or TIF files do not compress the image.
If you want to reduce the "file size" (number of megabytes required to save the image), you can choose to store your image as a JPG file and choose the amount of compression you want before saving the image.
These terms are explained in a video
Put in simple words, resolution is simply the number of pixels (picture cells) along the horizontal/vertical axis. The more pixels along an axis, the better the image. The formal definition is easily found on the web.
Changing the resolution simply means changing the number of pixels in the image. Higher resolution implies more pixels and hence a more detailed image. Reducing the resolution of an image decreases the image size, following either lossy or lossless compression algorithms; this further reduces the amount of information in the image, leading to blurriness or jagged edges. Image size varies according to the degree of compression only. 'Pixels' and 'resolution' remain as explained above.
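To make the blurriness point concrete, a minimal sketch using OpenCV (the file name is a placeholder): reducing the resolution discards pixels, and displaying the result at the original size forces interpolation, which is where the softness comes from.

```python
import cv2

img = cv2.imread("wallpaper.png")            # placeholder file name
h, w = img.shape[:2]

# Drop the pixel count to a quarter in each dimension...
small = cv2.resize(img, (w // 4, h // 4), interpolation=cv2.INTER_AREA)
# ...then show it at the original size again: the missing detail must be
# interpolated, so the result looks blurry even though the size is unchanged.
blurry = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)

cv2.imwrite("wallpaper_blurry.png", blurry)
```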

How to post-process a thresholded binary image to extract the exact number of pixels

I have a binary image that I got after manually thresholding a grayscale medical image. During thresholding, I noticed some overlapping regions in the histogram that contained pixels which could be of either type (glandular tissue, which I placed in the above-threshold range, or fat tissue, which I placed in the below-threshold range).
How can I post-process the binary image to get the exact number of pixels of glandular tissue only, discarding the effect of wrongly thresholded pixels in the overlapping region? Please help.
There is no difficulty maintaining one counter per pixel type and incrementing it every time you classify a pixel.
If you have more than two classes, obviously a binary image cannot store this information and you need to count before binarization.
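A minimal sketch of that counting idea in NumPy/OpenCV, with a placeholder file name and threshold value:

```python
import cv2
import numpy as np

gray = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)   # placeholder file
threshold = 128                                             # placeholder value

glandular = gray >= threshold        # above-threshold class
fat = ~glandular                     # below-threshold class

print("glandular pixels:", np.count_nonzero(glandular))
print("fat pixels:", np.count_nonzero(fat))
```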

Transforming raw pixels using rescale slope and rescale intercept in DICOM

I used the solution in this post, Window width and center calculation of Dicom Image, to transform the raw pixel values. It works well for most images, but I faced a problem with some images: images having a pixel value of "24", rescale slope "1.0", and rescale intercept "-1024".
When I applied the solution mentioned above, I got a negative new pixel value (-1000).
I can't find this new pixel value in the lookup table created using window level and window width, because the lookup table has only positive values (0 to 65536). Please help me solve this problem.
You are probably dealing with CT images. The RescaleIntercept tag for CTs is usually set to -1024. The -1000 value you obtain makes perfect sense: it corresponds to air in Hounsfield units (as Anders said). Now, if you want to visualize the image, you have to apply a transfer function that maps the HU scale to RGB, for instance.
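A minimal sketch of that pipeline in NumPy, independent of any particular DICOM library; the slope and intercept are the ones from the question, while the window center/width are assumed typical soft-tissue values:

```python
import numpy as np

def to_display(raw, slope=1.0, intercept=-1024.0, center=40.0, width=400.0):
    """Map stored pixel values to 8-bit display values via HU and window/level."""
    # Modality LUT: stored value -> Hounsfield units (can be negative, e.g. air = -1000).
    hu = raw.astype(np.float64) * slope + intercept
    # VOI LUT (window/level): clip the HU range of interest and stretch it to 0..255.
    lo, hi = center - width / 2.0, center + width / 2.0
    return np.clip((hu - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)

# Example: stored value 24 with slope 1 and intercept -1024 gives -1000 HU (air),
# which maps to 0 (black) under this assumed soft-tissue window.
print(to_display(np.array([24, 1064, 2024])))
```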
