Uncompress a wavelet-compressed image

I have a wavelet-compressed image, but I'm not sure what parameters were used for the compression. Is there a way to decompress this image? I tried using a JPEG 2000 image viewer, but it did not help.
As I understand it, one needs to know which wavelet the image was compressed with in order to proceed, but that information is missing at present. Does this mean the image is effectively encrypted and can't be decoded?

Do you know the data format? If you plot it, can you see the main bands and sub-bands at the various levels of detail?
Once you have the data format, you can simply try a few possible wavelet shapes, starting with the Haar wavelet for simplicity, as in the sketch below. At least you will get a good impression of the image content.
If you don't know the data format, you are probably stuffed.
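If you do recognize the layout, a minimal sketch of that guessing loop with PyWavelets, assuming (purely hypothetically) raw float32 coefficients stored as the four quadrants of a single-level 2D decomposition:

    import numpy as np
    import pywt  # pip install PyWavelets

    # Hypothetical layout: one decomposition level, four equal quadrants
    # (approximation cA, detail bands cH/cV/cD) of raw float32 data.
    arr = np.fromfile("unknown.bin", dtype=np.float32)  # made-up filename
    side = int(np.sqrt(arr.size))
    arr = arr[:side * side].reshape(side, side)
    h = side // 2
    cA, cH = arr[:h, :h], arr[:h, h:]
    cV, cD = arr[h:, :h], arr[h:, h:]

    # Try a few candidate wavelets: a wrong guess reconstructs to noise,
    # a roughly right one gives a recognizable (if imperfect) image.
    for name in ("haar", "db2", "db4", "bior4.4"):
        img = pywt.idwt2((cA, (cH, cV, cD)), name)
        # inspect img here, e.g. with matplotlib's imshow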

Related

Best image file format for book pages

I want to scan book pages and combine the images into a PDF "ebook" (just for me), but the file sizes get really huge. Even .jpg resulted in a PDF of 60 MB+.
Do you have any idea how I can compress it further, i.e. which file format I could choose for this specific purpose? (The book contains pictures and written text.)
Thank you for your help.
I tried saving as .jpg and other file formats like .png, but the files didn't get small enough to be easily handled without losing too much resolution.
Images are expensive things.
Ignoring compression, you're looking at 3 bytes per pixel of data.
If you want to keep images, you could reduce this by converting them to greyscale, which brings it down to 1 byte per pixel (again ignoring compression).
Or you could convert to black and white, which is 1 bit per pixel.
Or, alternatively, you could use OCR to translate your image into actual text which is a much more efficient way of storing books.
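To put numbers on the per-pixel figures above, here's the raw-size arithmetic for a hypothetical 300 dpi A4 page scan (the page size and resolution are my own illustrative assumptions, not from the question):

    # Raw (uncompressed) sizes for one 300 dpi A4 page, ~2480 x 3508 pixels
    w, h = 2480, 3508
    sizes = {
        "RGB (3 bytes/pixel)":      w * h * 3,    # ~26.1 MB
        "greyscale (1 byte/pixel)": w * h,        # ~8.7 MB
        "B/W (1 bit/pixel)":        w * h // 8,   # ~1.1 MB
    }
    for label, size in sizes.items():
        print(f"{label}: {size / 1e6:.1f} MB")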

finding length of object in image

As part of our project, we have to find the dimensions of a given object in a particular image, e.g. the dimensions of a sunken ship underwater. This is totally new to me; a friend told me it's possible in MATLAB. Kindly help me out.
I think you should look into image blobs and edge detection. That's where I would start.
Are you already set on MATLAB? If you can use C#, I would look into the AForge.NET image processing library for C#:
http://www.aforgenet.com/projects/iplab/
I have used AForge before to identify "blobs" in images and perform other image processing operations.
If you have not yet settled on MATLAB, then AForge.NET or Magick.NET from ImageMagick can be tried.
To identify the dimensions of an object in an image, we have to think through the manual process of doing the same. How are we able to recognize a ship in water in an image? How is the object different from the surrounding area?
From that, you may try to identify the ship as a blob and work on the blob. Sometimes you may not be able to isolate the ship as a blob, probably due to noise in the surroundings. Find means to remove that noise, or differentiate the object further from its surroundings by erosion, dilation, or a combination of the two.
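If Python is also an option, here is a rough sketch of that blob/edge approach with OpenCV; the filename, kernel size, and Canny thresholds below are placeholders, not tuned values:

    import cv2
    import numpy as np

    img = cv2.imread("ship.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    blur = cv2.GaussianBlur(img, (5, 5), 0)             # suppress noise first
    edges = cv2.Canny(blur, 50, 150)                    # edge detection

    # Close gaps (dilation then erosion) so the ship forms one blob
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    ship = max(contours, key=cv2.contourArea)  # assume largest blob is the ship
    x, y, w, h = cv2.boundingRect(ship)
    # These are pixel dimensions; you still need a known scale
    # (camera geometry, sonar range, a reference object) to get metres.
    print(f"object spans {w} x {h} pixels")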

Match two images in different formats

I'm working on a software project in which I have to compare a set of 'input' images against another 'source' set of images and find out if there is a match between any of them. The source images cannot be edited/modified in any way; the input images can be scaled/cropped in order to find a match. The images can be in BMP, JPEG, GIF, PNG, or TIFF, of any dimensions.
A constraint: I'm not allowed to use any external libraries. ImageMagick is an exception and can be used.
I intend to use Java/Python. The software is purely command-line based.
I've been reading on SO about some common image-comparison algorithms. I'm planning to take two approaches:
1. Use histograms/buckets to compare the RGB values of the two images.
2. Use SIFT/SURF to find keypoint descriptors, compute the Euclidean distance between them, and output the result based on that distance (sketched below).
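To make this concrete, here is roughly what I have in mind, sketched with OpenCV (which, to be clear, my no-external-libraries constraint would actually rule out; the filenames and match threshold are made up):

    import cv2

    a = cv2.imread("input.png")   # placeholder filenames
    b = cv2.imread("source.png")

    # Approach 1: normalized RGB histograms, compared by correlation
    ha = cv2.calcHist([a], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    hb = cv2.calcHist([b], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    cv2.normalize(ha, ha)
    cv2.normalize(hb, hb)
    hist_score = cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL)

    # Approach 2: SIFT keypoints (opencv-python >= 4.4), brute-force
    # matched by Euclidean (L2) distance
    sift = cv2.SIFT_create()
    ka, da = sift.detectAndCompute(cv2.cvtColor(a, cv2.COLOR_BGR2GRAY), None)
    kb, db = sift.detectAndCompute(cv2.cvtColor(b, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(da, db)
    good = [m for m in matches if m.distance < 200]  # threshold is a guess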
The two images being compared can be in different formats. An intuitive thought is that before analysis/comparison, the two images must be converted to a common format. I reasoned that the image should be converted to the lower-quality format, e.g. if the two input images are BMP and JPEG, convert the BMP to JPEG. This can be thought of as a pre-processing step.
My question:
Is conversion to a common format required? Can two images of different formats be compared? If they have to be converted before comparison, is my assumption of converting from higher quality (BMP) to lower (JPEG) correct? It'd also be helpful if someone could suggest some algorithms for image conversion.
EDIT
A match is said to be found if the pattern image is found in the source image.
Say, for example, the source image consists of a football field with one player. If the pattern image contains the player EXACTLY as he is in the source image, then it's a match.
No, conversion to a common format on disk is not required, and likely not helpful. If you extract feature descriptors from an image (SIFT/SURF, for example), it matters much less how the original images were stored on disk. The feature descriptors should be invariant to small compression artifacts.
A bit more...
Suppose you have a BMP that is an image of object X in your source dataset.
Then, in your input/query dataset, you have another image of object X, but it has been saved as a JPEG.
You have no idea what noise was introduced in the process that produced either of these images. There are lighting differences, atmospheric effects, lens effects, sensor noise, tone mapping, gamut mapping. Some of these vary from image to image, others from camera to camera. All of this happens before the image even gets saved to storage in the camera. Yes, there are also JPEG compression artifacts, but assuming the BMP is the "higher quality" image and then degrading it through JPEG compression will not help. Perhaps the BMP has even gone through JPEG compression before being saved as a BMP.

3D image compression

I have a 3D image and need a method to compress it. The available methods for compressing 2D images are very good, but I could not find any suitable method for 3D. Can anyone help me with this? I am using MATLAB for my work. Thanks in advance for your help and suggestions.
You can treat your 3D image as a video (the third dimension being time).
Then, you can use standard video compression algorithms.
In MATLAB, you can use the VideoWriter class to make compressed video files:
https://www.mathworks.com/help/matlab/ref/videowriterclass.html
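The same trick in Python, using OpenCV's VideoWriter (the volume file and codec here are assumptions for the sketch):

    import cv2
    import numpy as np

    vol = np.load("volume.npy")  # hypothetical 8-bit volume: depth x height x width
    d, h, w = vol.shape

    # Treat the third dimension as time: each slice becomes a video frame
    fourcc = cv2.VideoWriter_fourcc(*"XVID")
    out = cv2.VideoWriter("volume.avi", fourcc, 25.0, (w, h), isColor=False)
    for i in range(d):
        out.write(vol[i].astype(np.uint8))
    out.release()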
If your data is grid-based like a 2D image, you will find that it is very easy to adapt the PNG format to a third dimension.

Self-describing file format for gigapixel images?

In medical imaging, there appear to be two ways of storing huge gigapixel images:
Use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where. Tack on some metadata in some other format.
Use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed. Then abuse various TIFF tags to store metadata in non-standard ways. Also, store tiles with overlapping boundaries that must be individually translated later.
In both cases, the reader must understand the format well enough to understand how to draw things and read the metadata.
Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the problem of metadata?
The main issues are:
Storing images in a way that allows for rapid random access (tiling)
Storing downsampled images for rapid zooming (pyramid)
Handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image)
Storing important metadata, including associated images like a slide's label and thumbnail
Support for lossy storage
What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? These images have similar properties.
It seems like starting with TIFF or BigTIFF and defining a useful subset of tags + XMP metadata might be the way to go. FITS is no good since it is basically for lossless data and doesn't have a very appropriate metadata mechanism.
The problem with TIFF is that it just allows too much flexibility, but a subset of TIFF should be acceptable.
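As a concrete illustration of that TIFF-subset approach, libvips can already emit a tiled, pyramidal, JPEG-compressed BigTIFF in one call (the filename and tile/quality settings below are arbitrary choices, not recommendations):

    import pyvips  # Python binding for libvips

    img = pyvips.Image.new_from_file("slide.png")  # placeholder source image
    # Tiles give rapid random access, the pyramid gives fast zooming,
    # and in-TIFF JPEG compression covers the lossy-storage requirement.
    img.tiffsave(
        "slide_pyramid.tif",
        tile=True, tile_width=256, tile_height=256,
        pyramid=True,
        compression="jpeg", Q=85,
        bigtiff=True,
    )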
The solution may very well be http://ome-xml.org/ and http://ome-xml.org/wiki/OmeTiff.
It looks like DICOM now has support:
ftp://medical.nema.org/MEDICAL/Dicom/Final/sup145_ft.pdf
You probably want FITS.
Arbitrary size
1- to 3-dimensional data
Extensive header
Widely used in astronomy and endorsed by NASA and the IAU
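For what it's worth, writing a FITS file with header metadata takes only a few lines with astropy (the array and keyword below are toy examples):

    import numpy as np
    from astropy.io import fits

    data = np.random.rand(64, 1024, 1024).astype(np.float32)  # toy 3-D volume
    hdu = fits.PrimaryHDU(data)
    hdu.header["OBSERVER"] = "example"  # arbitrary self-describing metadata
    hdu.writeto("volume.fits", overwrite=True)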
I'm a pathologist (and hobbyist programmer), so virtual slides and digital pathology are a huge interest of mine. You may be interested in the OpenSlide project. They have characterized a number of the proprietary formats from the large vendors (Aperio, BioImagene, etc.). Most seem to consist of pyramidally zoomed images (scanned at different microscope objectives, of course): large TIFF files containing multiple tiled TIFFs or compressed (JPEG or JPEG 2000) images.
The industry standard is DICOM Supplement 145; vendor adoption has been sluggish, but inventing yet another format would probably not be helpful.
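Reading one of those vendor files with OpenSlide's Python binding looks roughly like this (the filename and level are placeholders):

    import openslide

    slide = openslide.OpenSlide("scan.svs")  # hypothetical Aperio slide
    print(slide.level_count, slide.level_dimensions)  # the pyramid levels
    region = slide.read_region((0, 0), 2, (512, 512))  # RGBA tile at level 2
    print(slide.properties.get("openslide.vendor"))    # vendor metadata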
PNG might work for you. It can handle large images and metadata, and PNG interlacing (Adam7) gets you down to an n/8 x n/8 downsampled image pretty easily.
I'm not sure whether PNG can do rapid random access, though. It is chunked, but that might not be enough.
You could represent sparse data with the transparency channel.
JPEG2000 might be worth a look, some interesting efforts from National libraries in this space.
