The image http://www.barre.nom.fr/medical/samples/files/MR-MONO2-16-head.gz on http://www.barre.nom.fr/medical/samples/ is not converting to other image formats. I tried the following commands (after extracting the DICOM file):
dcm2pnm --write-png MR-MONO2-16-head out.png
dcm2pnm +obr MR-MONO2-16-head out.bmp
dcm2pnm MR-MONO2-16-head out.pnm
It also did not work with dcmj2pnm and dcml2pnm. All of them just produce a gray box. The image itself is fine and is read correctly by proper DICOM viewer software. Where is the problem and how can it be solved?
The problem is that no windowing settings are present in the header. You need to instruct dcm2pnm to compute a window from the pixel data, e.g. with the min-max algorithm (+Wm) or from the histogram (+Wh), or to specify explicit windowing values to apply.
dcm2pnm +Wm +obr MR-MONO2-16-head MR-MONO2-16-head.bmp
yields a bitmap image that looks fine to me.
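If you would rather set an explicit window than have dcm2pnm compute one, the +Ww option takes a window center and width; the values below are only placeholders that you would tune for this particular series:
dcm2pnm +Ww 300 600 +obr MR-MONO2-16-head out.bmp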
We are currently converting TIFF files to PDF using GraphicsMagick. The TIFF is coming from an eFAX and has a (pixel) resolution of 1728x2200.
If you do the conversion with tiff2pdf, or just open it in Preview and export it to PDF, the file is generated with a MediaBox of 612x792 points, which is what is expected.
However, GraphicsMagick generates a MediaBox of 1728x4400 and a CropBox of 610x792. It all looks fine when you open it in a PDF viewer, because the viewer uses the CropBox, but if you then feed it to Ghostscript you don't get the image on the full page, only as a small square inside the document.
The lazy solutions would be to switch to tiff2pdf or to add -dUseCropBox to our Ghostscript command, but I'd like to know which GraphicsMagick option should be used to produce the PDF with the correct MediaBox. It's as if GraphicsMagick doesn't understand that the resolution is in pixels and not in points. I hope somebody has insights.
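For reference, the conversion is essentially a plain call of this form (file names are placeholders):
gm convert fax.tif fax.pdf
I suspect the units and density need to be made explicit somehow, along the lines of the sketch below, where 204x196 is only a guess at the eFAX scan density:
gm convert -units PixelsPerInch -density 204x196 fax.tif fax.pdf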
I am using OpenCV 3.0 and whenever I read an image and write it back the result is a washed-out image.
code:
cv::Mat img = cv::imread("dir/frogImage.jpg",-1);
cv::imwrite("dir/result.jpg",img);
Does anyone know what's causing this?
Original vs. result: (screenshots not included here)
You can try to increase the compression quality parameter, as shown in the OpenCV documentation of cv::imwrite:
cv::Mat img = cv::imread("dir/frogImage.jpg", -1);
std::vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_JPEG_QUALITY); // JPEG quality flag
compression_params.push_back(100);                     // quality value, 0-100 (higher = better)
cv::imwrite("dir/result.jpg", img, compression_params);
If you don't specify the compression quality manually, a quality of 95% is applied.
But: 1. you don't know what JPEG compression quality your original image had (so you may only be increasing the file size), and 2. it will (afaik) still introduce additional minor artifacts, because it is, after all, a lossy compression method.
UPDATE: your problem does not seem to be caused by compression artifacts but by the image using the Adobe RGB (1998) color profile. OpenCV interprets the color values as they are, whereas it would have to map them into the "real" RGB (i.e. sRGB) color space. Browsers and some image viewers apply the color profile correctly, while others don't (e.g. IrfanView). I used GIMP to verify: on opening, GIMP lets you decide how to interpret the color values according to the profile, giving you either your desired image or your "washed out" one.
OpenCV definitely doesn't care about such things, since it's not a photo editing library, so the color profile is handled neither on reading nor on writing.
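If you need the file itself corrected rather than just displayed correctly, one workaround outside OpenCV is to convert it to sRGB first, for example with ImageMagick's -profile option. This is only a sketch: it assumes the JPEG has its Adobe RGB profile embedded and that an sRGB ICC file exists at the path shown, which will vary by system.
convert frogImage.jpg -profile /usr/share/color/icc/sRGB.icc frogImage_srgb.jpg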
This is because you are saving the image as JPG. When doing this, OpenCV compresses the image.
Try saving it as PNG or BMP instead and there will be no difference.
However, the IMPORTANT QUESTION: I am loading the image as JPG and saving it as JPG, so how can there be a difference?!
Yes, this is because there are many different, non-identical compression/decompression implementations for JPG.
If you want to get into the details, see this question:
Reading jpg file in OpenCV vs C# Bitmap
EDIT:
You can see what I mean exactly here:
// Start from a lossless BMP so both JPG encoders see the same input.
auto bmp(cv::imread("c:/Testing/stack.bmp"));
// Encode it to JPG with OpenCV, then read the result back.
cv::imwrite("c:/Testing/stack_OpenCV.jpg", bmp);
auto jpg_opencv(cv::imread("c:/Testing/stack_OpenCV.jpg"));
// stack_mspaint.jpg is (per its name) the same image saved as JPG by mspaint;
// re-encode it with OpenCV and read that back as well.
auto jpg_mspaint(cv::imread("c:/Testing/stack_mspaint.jpg"));
cv::imwrite("c:/Testing/stack_mspaint_opencv.jpg", jpg_mspaint);
jpg_mspaint = cv::imread("c:/Testing/stack_mspaint_opencv.jpg");
// Mean absolute per-channel difference between the two encodings.
cv::Mat jpg_diff;
cv::absdiff(jpg_mspaint, jpg_opencv, jpg_diff);
std::cout << cv::mean(jpg_diff);
The Result:
[0.576938, 0.466718, 0.495106, 0]
As @Micha commented:
cv::Mat img = cv::imread("dir/frogImage.jpg",-1);
cv::imwrite("dir/result.bmp",img);
I was always annoyed when mspaint.exe did the same to JPEG images. Especially for screenshots... it ruined them every time.
I am using TCPDF to create PDF files converted from HTML input using its writeHTML() function. However, images within the PDF have poor quality, while the original images have high quality (as expected). The images are in PNG format. I already tried to use SetJPEGQuality(100), but that had no effect.
What is causing this?
Try using this:
$pdf->setImageScale(1.53);
http://sourceforge.net/projects/tcpdf/forums/forum/435311/topic/4831671
When using HTML to generate your PDFs, you need to manually calculate the image dimensions by dividing the original width and height by 1.53 and setting the results as width/height attributes.
For example, an image with dimensions of 200x100 pixels will become:
<img src="image.jpg" width="131" height="65" />
This is a nasty workaround and doesn't completely remove the blur, but the result is much better than without any scaling.
Try converting your image to JPG/JPEG first. Until now I haven't had any problem converting images with TCPDF. I think TCPDF is powerful, because it can handle Arabic text too. I have tried rendering an Arabic font with FPDF and it still fails.
A small follow-up: I had the same quality problem and I solved it.
When you save your picture, save it in 8-bit instead of 24-bit and you will see "beautiful anti-aliasing".
Here is an image:
This image is a simple black-to-transparent gradient saved in full RGBA PNG.
Here is the same image, converted to indexed-alpha PNG by GIMP (Photoshop produces the same result)
As you can see, the gradient is now half-opaque, half-transparent.
Here is the same image again, only this time it was converted to indexed-alpha PNG by a PHP script I wrote:
So my question is: Why are GIMP and Photoshop unable to support partial transparency in indexed images, when the PHP script clearly shows that such an image can be created with no problems?
Is there anything "wrong" with an image whose palette contains alpha information?
A more programming-related question: Does this transparency in the last image work in Internet Explorer 6?
I've finally found the actual answer: there is a metadata entry (the PNG tRNS chunk) that allows you to define the alpha value of each colour in the colour table. Most graphics programs don't make use of it, but it does exist and can be used, in particular by GD.
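If you want to double-check that a particular file really carries per-colour alpha, a chunk lister such as pngcheck (assuming it is installed; the file name here is just an example) will show a tRNS entry for an indexed-alpha PNG:
pngcheck -v gradient-indexed.png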
Another option besides Fireworks is pngquant, a command-line application that will convert an RGBA PNG into an indexed PNG with transparency.
I found this post which talks some more about how to use it.
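For example, a call like the one below should quantize a full RGBA PNG down to a 256-colour palette while keeping per-colour alpha; by default pngquant writes the result next to the original, and the -fs8.png suffix shown is an assumption about the version in use:
pngquant 256 gradient.png
# result is written as gradient-fs8.png; the original file is left untouched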
IE6 and earlier on Windows do not support variable-transparency PNGs without annoying workarounds. An indexed PNG will only show the fully opaque parts, which usually works pretty well. A drop shadow would disappear, but the opaque parts of the logo or icon would continue to show.
This page has a better explanation and instructions with more png compression and quantization tools: http://calendar.perfplanet.com/2010/png-that-works/
For the record, PNG does not literally support indexed images with an alpha channel. What is really happening is that PNG allows you to add additional colors to the color table (i.e. index) with alpha values in those colors... not a complete alpha channel. FWIW...
Yeah, I know what you mean. Fireworks is the only image editing program that I know of that can create and edit PNG8+alpha without problems. I wish more paint programs supported this format, because Fireworks is expensive!
I found a way in GIMP to create or convert an image with reduced color palette and alpha channel.
The trick is to add a mask to the layer.
Full steps to reproduce:
Have your image in one layer
Add a mask to the layer. Select Transfer layer's alpha channel.
Convert to Indexed (Image -> Mode -> Indexed...)
Save as PNG
Now your image has reduced colors and reduced size, but it keeps your smooth transparency.
I'm working on a Sidebar Gadget and cannot get my JPEGs to show up (PNGs work). When I try to open the file by itself in IE8 it doesn't work. Firefox, of course, can open it fine.
JPEG Details:
Dimensions: 1080x900
180 dpi
Bit depth 24
Color representation: uncalibrated
I've found some things talking about the images being compressed incorrectly (?) but I haven't been able to get it working...
Any clues?
IE8 drops support for CMYK JPEGs and renders them as the infamous red X without so much as a warning.
If you have ImageMagick:
identify -verbose image.jpg
will show you the image colorspace. If it's CMYK, you can convert to RGB with:
convert broken.jpg -colorspace RGB fixed.jpg
If you need to do CMYK-to-RGB conversion on a whole batch of JPEG images, this command may be helpful to you:
for i in *.jpg; do convert "$i" -colorspace RGB "$i"; done
PS: If you'd like to see what is going on, just add -verbose:
for i in *.jpg; do convert "$i" -colorspace RGB -verbose "$i"; done
I had a similar issue with IE8 not displaying two JPEG images. FF, Safari, Chrome all displayed them without complaint but IE acted as if the files were not there. I have no idea what was going on, but a quick image conversion to gif or png fixed the problem. Just another in a long line of confirmations that IE sucks.
I had similar problems with existing images that would not show up in IE8.
The problem is, as converter42 says, CMYK images.
Convert them to the RGB colorspace and all is good.
The PNG solution is not the best, because PNG files can be MUCH larger than JPGs.
If you are using Photoshop to create the JPGs, try the following:
Open the file and go to the 'Image' menu
Go to Mode
Select RGB
Save and upload to server.
This should work.
Why are you dealing with the image at 180 dpi and not the 72dpi screen resolution? At screen resolution the image will be roughly double that size. Still, the size is manageable for any browser.
When creating a gadget, you should be using PNGs for all the elements of the gadget. Are you having issues displaying JPEG photos?
Have you looked for the yellow bar at the top of IE that blocks certain suspicious content from being loaded (popups, activex, javascript, etc.)? If it appears, try telling it to "allow".
Lastly, what are you using to compress your images to JPEG?
EDIT: If you want to do batch conversion, use the batch converter in Photoshop, or use the Actions panel to record the conversion process for a single image and then replay the action on an entire folder. Additionally, you can save this action as a "droplet", a small application containing the action that you can drop an image or folder onto.
Alternatively, if you don't feel like learning Actions, XnView is an excellent image viewer and converter that supports something like 160 different image formats and can batch convert and batch rename huge lists of files.
I fixed this issue by opening the CMYK JPEG file in Windows Paint and then saving as a JPEG, which Paint encodes as RGB by default. Not a great solution because I'm sure that Paint's converter is not as robust as Photoshop's, but this can be a quick fix if the job needs to be done now and there's no access to the tools above.