I have a 10,000px by 10,000px JPEG image (Format24bppRgb, about 11 MB), but after I clone it and save it as a BMP in Format32bppArgb, it grows to roughly 500 MB. Any reason why?
let file = new Bitmap("planet.jpg")
let rect = new Rectangle(0, 0, file.Width, file.Height)
let img = file.Clone(rect, PixelFormat.Format32bppArgb)
img.Save("copy.bmp", ImageFormat.Bmp)
I went ahead and ran the following in FSI:
open System.Drawing
open System.Drawing.Imaging
let file = new Bitmap("planet.jpg")
let rect = new Rectangle(0, 0, file.Width, file.Height)
let img = file.Clone(rect, PixelFormat.Format32bppArgb)
img.Save("copy.bmp", ImageFormat.Bmp)
I used a blank 10,000px by 10,000px JPG file clocking in at 1,563,127 bytes according to Windows Explorer. The resulting copy.bmp is 400,000,054 bytes, which is indeed a lot bigger than the input.
As @PaulAbbott said, this is because there is no compression in BMP files. A 10,000 by 10,000 image at 32 bits per pixel is stored as 10,000 * 10,000 * 32 = 3,200,000,000 bits, which is exactly 400,000,000 bytes, or roughly 400 MB - which is around the size you posted. (The extra 54 bytes in my copy.bmp are just the BMP file header and info header.)
If the file you got really was 500 MB and not around 400 MB, could you try hosting it somewhere? I'd like to take a look at it and see if I'm missing something here.
In TensorFlow, decode_png and decode_jpeg are used to decode data read in from a source (file, URL, …).
I have medical-type images in an 8192x8192 grayscale 16-bit uint format and don't need PNG or JPEG decoding. These medical images are external files.
Over the last week I've tried numerous approaches and sifted through Stack Overflow, GitHub and the TensorFlow web pages, but I have not been able to work out how to modify the code to read in non-PNG/JPEG data. I'm modifying the pix2pix.py code:
input_paths = glob.glob(os.path.join(a.input_dir, "*.jpg"))
decode = tf.image.decode_jpeg
if len(input_paths) == 0:
    input_paths = glob.glob(os.path.join(a.input_dir, "*.png"))
    decode = tf.image.decode_png
path_queue = tf.train.string_input_producer(input_paths, shuffle=a.mode == "train")
reader = tf.WholeFileReader()
paths, contents = reader.read(path_queue)
raw_input = decode(contents) #
raw_input = tf.image.convert_image_dtype(raw_input, dtype=tf.float32)
I've tried numerous approaches to get through/by/around "raw_input = decode(contents)".
Thanks in advance for any help/pointers.
I am using the following Java code to read a DICOM image, trying to convert it to a JPEG file later. When the reading happens in the line
tempImage = ImageIO.read(dicomFile);
the returned image sometimes has an image type of 10 and sometimes something else, like 0 or 11. The problem here is that the reading behaves sporadically: sometimes the returned image type is 10, and sometimes it is not.
When the returned image type is 10, the writing of the converted JPEG file succeeds and returns true and I get my JPEG file. However, when the returned image type is not 10, the writing fails and returns false, and doesn't produce any file. This is the statement I am using for writing:
writerReturn = ImageIO.write(image, "jpeg", new File(tempLocation + studyId + File.separator + seriesUID + File.separator + objectId + thumbnail + ".jpeg"));
I have spent a long time trying to figure out why this sporadic behaviour is happening but haven't been able to work it out. Could you please help?
I am guessing the issue is that your input image is 16 bits, while I am pretty sure your code only accepts 8-bit input. You cannot write it out using the usual 8-bit lossy JPEG format unless you transform your 16-bit input first.
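If you do decide to go the plain 8-bit JPEG route, here is a minimal sketch of such a transform (a hypothetical helper of my own, not part of the asker's code): it simply maps the observed min/max of the 16-bit grayscale BufferedImage onto [0, 255]. A real DICOM viewer would normally apply the stored window/level and rescale slope/intercept instead.
import java.awt.image.BufferedImage;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

// Hypothetical helper class: squeezes a 16-bit grayscale BufferedImage into 8 bits
// so the standard JPEGImageWriter can encode it.
public class GrayDepthConverter {
    public static BufferedImage toEightBitGray(BufferedImage src) {
        int w = src.getWidth(), h = src.getHeight();
        Raster in = src.getRaster();
        int min = Integer.MAX_VALUE, max = Integer.MIN_VALUE;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int v = in.getSample(x, y, 0);
                if (v < min) min = v;
                if (v > max) max = v;
            }
        }
        int range = Math.max(1, max - min);
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
        WritableRaster out = dst.getRaster();
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // Linearly map [min, max] onto [0, 255]
                out.setSample(x, y, 0, (in.getSample(x, y, 0) - min) * 255 / range);
            }
        }
        return dst;
    }
}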
On my box here is what I see:
$ gdcminfo 1.2.840.113619.2.67.2200970061.29232060605151433.387
MediaStorage is 1.2.840.10008.5.1.4.1.1.1.1 [Digital X-Ray Image Storage - For Presentation]
TransferSyntax is 1.2.840.10008.1.2.4.90 [JPEG 2000 Image Compression (Lossless Only)]
NumberOfDimensions: 2
Dimensions: (1887,1859,1)
SamplesPerPixel :1
BitsAllocated :16
BitsStored :14
HighBit :13
PixelRepresentation:0
ScalarType found :UINT16
PhotometricInterpretation: MONOCHROME2
PlanarConfiguration: 0
TransferSyntax: 1.2.840.10008.1.2.4.90
Group 0x6000
Rows 1859
Columns 1887
NumberOfFrames 0
Description
Type G
Origin[2] 1,1
FrameOrigin 0
BitsAllocated 1
BitPosition 0
Origin: (0,0,0)
Spacing: (0.187429,0.187429,1)
DirectionCosines: (1,0,0,0,1,0)
Rescale Intercept/Slope: (0,1)
Orientation Label: AXIAL
So if you want to convince yourself you could extract the encapsulated JPEG 2000 bytestream:
$ gdcmraw 1.2.840.113619.2.67.2200970061.29232060605151433.387 bug.j2k
$ file bug.j2k
bug.j2k: JPEG 2000 codestream
I was able to open the generated bug.j2k using either IrfanView or kdu_show, but as you can see the image is very dark (only the lower bits are read).
From extra information in the comments, we have discovered that the application is running in a Glassfish server, and that there are two ImageIO plugins installed, both capable of reading DICOM images. The problem is not really related to reading, but sometimes writing the decoded image to JPEG fails.
The service providers for the mentioned plugins are org.dcm4cheri.imageio.plugins.DcmImageReaderSpi and org.dcm4che2.imageioimpl.plugins.dcm.DicomImageReaderSpi, but only the latter (DicomImageReaderSpi) seems to work. This is because it produces an 8-bits-per-sample BufferedImage, which is what the JPEGImageWriter is able to write (DcmImageReaderSpi creates a 16-bits-per-sample image, which can't be written as a JFIF JPEG and is thus unsupported by the JPEGImageWriter).
Because of the (by default) unspecified (read: unpredictable) order of ImageIO plugins, sometimes you get the 8-bits-per-sample version of the image and sometimes the 16-bit version, and the end result is that sometimes the conversion won't work.
Now, the good news is that we can set an explicit order of ImageIO plugins, or we can unregister plugins at runtime, to get a stable, predictable result. Which of these options is better depends on whether there is other code on your server that depends on the undesired plugin. If you don't need it, unregister it.
The code below shows both of the above options:
// Get the global registry
IIORegistry registry = IIORegistry.getDefaultInstance();
// Lookup the known providers
ImageReaderSpi goodProvider = lookupProviderByName(registry, "org.dcm4che2.imageioimpl.plugins.dcm.DicomImageReaderSpi");
ImageReaderSpi badProvider = lookupProviderByName(registry, "org.dcm4cheri.imageio.plugins.DcmImageReaderSpi");
if (goodProvider != null && badProvider != null) {
    // If both are found, EITHER
    // order the good provider BEFORE the bad one
    registry.setOrdering(ImageReaderSpi.class, goodProvider, badProvider);

    // OR
    // un-register the bad provider
    registry.deregisterServiceProvider(badProvider);
}

// New and improved (shorter) version. :-)
private static <T> T lookupProviderByName(final ServiceRegistry registry, final String providerClassName) {
    try {
        return (T) registry.getServiceProviderByClass(Class.forName(providerClassName));
    }
    catch (ClassNotFoundException ignore) {
        return null;
    }
}
You should also make sure you run this code only once; for a container-based application, a good time is at application context start-up.
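For example, in a Servlet 3.0+ container this could live in a context listener (a hypothetical class of my own; adjust to however your Glassfish application bootstraps):
import javax.imageio.spi.IIORegistry;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Hypothetical listener: runs the plugin ordering/de-registration once, at deployment.
@WebListener
public class ImageIOPluginSetupListener implements ServletContextListener {
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        IIORegistry registry = IIORegistry.getDefaultInstance();
        // ... perform the lookupProviderByName/setOrdering/deregisterServiceProvider
        //     calls shown above, exactly once ...
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}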
With the above solution, ImageIO.read(...) will always use the good plugin, and ImageIO.write(...) will work as expected.
I would like to compress an image when its size is bigger than a certain amount (e.g. 1,000,000 bytes - 1 MB). I see that compress is used with the write method on an Image object. I have the image in memory and I don't want to write it on my server but to Amazon S3.
This code is working. It's uploading the image to my S3 path; however, I would like to include the file size check and the compression:
photo = Magick::Image.read(mmkResourceImage.href).first
s3 = AWS::S3.new
bucket = s3.buckets[bucketName]
obj = bucket.objects[key]
obj.write(photo)
Please, let me know if there is another approach to accomplish this.
To get the current size, you can use Magick::Image#to_blob, which returns a binary string. You can get that string's size with String#bytesize, and you can scale the image down with Magick::Image#scale!.
Here is an example that repeatedly scales the image down to 75% of its size until it is under 1,000,000 bytes:
while photo.to_blob.bytesize > 1_000_000
  photo.scale!(0.75)
end
Here is an example that reduces the quality instead:
(0..100).step(5).reverse_each do |quality|
  blob = photo.to_blob { self.quality = quality }
  if blob.bytesize <= 1_000_000
    photo = Magick::Image.from_blob(blob).first
    break
  end
end
I'm altering someone else's code. They used PNGs, which are loaded via BufferedImage. I need to load a TGA instead, which is simply an 18-byte header followed by BGR values. I have the textures loaded and running, but I get a gray box instead of the texture. I don't even know how to DEBUG this.
Textures are loaded into a ByteBuffer:
final static int datasize = (WIDTH*HEIGHT*3) *2; // Double buffer size for OpenGL // not +18 no header
static ByteBuffer buffer = ByteBuffer.allocateDirect(datasize);
FileInputStream fin = new FileInputStream("/Volumes/RAMDisk/shot00021.tga");
FileChannel inc = fin.getChannel();
inc.position(18); // skip header
buffer.clear(); // prepare for read
int ret = inc.read(buffer);
fin.close();
I've followed this: how-to-manage-memory-with-texture-in-opengl ... because I am updating the texture once per frame, like video.
Called once:
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_CLAMP);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_CLAMP);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, width, height, 0, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);
assert(GL11.GL_NO_ERROR == GL11.glGetError());
Called repeatedly:
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, width, height, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, byteBuffer);
assert(GL11.GL_NO_ERROR == GL11.glGetError());
return textureID;
The render code hasn't changed and is based on:
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, this.vertexCount);
Make sure you set the texture sampling mode, especially the min filter: glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR). The default setting is mip mapped (GL_NEAREST_MIPMAP_LINEAR), so unless you upload mip maps you will get a white result.
So either set the texture to not use mip maps, or generate them. One way to do the latter is to call glGenerateMipmap after the glTexImage2D call.
(see https://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexParameter.xml).
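In the LWJGL style of the question's code, the two options would look roughly like this (glGenerateMipmap assumes an OpenGL 3.0+ context, via org.lwjgl.opengl.GL30):
// Option 1: no mip maps - sample the base level only
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);

// Option 2: keep the mip-mapped default and generate the mip chain
// after glTexImage2D/glTexSubImage2D has uploaded the base level
GL30.glGenerateMipmap(GL11.GL_TEXTURE_2D);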
It's a very common GL pitfall and something people just tend to know after getting bitten by it a few times.
There is no easy way to debug stuff like this. There are good GL debugging tools in, for example, Xcode, but they will not tell you about this case.
Debugging GPU code is always a hassle. I would bet my money on big industry progress in this area as more companies discover the power of the GPU. Until then, I'll share my two best GPU debugging friends:
1) Define a function to print OGL errors:
#include <stdio.h>
#include <GL/glu.h>   /* for gluErrorString(); pulls in GL/gl.h */

int printOglError(const char *file, int line)
{
    /* Returns 1 if an OpenGL error occurred, 0 otherwise. */
    GLenum glErr;
    int retCode = 0;

    glErr = glGetError();
    while (glErr != GL_NO_ERROR) {
        printf("glError in file %s @ line %d: %s\n", file, line, gluErrorString(glErr));
        retCode = 1;
        glErr = glGetError();
    }
    return retCode;
}
#define printOpenGLError() printOglError(__FILE__, __LINE__)
And call it after your render draw calls (possible earlier errors will also show up):
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, this.vertexCount);
printOpenGLError();
This alerts you if you perform an invalid operation (which might just be your case), but you usually have to find where the error occurs by trial and error.
2) Check out gDEBugger, free software with tons of GPU memory information.
[Edit]:
I would also recommend using the open-source lib DevIL - it's quite competent at loading various image formats.
Thanks to Felix: by not calling glTexSubImage2D (leaving the texture memory valid, but uninitialized), I noticed a remnant pattern left by the default memory. This indicated that the texture is being displayed, but that the load is most likely the problem.
UPDATE:
The problem with the code above is essentially the buffer. The buffer is 1024*1024, but it is only partially filled by the read, leaving the limit marker of the ByteBuffer at 2359296 (1024*768*3) instead of 3145728 (1024*1024*3). This gives the error:
Number of remaining buffer elements is ..., must be at least ...
I thought that OpenGL needed space to return data, so I doubled the size of the buffer to compensate for the error:
final static int datasize = (WIDTH*HEIGHT*3) *2; // Double buffer size for OpenGL // not +18 no header
This is wrong. What is actually needed is the flip() call (big THANKS to Reto Koradi for the small hint about the buffer rewind) to put the ByteBuffer into read mode. Since the buffer is only semi-full, the OpenGL buffer check gives an error. The correct thing to do is not to double the buffer size; instead, use buffer.position(buffer.capacity()) to mark the buffer as completely filled before doing the flip().
final static int datasize = (WIDTH*HEIGHT*3); // not +18 no header
buffer.clear(); // prepare for read
int ret = inc.read(buffer);
fin.close();
buffer.position(buffer.capacity()); // make sure buffer is completely FILLED!
buffer.flip(); // flip buffer to read mode
To figure this out, it helps to hardcode the contents of the buffer, to make sure the OpenGL calls are working and to isolate the load problem. Then, once the OpenGL calls are correct, concentrate on the loading of the buffer. As suggested by Felix K, it is good to make sure one texture has been drawn correctly before calling glTexSubImage2D repeatedly.
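As a concrete example of that hardcoding step (a debug-only sketch, assuming the corrected WIDTH*HEIGHT*3 buffer above), you can skip the file read and upload a synthetic pattern instead; if the checkerboard shows up, the GL path is fine and the TGA load is the culprit:
// Debug fill: bypass the TGA read and upload a checkerboard instead.
buffer.clear();
for (int y = 0; y < HEIGHT; y++) {
    for (int x = 0; x < WIDTH; x++) {
        byte v = (byte) (((x / 32 + y / 32) % 2 == 0) ? 0xFF : 0x00);
        buffer.put(v).put(v).put(v); // three identical bytes per pixel -> grey squares in both RGB and BGR
    }
}
buffer.flip(); // read mode, ready for glTexSubImage2D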
Some things that might cause the issue:
Your texture is disposed of somewhere. I don't know the whole code, but I guess there is a glDeleteTextures somewhere, and that could cause issues if it is called at the wrong time.
Are the texture width and height powers of two? If not, this might be an issue depending on your hardware; old hardware sometimes won't support non-power-of-two textures.
The texture parameters were changed between the draw calls at some other point (make a debug check of the parameters with glGetTexParameter; see the sketch after this list).
There could be a loading issue when loading the next image (edit: or even the first image). Check whether the first image is displayed without loading the next images. If so, it must be one of the cases above.
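For the parameter check mentioned in the list above, a quick (hypothetical) dump in the question's LWJGL style could be:
// Debug dump of the sampling state of the texture in question
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
int minFilter = GL11.glGetTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER);
int magFilter = GL11.glGetTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER);
int wrapS = GL11.glGetTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S);
System.out.println("min=" + minFilter + " mag=" + magFilter + " wrapS=" + wrapS);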
How can I know the size of the picked image, or how can I put a limit on the picked image, like a maximum image size of 1 MB? Here is my code:
var openPicker = new Windows.Storage.Pickers.FileOpenPicker();
openPicker.ViewMode = Windows.Storage.Pickers.PickerViewMode.Thumbnail;
openPicker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.DocumentsLibrary;
openPicker.FileTypeFilter.Add(".png");
openPicker.FileTypeFilter.Add(".Jpeg");
openPicker.FileTypeFilter.Add(".Jpg");
StorageFile file = await openPicker.PickSingleFileAsync();
Unfortunately, there is no way to restrict the size of the picked file in advance.
After the file has been picked, you can call StorageFile.GetBasicPropertiesAsync (http://msdn.microsoft.com/en-us/library/windows/apps/windows.storage.storagefile.getbasicpropertiesasync.aspx) to get its size.