I am using the following Java code to read a DICOM image, which I then try to convert to a JPEG file. When the image is read in the line
tempImage = ImageIO.read(dicomFile);
, the returned image sometimes has an image type of 10 and sometimes something else, such as 0 or 11. The problem is that this behaviour is sporadic: sometimes the returned image type is 10, and sometimes it is not.
When the returned image type is 10, writing the converted JPEG file succeeds, returns true, and I get my JPEG file. However, when the returned image type is not 10, the writing fails, returns false, and doesn't produce any file. This is the statement I am using for writing:
writerReturn = ImageIO.write(image, "jpeg", new File(tempLocation + studyId + File.separator + seriesUID + File.separator + objectId + thumbnail + ".jpeg"));
I have spent a long time trying to figure out why this sporadic behaviour happens, but couldn't get anywhere. Could you please help?
I am guessing the issue is that your input image is 16-bit, while I am pretty sure your code only accepts 8-bit input. You cannot write out the usual 8-bit lossy JPEG format unless you transform your 16-bit input first.
On my box here is what I see:
$ gdcminfo 1.2.840.113619.2.67.2200970061.29232060605151433.387
MediaStorage is 1.2.840.10008.5.1.4.1.1.1.1 [Digital X-Ray Image Storage - For Presentation]
TransferSyntax is 1.2.840.10008.1.2.4.90 [JPEG 2000 Image Compression (Lossless Only)]
NumberOfDimensions: 2
Dimensions: (1887,1859,1)
SamplesPerPixel :1
BitsAllocated :16
BitsStored :14
HighBit :13
PixelRepresentation:0
ScalarType found :UINT16
PhotometricInterpretation: MONOCHROME2
PlanarConfiguration: 0
TransferSyntax: 1.2.840.10008.1.2.4.90
Group 0x6000
Rows 1859
Columns 1887
NumberOfFrames 0
Description
Type G
Origin[2] 1,1
FrameOrigin 0
BitsAllocated 1
BitPosition 0
Origin: (0,0,0)
Spacing: (0.187429,0.187429,1)
DirectionCosines: (1,0,0,0,1,0)
Rescale Intercept/Slope: (0,1)
Orientation Label: AXIAL
So, if you want to convince yourself, you can extract the encapsulated JPEG 2000 bytestream:
$ gdcmraw 1.2.840.113619.2.67.2200970061.29232060605151433.387 bug.j2k
$ file bug.j2k
bug.j2k: JPEG 2000 codestream
I was able to open the generated bug.j2k using both IrfanView and kdu_show, but as you can see the image is very dark (only the lower bits are read).
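If you need a viewable 8-bit JPEG from such 16-bit input, here is a minimal sketch of one possible transform (my addition, assuming a single-band grayscale BufferedImage; proper DICOM rendering would apply the window center/width from the dataset rather than the naive min/max normalization used here):

import java.awt.image.BufferedImage;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

// Naive linear rescale: map the 16-bit sample range actually present
// in the image onto 0..255 so the standard JPEG writer can handle it.
static BufferedImage to8BitGray(BufferedImage src) {
    int w = src.getWidth();
    int h = src.getHeight();
    Raster in = src.getRaster();

    // Find the min/max sample values (assumes a single-band grayscale image)
    int min = Integer.MAX_VALUE;
    int max = Integer.MIN_VALUE;
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int v = in.getSample(x, y, 0);
            if (v < min) min = v;
            if (v > max) max = v;
        }
    }

    BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
    WritableRaster out = dst.getRaster();
    double scale = (max > min) ? 255.0 / (max - min) : 1.0;
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            out.setSample(x, y, 0, (int) ((in.getSample(x, y, 0) - min) * scale));
        }
    }
    return dst;
}

You would then pass the result of this method to ImageIO.write(..., "jpeg", ...) instead of the raw 16-bit image.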
From extra information in the comments, we have discovered that the application runs on a GlassFish server, and that there are two ImageIO plugins installed, both capable of reading DICOM images. The problem is not really related to reading; rather, writing the decoded image to JPEG sometimes fails.
The service providers for the mentioned plugins are org.dcm4cheri.imageio.plugins.DcmImageReaderSpi and org.dcm4che2.imageioimpl.plugins.dcm.DicomImageReaderSpi, but only the latter (DicomImageReaderSpi) seems to work. This is because it produces an 8-bits-per-sample BufferedImage, which is what the JPEGImageWriter is able to write (DcmImageReaderSpi creates a 16-bits-per-sample image, which can't be written as a JFIF JPEG and is thus unsupported by the JPEGImageWriter).
Because of the (by default) unspecified (read: unpredictable) order of ImageIO plugins, sometimes you get the 8-bits-per-sample version of the image and sometimes the 16-bit version, with the end result that the conversion only sometimes works.
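To see which providers are actually registered in your own environment, a quick diagnostic sketch like this can help (assuming the plugins register under the format name "DICOM"; the exact name may differ per plugin):

import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;

// Print the DICOM-capable readers in the order ImageIO will try them
Iterator<ImageReader> readers = ImageIO.getImageReadersByFormatName("DICOM");
while (readers.hasNext()) {
    System.out.println(readers.next().getOriginatingProvider().getClass().getName());
}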
Now, the good news is that we can set an explicit order of ImageIO plugins, or we can unregister plugins at runtime, to get a stable, predictable result. Which of these options is better depends on whether there is other code on your server that depends on the undesired plugin or not. If you don't need it, unregister it.
The code below shows both of the above options:
// Get the global registry
IIORegistry registry = IIORegistry.getDefaultInstance();

// Lookup the known providers
ImageReaderSpi goodProvider = lookupProviderByName(registry, "org.dcm4che2.imageioimpl.plugins.dcm.DicomImageReaderSpi");
ImageReaderSpi badProvider = lookupProviderByName(registry, "org.dcm4cheri.imageio.plugins.DcmImageReaderSpi");

if (goodProvider != null && badProvider != null) {
    // If both are found, EITHER
    // order the good provider BEFORE the bad one
    registry.setOrdering(ImageReaderSpi.class, goodProvider, badProvider);
    // OR
    // un-register the bad provider
    registry.deregisterServiceProvider(badProvider);
}

// New and improved (shorter) version. :-)
private static <T> T lookupProviderByName(final ServiceRegistry registry, final String providerClassName) {
    try {
        return (T) registry.getServiceProviderByClass(Class.forName(providerClassName));
    }
    catch (ClassNotFoundException ignore) {
        return null;
    }
}
You should also make sure you run this code only once; for a container-based application, a good time is at application context start-up.
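For example, in a servlet container such as Glassfish, the registration could live in a ServletContextListener. The sketch below is a hypothetical wiring (the class name is mine; on Servlet 2.5 containers, declare the listener in web.xml instead of using @WebListener):

import javax.imageio.spi.IIORegistry;
import javax.imageio.spi.ImageReaderSpi;
import javax.imageio.spi.ServiceRegistry;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Applies the plugin ordering exactly once, when the web application starts
@WebListener
public class ImageIOPluginOrderingListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent event) {
        IIORegistry registry = IIORegistry.getDefaultInstance();
        ImageReaderSpi goodProvider = lookupProviderByName(registry,
                "org.dcm4che2.imageioimpl.plugins.dcm.DicomImageReaderSpi");
        ImageReaderSpi badProvider = lookupProviderByName(registry,
                "org.dcm4cheri.imageio.plugins.DcmImageReaderSpi");

        if (goodProvider != null && badProvider != null) {
            registry.setOrdering(ImageReaderSpi.class, goodProvider, badProvider);
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        // Nothing to clean up
    }

    @SuppressWarnings("unchecked")
    private static <T> T lookupProviderByName(ServiceRegistry registry, String providerClassName) {
        try {
            return (T) registry.getServiceProviderByClass(Class.forName(providerClassName));
        }
        catch (ClassNotFoundException ignore) {
            return null;
        }
    }
}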
With the above solution, ImageIO.read(...) will always use the good plugin, and ImageIO.write(...) will work as expected.
Related
Hi, I'm trying to capture a picture using Kotlin and registerForActivityResult, but I always get a blurry image with no quality. I've read several posts, but I can't understand how to make them work with my application. I'm using a fragment to call the camera. Any suggestions? Sorry for my bad English. I've spent about a full week trying to get it to work, and nothing. Thanks in advance.
private var imagenUri: Uri? = null

val startForResult = registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result: ActivityResult ->
    if (result.resultCode == Activity.RESULT_OK) {
        try {
            val intent = result.data
            intent!!.putExtra(MediaStore.EXTRA_OUTPUT, imagenUri)
            val bitMap = intent?.extras?.get("data") as Bitmap
            imagenUri = getImageUriFromBitmap(requireContext(), bitMap)
            binding.ivImagen.setImageURI(imagenUri)
            Toast.makeText(context, "la uri es: $imagenUri", Toast.LENGTH_SHORT).show()
        } catch (e: java.lang.Exception) {
            Toast.makeText(context, "NO SE HA PODIDO ENCONTRAR IMAGEN", Toast.LENGTH_SHORT).show()
        }
    }
}

binding.ibTomarFoto.setOnClickListener {
    startForResult.launch(Intent(MediaStore.ACTION_IMAGE_CAPTURE))
}
From the documentation:
public static final String ACTION_IMAGE_CAPTURE
Standard Intent action that can be sent to have the camera application capture an image and return it.
The caller may pass an extra EXTRA_OUTPUT to control where this image will be written. If the EXTRA_OUTPUT is not present, then a small sized image is returned as a Bitmap object in the extra field. This is useful for applications that only need a small image. If the EXTRA_OUTPUT is present, then the full-sized image will be written to the Uri value of EXTRA_OUTPUT.
So you need to add the EXTRA_OUTPUT extra to get a full-size image stored at a URI you supply. Otherwise you get a small image as a data payload in the result Intent (those bundles can't handle large objects).
It looks like you're already trying to do that; you've just added it in the wrong place. You need to add it to the Intent you call launch with, not the result one. It's a configuration option for the task you're launching!
So this should work:
binding.ibTomarFoto.setOnClickListener {
    startForResult.launch(
        Intent(MediaStore.ACTION_IMAGE_CAPTURE).putExtra(MediaStore.EXTRA_OUTPUT, imagenUri)
    )
}
And then remove the same putExtra line from your result-handler code (it doesn't do anything, but there's no point having it there).
In TensorFlow, decode_png and decode_jpeg are used to decode data read in from a source (file, URL, …).
I have medical images in an 8192x8192 grayscale 16-bit uint format and don't need PNG or JPEG decoding. These medical images are external files.
Over the last week I've tried numerous approaches, sifted through Stack Overflow, GitHub and the TensorFlow web pages, but have not been able to work out how to modify the code to read data that is not PNG or JPEG. I'm modifying the pix2pix.py code:
input_paths = glob.glob(os.path.join(a.input_dir, "*.jpg"))
decode = tf.image.decode_jpeg
if len(input_paths) == 0:
    input_paths = glob.glob(os.path.join(a.input_dir, "*.png"))
    decode = tf.image.decode_png

path_queue = tf.train.string_input_producer(input_paths, shuffle=a.mode == "train")
reader = tf.WholeFileReader()
paths, contents = reader.read(path_queue)
raw_input = decode(contents)
raw_input = tf.image.convert_image_dtype(raw_input, dtype=tf.float32)
I've tried numerous approaches to get through/by/around the line raw_input = decode(contents), without success.
Thanks in advance for any help/pointers.
I'm trying to write a macro in ImageJ that will:
1. Open all images within a folder whose names contain "488", each in a separate window.
2. Use look-up tables to convert the images to green and to RGB color. In ImageJ, the commands are: run("Green"); and run("RGB Color");
3. Adjust the brightness and contrast with defined values for Min and Max (the same values for each image). I know that the code for that is:
//run("Brightness/Contrast...");
setMinAndMax(min, max);
run("Apply LUT");
4. Save each image in the same, original folder, as TIFF and with the same name but ending in "processed".
I have no experience with Java and am very bad with coding. I tried to piece something together using code I found on Stack Overflow and on the ImageJ website, but kept getting errors. Any help is much appreciated!
I don't know if you still need it, but here is an example.
output_dir = "C:/Users/test/"
input_dir = "C:/Users/test/"
list = getFileList(input_dir);
listlength = list.length;
setBatchMode(true);
for (z = 0; z < listlength; z++){
if(endsWith(list[z], 'tif')==true ){
if(list[z].contains("488")){
title = list[z];
end = lengthOf(title)-4;
out_path = output_dir + substring(title,0,end) + "_processed.tif";
open(input_dir + title);
//add all the functions you want
run("Brightness/Contrast...");
setMinAndMax(1, 15);
run("Apply LUT");
saveAs("tif", "" + out_path + "");
close();
};
run("Close All");
}
}
setBatchMode(false);
I think it contains all the things you need. It opens all the images in a specific folder that end with tif and contain 488. I didn't completely understand what you want to do with each photo, so I just added your functions, but you probably won't have problems adding more or different ones, since you can get them with the macro recorder.
And the code is written to open tif files. If you have tiff, just be careful to change that, and also to change the -4 to -5.
I am using MATLAB 2013a for my project.
I face a problem while splitting a video into individual frames.
I want to know how to grab frames from a video at specific intervals, i.e., at a rate of one frame per second. My input video has 50 frames/sec. In the code I have used step() to read the video frame by frame.
The following is my code, basically a face-detection script (it detects multiple faces in a video). It currently captures and processes every frame in the video (approx. 50 fps); I want to process frames at a rate of 1 fps. Please help me.
clear classes;
videoFileReader = vision.VideoFileReader('C:\Users\Desktop\project\05.mp4');
videoFrame = step(videoFileReader);
frameSize = size(videoFrame); % frame dimensions, used by the video player below
faceDetector = vision.CascadeObjectDetector(); % Finds faces by default
tracker = MultiObjectTrackerKLT;
videoPlayer = vision.VideoPlayer('Position', [200 100 fliplr(frameSize(1:2)+30)]);

bboxes = [];
while isempty(bboxes)
    framergb = step(videoFileReader);
    frame = rgb2gray(framergb);
    bboxes = faceDetector.step(frame);
end
tracker.addDetections(frame, bboxes);

frameNumber = 0;
keepRunning = true;
while keepRunning
    framergb = step(videoFileReader);
    frame = rgb2gray(framergb);
    if mod(frameNumber, 10) == 0
        bboxes = 2 * faceDetector.step(imresize(frame, 0.5));
        if ~isempty(bboxes)
            tracker.addDetections(frame, bboxes);
        end
    else
        % Track faces
        tracker.track(frame);
    end
    frameNumber = frameNumber + 1;
end

%% Clean up
release(videoPlayer);
But this actually processes every frame; I want to grab one frame per second.
It cannot be done directly in MATLAB 2013a, because the video access library does not provide the feature you want. Writing the code needed to implement an efficient frame-skipping routine is not really possible using just MATLAB code (you would need to look inside the video libraries).
Working around it, you have two basic options:
Do as little work as possible on frames that you do not want to process.
Where you currently have
framergb = step(videoFileReader);
Instead do something like
for i = 1:49
    step(videoFileReader);
end
framergb = step(videoFileReader);
(NB: this does not guard against reading past the end of the input.)
Pre-process your file with a tool like ffmpeg, and reduce the frame rate before you use MATLAB.
The ffmpeg command might look something like this:
ffmpeg -i 05.mp4 -r 1 05_at_1fps.mp4
I'm getting a CVImageBufferRef from my AVCaptureSession, and I'd like to take that image buffer and upload it over the network. To save space and time, I would like to do this without rendering the image into a CIImage or NSBitmapImage, which is the solution I've seen everywhere (like here: How can I obtain raw data from a CVImageBuffer object).
This is because my impression is that the CVImageBuffer might be compressed, which is awesome for me, and if I render it, I have to uncompress it into a full bitmap and then upload the whole bitmap. I would like to take the compressed data (realizing that a single compressed frame might be unrenderable later by itself) just as it sits within the CVImageBuffer. I think this means I want the CVImageBuffer's base data pointer and its length, but it doesn't appear there's a way to get that within the API. Anybody have any ideas?
CVImageBuffer itself is an abstract type. Your image should be an instance of either CVPixelBuffer, CVOpenGLBuffer, or CVOpenGLTexture. The documentation for those types lists the functions you can use for accessing the data.
To tell which type you have, use the GetTypeID functions:
CVImageBufferRef image = …;
CFTypeID imageType = CFGetTypeID(image);
if (imageType == CVPixelBufferGetTypeID()) {
// Pixel Data
}
else if (imageType == CVOpenGLBufferGetTypeID()) {
// OpenGL pbuffer
}
else if (imageType == CVOpenGLTextureGetTypeID()) {
// OpenGL Texture
}