Is it possible to get the width and height of a .gif file in Scala? [duplicate]

I am able to read PNG files, but I get an ArrayIndexOutOfBoundsException: 4096 while reading a GIF file.
byte[] fileData = imageFile.getFileData();
ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(fileData);
RenderedImage image = ImageIO.read(byteArrayInputStream);
The exception thrown looks like this:
java.lang.ArrayIndexOutOfBoundsException: 4096
at com.sun.imageio.plugins.gif.GIFImageReader.read(Unknown Source)
at javax.imageio.ImageIO.read(Unknown Source)
at javax.imageio.ImageIO.read(Unknown Source)
What could be the issue, and what is the resolution?
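Since only the width and height are needed, it may also be worth querying just the GIF header through an ImageReader instead of fully decoding the image with ImageIO.read(); the crash happens while decoding the LZW-compressed pixel data, so reading only the dimensions may avoid it. A minimal sketch (fileData is the byte[] from the snippet above, everything else is standard javax.imageio):
import java.io.ByteArrayInputStream;
import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

ImageInputStream iis = ImageIO.createImageInputStream(new ByteArrayInputStream(fileData));
Iterator<ImageReader> readers = ImageIO.getImageReaders(iis);
if (readers.hasNext()) {
    ImageReader reader = readers.next();
    try {
        reader.setInput(iis, true);        // header/metadata only, no pixel decoding
        int width = reader.getWidth(0);    // width of the first image in the stream
        int height = reader.getHeight(0);  // height of the first image in the stream
        System.out.println(width + "x" + height);
    } finally {
        reader.dispose();
    }
}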

Update 3: Solution
I ended up developing my own GifDecoder and released it as open source under the Apache License 2.0. You can get it from here: https://github.com/DhyanB/Open-Imaging. It does not suffer from the ArrayIndexOutOfBoundsException issue and delivers decent performance.
Any feedback is highly appreciated. In particular, I'd like to know if it works correctly for all of your images and if you are happy with its speed.
I hope this is helpful to you (:
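For the width/height question in the title, usage looks roughly like the sketch below; the same calls work from Scala, since this is a plain JVM library. Check the README in the repository for the exact, current API, as the package and method names shown here (at.dhyan.open_imaging.GifDecoder, GifImage, getWidth(), getHeight(), getFrameCount(), getFrame()) may have changed:
import java.awt.image.BufferedImage;
import at.dhyan.open_imaging.GifDecoder;
import at.dhyan.open_imaging.GifDecoder.GifImage;

final GifImage gif = GifDecoder.read(fileData);     // fileData is the byte[] from the question
final int width = gif.getWidth();                   // width of the GIF's logical screen
final int height = gif.getHeight();                 // height of the GIF's logical screen
final int frameCount = gif.getFrameCount();         // number of frames in the animation
final BufferedImage firstFrame = gif.getFrame(0);   // decoded first frame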
Initial answer
Maybe this bug report is related to or describes the same problem: https://bugs.openjdk.java.net/browse/JDK-7132728.
Quote:
FULL PRODUCT VERSION :
java version "1.7.0_02"
Java(TM) SE Runtime Environment (build 1.7.0_02-b13)
Java HotSpot(TM) 64-Bit Server VM (build 22.0-b10, mixed mode)
ADDITIONAL OS VERSION INFORMATION :
Microsoft Windows [Version 6.1.7601]
A DESCRIPTION OF THE PROBLEM :
according to specification
http://www.w3.org/Graphics/GIF/spec-gif89a.txt
> There is not a requirement to send a clear code when the string table is full.
However, GIFImageReader requires the clear code when the string table is full.
GIFImageReader violates the specification, clearly.
In the real world, people sometimes encounter such highly compressed GIF images,
so this bug should be fixed.
STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
javac -cp .;PATH_TO_COMMONS_CODEC GIF_OverflowStringList_Test.java
java -cp .;PATH_TO_COMMONS_CODEC GIF_OverflowStringList_Test
EXPECTED VERSUS ACTUAL BEHAVIOR :
EXPECTED -
complete normally. no output
ACTUAL -
ArrayIndexOutOfBounds occurs.
ERROR MESSAGES/STACK TRACES THAT OCCUR :
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 4096
at com.sun.imageio.plugins.gif.GIFImageReader.read(GIFImageReader.java:1075)
at javax.imageio.ImageIO.read(ImageIO.java:1400)
at javax.imageio.ImageIO.read(ImageIO.java:1322)
at GIF_OverflowStringList_Test.main(GIF_OverflowStringList_Test.java:8)
REPRODUCIBILITY :
This bug can be reproduced always.
The bug report also provides code to reproduce the bug.
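The reproducer itself is not copied here (judging by the classpath in the compile command it presumably uses commons-codec to decode an embedded copy of the offending GIF), but its core is just a plain ImageIO.read() call on such an image. A hypothetical reconstruction of the shape of that test, with a made-up file name standing in for the embedded image data:
import java.io.File;
import javax.imageio.ImageIO;

public class GIF_OverflowStringList_Test {
    public static void main(String[] args) throws Exception {
        // Reading a GIF whose LZW stream fills the string table without a clear code
        // crashes inside com.sun.imageio.plugins.gif.GIFImageReader, as in the stack trace above.
        ImageIO.read(new File("highly-compressed.gif"));
    }
}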
Update 1
And here is an image that causes the bug in my own code:
Update 2
I tried to read the same image using Apache Commons Imaging, which led to the following exception:
java.io.IOException: AddStringToTable: codes: 4096 code_size: 12
at org.apache.commons.imaging.common.mylzw.MyLzwDecompressor.addStringToTable(MyLzwDecompressor.java:112)
at org.apache.commons.imaging.common.mylzw.MyLzwDecompressor.decompress(MyLzwDecompressor.java:168)
at org.apache.commons.imaging.formats.gif.GifImageParser.readImageDescriptor(GifImageParser.java:388)
at org.apache.commons.imaging.formats.gif.GifImageParser.readBlocks(GifImageParser.java:251)
at org.apache.commons.imaging.formats.gif.GifImageParser.readFile(GifImageParser.java:455)
at org.apache.commons.imaging.formats.gif.GifImageParser.readFile(GifImageParser.java:435)
at org.apache.commons.imaging.formats.gif.GifImageParser.getBufferedImage(GifImageParser.java:646)
at org.apache.commons.imaging.Imaging.getBufferedImage(Imaging.java:1378)
at org.apache.commons.imaging.Imaging.getBufferedImage(Imaging.java:1292)
That looks very similar to the problem we have with ImageIO, so I reported the bug at the Apache Commons JIRA: https://issues.apache.org/jira/browse/IMAGING-130.

I encountered the exact same problem you did, but I had to stick to an ImageIO interface, which no other library offered. Apart from Jack's great answer, I simply patched the existing GIFImageReader class with a few lines of code and got it marginally working.
Copy the code from this link into PatchedGIFImageReader.java and use it as follows:
reader = new PatchedGIFImageReader(null);
reader.setInput(ImageIO.createImageInputStream(new FileInputStream(files[i])));
int ub = reader.getNumImages(true);
for (int x = 0; x < ub; x++) {
    BufferedImage img = reader.read(x);
    // Do whatever with the new img BufferedImage
}
Be sure to change the package name to whatever you're using.
Unfortunately, results may vary, as the patch was a one-minute bugfix that basically just exits the loop once it runs past the buffer. Some GIFs load fine, others show a few visual artifacts.
Such is life. If anyone knows a better fix than mine, please do tell.

Related

Unpredictable CUDNN_STATUS_NOT_INITIALIZED on Windows

I am running Keras neural network training and prediction on a GTX 1070 on Windows 10. Most of the time it works, but from time to time it complains:
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:359] could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:366] error retrieving driver version: Unimplemented: kernel reported driver version not implemented on Windows
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:326] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
F c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\kernels\conv_ops.cc:659] Check failed: stream->parent()->GetConvolveAlgorithms(&algorithms)
This cannot be explained by the literal meaning of the error, nor by an OOM error.
How can I fix it?
Try limiting your GPU usage with the per_process_gpu_memory_fraction GPU option.
Fiddle around with it to see what works and what doesn't.
I recommend using .7 as a starting baseline.
I ran into this problem occasionally on Windows 10 with Keras.
A reboot solved it for a short time, but it happened again.
I referred to https://github.com/fchollet/keras/issues/1538:
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.3
set_session(tf.Session(config=config))
These settings solved the hang problem for me.
Got the solution for this problem.
I had the same problem on Windows 10 with an Nvidia GeForce 920M.
Search for the correct version of the cuDNN library. If the version is not compatible with your CUDA version, it won't throw an error during TensorFlow installation but will interfere with memory allocation on the GPU.
DO check your CUDA and cuDNN versions. Also follow the instructions about session creation mentioned above.
Finally the issue is now resolved for me; I spent many hours struggling with this.
I recommend following all the installation steps properly, as described in these links:
TensorFlow - https://www.tensorflow.org/install/install_windows
and for cuDNN - https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#install-windows
For me this wasn't enough. I tried updating my GeForce Game Ready Driver from the GeForce Experience window, and after a restart it started working for me.
GeForce Experience
The driver can also be downloaded from https://www.geforce.com/drivers
Similar to what other people are saying, enabling memory growth for your GPUs can resolve this issue.
The following worked for me when added to the beginning of the training script:
# Using Tensorflow-2.4.x
import tensorflow as tf
try:
    tf_gpus = tf.config.list_physical_devices('GPU')
    for gpu in tf_gpus:
        tf.config.experimental.set_memory_growth(gpu, True)
except:
    pass
The TF documentation on allowing GPU memory growth helped me a lot:
The first is the allow_growth option, which attempts to allocate only as much GPU memory based on runtime allocations: it starts out allocating very little memory, and as Sessions get run and more GPU memory is needed, we extend the GPU memory region needed by the TensorFlow process. Note that we do not release memory, since that can lead to even worse memory fragmentation. To turn this option on, set the option in the ConfigProto by:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)
or
with tf.Session(graph=graph_node, config=config) as sess:
...
The second method is the per_process_gpu_memory_fraction option, which determines the fraction of the overall amount of memory that each visible GPU should be allocated. For example, you can tell TensorFlow to only allocate 40% of the total memory of each GPU by:
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config, ...)

Nashorn JSON.parse() - java.lang.OutOfMemoryError: Java heap space - JDK8u60

The Nashorn release notes claim they fixed the JSON parser bugs, but I am still able to produce a (different) bug on the new patch, 8u60. This time it is an OutOfMemoryError.
Refer to the attached JSON [1] (it is typically a Category & Subcategory relation). When I try to invoke JSON.parse() on it, it fails.
[1] http://jsfiddle.net/manivannandsekaran/rfftavkz/
I tried to increase the heap size; it didn't help. Instead of getting the OOM exception quickly, it was just delayed a bit.
When I replace all the integer keys with alphanumeric ones, parsing is super fast. [2]
[2] https://jsfiddle.net/manivannandsekaran/8yw3ojmu/
We waited almost 4 months to get the original bug fixed, and now the new patch introduces another bug (it is really frustrating; I am not sure how these bugs escape regression testing). Is there any workaround available? Is it possible to override the default JSON parser with another well-known JSON parser (like GSON or Jackson)?
Here is the stack trace of the failure from jjs:
jjs> load("catsubcat/test.js")
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at jdk.nashorn.internal.runtime.arrays.IntArrayData.toObjectArray(IntArrayData.java:138)
at jdk.nashorn.internal.runtime.arrays.IntArrayData.convertToObject(IntArrayData.java:180)
at jdk.nashorn.internal.runtime.arrays.IntArrayData.convert(IntArrayData.java:192)
at jdk.nashorn.internal.runtime.arrays.IntArrayData.set(IntArrayData.java:243)
at jdk.nashorn.internal.runtime.arrays.ArrayFilter.set(ArrayFilter.java:99)
at jdk.nashorn.internal.runtime.arrays.DeletedRangeArrayFilter.set(DeletedRangeArrayFilter.java:144)
at jdk.nashorn.internal.parser.JSONParser.addArrayElement(JSONParser.java:246)
at jdk.nashorn.internal.parser.JSONParser.parseObject(JSONParser.java:210)
at jdk.nashorn.internal.parser.JSONParser.parseLiteral(JSONParser.java:165)
at jdk.nashorn.internal.parser.JSONParser.parseObject(JSONParser.java:207)
at jdk.nashorn.internal.parser.JSONParser.parseLiteral(JSONParser.java:165)
at jdk.nashorn.internal.parser.JSONParser.parseObject(JSONParser.java:207)
at jdk.nashorn.internal.parser.JSONParser.parseLiteral(JSONParser.java:165)
at jdk.nashorn.internal.parser.JSONParser.parse(JSONParser.java:148)
at jdk.nashorn.internal.runtime.JSONFunctions.parse(JSONFunctions.java:80)
at jdk.nashorn.internal.objects.NativeJSON.parse(NativeJSON.java:105)
at java.lang.invoke.LambdaForm$DMH/1880587981.invokeStatic_L3_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$BMH/1095293768.reinvoke(LambdaForm$BMH)
at java.lang.invoke.LambdaForm$MH/1411892748.exactInvoker(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/22805895.linkToCallSite(LambdaForm$MH)
at jdk.nashorn.internal.scripts.Script$5$test.:program(file:catsubcat/test.js:1)
at java.lang.invoke.LambdaForm$DMH/1323165413.invokeStatic_LL_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$MH/653687670.invokeExact_MT(LambdaForm$MH)
at jdk.nashorn.internal.runtime.ScriptFunctionData.invoke(ScriptFunctionData.java:640)
at jdk.nashorn.internal.runtime.ScriptFunction.invoke(ScriptFunction.java:228)
at jdk.nashorn.internal.runtime.ScriptRuntime.apply(ScriptRuntime.java:393)
at jdk.nashorn.internal.runtime.Context.evaluateSource(Context.java:1219)
at jdk.nashorn.internal.runtime.Context.load(Context.java:841)
at jdk.nashorn.internal.objects.Global.load(Global.java:1536)
at java.lang.invoke.LambdaForm$DMH/1323165413.invokeStatic_LL_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$BMH/1413378318.reinvoke(LambdaForm$BMH)
at java.lang.invoke.LambdaForm$reinvoker/40472007.dontInline(LambdaForm$reinvoker)
The problem is just that Nashorn switches to sparse array representation too late. I filed a bug for this: https://bugs.openjdk.java.net/browse/JDK-8137281
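Regarding the workaround question: until the fix lands, one option is exactly what the question suggests, namely parsing the JSON with Jackson (or GSON) on the Java side and handing the result to the script. A rough sketch, assuming Jackson is on the classpath and using a hypothetical inline JSON string in place of the real category data; the script then reads the data through Jackson's JsonNode API rather than a native JS object:
import javax.script.Bindings;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonInsteadOfNashornJson {
    public static void main(String[] args) throws Exception {
        String json = "{\"1\": [\"10\", \"11\"], \"2\": [\"20\"]}"; // stand-in for the real category/subcategory data
        JsonNode tree = new ObjectMapper().readTree(json);         // parse outside Nashorn

        ScriptEngine nashorn = new ScriptEngineManager().getEngineByName("nashorn");
        Bindings bindings = nashorn.createBindings();
        bindings.put("categories", tree);                          // expose the parsed tree to the script
        // The script reads the first subcategory of category "1" via the JsonNode API
        nashorn.eval("print(categories.get('1').get(0));", bindings);
    }
}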

LoadLibrary() fails with error 8 (ERROR_NOT_ENOUGH_MEMORY)

Later edit: After more investigation, the Windows Updates and the OpenGL DLL were red herrings. The cause of these symptoms was a LoadLibrary() call failing with GetLastError() == ERROR_NOT_ENOUGH_MEMORY. See my answer for how to solve such issues. Below is the original question for historical interest. /edit
A map viewer I wrote in Python/wxPython for Windows with a C++ backend suddenly
stopped working, without any code changes or even recompiling. The very same
executables had been working for weeks before (same Python, same DLLs, ...).
Now, when querying Windows for a pixel format to use with OpenGL (with
ChoosePixelFormat()), I get a MessageBox saying:
LoadLibrary failed with error 8:
Not enough storage is available to process this command
The error message is displayed when executing the following code fragment:
void DevContext::SetPixelFormat() {
    PIXELFORMATDESCRIPTOR pfd;
    memset(&pfd, 0, sizeof(pfd));
    pfd.nSize = sizeof(pfd);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    int pf = ChoosePixelFormat(m_hdc, &pfd); // <-- ERROR OCCURS IN HERE
    if (pf == 0) {
        throw std::runtime_error("No suitable pixel format.");
    }
    if (::SetPixelFormat(m_hdc, pf, &pfd) == FALSE) {
        throw std::runtime_error("Cannot set pixel format.");
    }
}
It's actually an ATI GL driver DLL showing the message box. The relevant part of the call stack is this:
... More MessageBox stuff
0027e860 770cfcf1 USER32!MessageBoxTimeoutA+0x76
0027e880 770cfd36 USER32!MessageBoxExA+0x1b
*** ERROR: Symbol file not found. Defaulted to export symbols for C:\Windows\SysWOW64\atiglpxx.dll -
0027e89c 58471df1 USER32!MessageBoxA+0x18
0027e9d4 58472065 atiglpxx+0x1df1
0027e9dc 57acaf0b atiglpxx!DrvValidateVersion+0x13
0027ea00 57acb0f3 OPENGL32!wglSwapMultipleBuffers+0xc5e
0027edf0 57acb1a9 OPENGL32!wglSwapMultipleBuffers+0xe46
0027edf8 57acc6a4 OPENGL32!wglSwapMultipleBuffers+0xefc
0027ee0c 57ad5658 OPENGL32!wglGetProcAddress+0x45f
0027ee28 57ad5dd4 OPENGL32!wglGetPixelFormat+0x70
0027eec8 57ad6559 OPENGL32!wglDescribePixelFormat+0xa2
0027ef48 751c5ac7 OPENGL32!wglChoosePixelFormat+0x3e
0027ef60 57c78491 GDI32!ChoosePixelFormat+0x28
0027f0b0 57c7867a OutdoorMapper!DevContext::SetPixelFormat+0x71 [winwrap.cpp # 42]
0027f1a0 57ce3120 OutdoorMapper!OGLContext::OGLContext+0x6a [winwrap.cpp # 61]
0027f224 1e0acdf2 maplib_sip!func_CreateOGLDisplay+0xc0 [maps.sip # 96]
0027f240 1e0fac79 python33!PyCFunction_Call+0x52
... More Python stuff
I did a Windows Update two weeks ago and noticed some glitches (e.g. when
resizing the window), but my program still worked mostly OK. Just now I
rebooted, Windows installed 1 more update, and I don't get past
ChoosePixelFormat() any more. However, the last installed update was
KB2998527, a Russia timezone update?!
Things that I already checked:
Recompiling doesn't make it work.
Rebooting and running without other programs running doesn't work.
Memory consumption of my program is only 67 MB; I'm not out of memory.
Plenty of disk space free (~50 GB).
The HDC m_hdc is obtained from the display panel's HWND and seems to be valid.
Changing my linker commandline doesn't work.
Should I update my graphics drivers or roll back the updates? Any other ideas?
System data dump: Windows 7 Ultimate SP1 x64, 4GB RAM; HP EliteBook 8470p; Python 3.3, wxPython 3.0.1.dev76673 msw (phoenix); access to C++ data structures via SIP 4.15.4; C++ code compiled with Visual Studio 2010 Express, Debug build with /MDd.
I was running out of virtual address space.
By default, LibTIFF reads TIF images by memory-mapping them (mmap() or CreateFileMapping()). This is fine for pictures of your wife, but it turns out it's a bad idea for gigabytes worth of topographic raster-maps of the Alps.
This was difficult to diagnose because LibTIFF silently fell back to read() if the memory mapping failed, so there never was an explicit error before. Further, mapped memory is not accounted as working memory by Windows, so the Task Manager was showing 67 MB when in fact nearly all of the virtual address space was used up.
This blew up now because I added more TIF images to my database recently. LoadLibrary() started failing because it couldn't find any address space to put the new library. GetLastError() returned 8, which is ERROR_NOT_ENOUGH_MEMORY. That this happened within ATI's OpenGL library was just coincidence.
The solution was to pass "m" as a flag to TIFFOpen() to disable memory-mapped I/O.
Diagnosing this is easy with the Windows SysInternals tool VMMap (documentation link), which shows you how much of the virtual address space of a process is taken up by code/heap/stack/mapped files/shareable data/etc.
This should be the first thing to check if LoadLibrary() or CreateFileMapping() fails with ERROR_NOT_ENOUGH_MEMORY.

RuntimeWarning while using lambda function (Win8 computer)

I installed Python(x,y)-2.7.5.0 to run Python programs on my Win8 laptop. The programs run on Linux, but when I use Python(x,y) I get this error message:
D:\Python27\lib\site-packages\scipy\optimize\minpack.py:402: RuntimeWarning: Number of calls to function has reached maxfev = 2800.
warnings.warn(errors[info][0], RuntimeWarning)
The error occurs during a harmonic analysis at the "func= lambda..." part:
y = N.ravel(zon[:,z,k,:])
print k
func = lambda p,s,c,y: fitfunc(p,s,c) - y # Distance to the target function
print k
p1, success = optimize.leastsq(func, p0[:], args=(s,c,y))
I looked up where maxfev is defined, but I guess it's not a good idea to change it. My question is whether the error is caused by a bug in Python(x,y) or by my Windows 8 system. How can I find out?
Does anybody else use Python(x,y)-2.7.5.0 on a Win8 computer?
As suggested by Padraic Cunningham (a 64-bit Python package for Windows), I installed Anaconda 64-bit for Windows. There are no problems anymore. Thus I think one problem might have been the 32-bit version of Python(x,y). Another aspect might be that I didn't use the latest version of Python(x,y), but I'm not sure about that.

Same C code producing different results on Mac OS X than Windows and Linux

I'm working with an older version of OpenSSL, and I'm running into some behavior that has stumped me for days when trying to work with cross-platform code.
I have code that calls OpenSSL to sign something. My code is modeled after the code in ASN1_sign, which is found in a_sign.c in OpenSSL, which exhibits the same issues when I use it. Here is the relevant line of code (which is found and used exactly the same way in a_sign.c):
EVP_SignUpdate(&ctx,(unsigned char *)buf_in,inl);
ctx is a structure that OpenSSL uses, not relevant to this discussion
buf_in is a char* of the data that is to be signed
inl is the length of buf_in
EVP_SignUpdate can be called repeatedly in order to read in data to be signed before EVP_SignFinal is called to sign it.
Everything works fine when this code is used on Ubuntu and Windows 7, both of them produce the exact same signatures given the same inputs.
On OS X, if the size of inl is less than 64 (that is there are 64 bytes or less in buf_in), then it too produces the same signatures as Ubuntu and Windows. However, if the size of inl becomes greater than 64, it produces its own internally consistent signatures that differ from the other platforms. By internally consistent, I mean that the Mac will read the signatures and verify them as proper, while it will reject the signatures from Ubuntu and Windows, and vice versa.
I managed to fix this issue, and cause the same signatures to be created by changing that line above to the following, where it reads the buffer one byte at a time:
int input_it;
for (input_it = (int) buf_in; input_it < inl + (int) buf_in; input_it++) {
    EVP_SignUpdate(&ctx, (unsigned char *) input_it, 1);
}
This causes OS X to reject its own signatures of data > 64 bytes as invalid, and I tracked down a similar line elsewhere for verifying signatures that needed to be broken up in an identical manner.
This fixes the signature creation and verification, but something is still going wrong, as I'm encountering other problems, and I really don't want to go traipsing (and modifying!) much deeper into OpenSSL.
Surely I'm doing something wrong, as I'm seeing the exact same issues when I use stock ASN1_sign. Is this an issue with the way that I compiled OpenSSL? For the life of me I can't figure it out. Can anyone educate me on what bone-headed mistake I must be making?
This is likely a bug in the MacOS implementation. I recommend you file a bug by sending the above text to the developers as described at http://www.openssl.org/support/faq.html#BUILD17
There are known issues with OpenSSL on the Mac (you have to jump through a few hoops to ensure it links with the correct library instead of the system library). Did you compile it yourself? The PROBLEMS file in the distribution explains the details of the issue and suggests a few workarounds. (Or, if you are running with shared libraries, double-check that your DYLD_LIBRARY_PATH is correctly set.) No guarantee, but this looks like a likely place to start...
The most common issue when porting code between Windows and Linux is the default value of uninitialized memory. I think Windows sets it to 0xDEADBEEF and Linux sets it to 0s.
