Mystery issue with GIF upload? - gd

Alright, so whenever I upload this GIF to my board (nginx + PHP-FPM) everything slows down until I eventually get a 504 Gateway Time-out. I know what you're thinking: "go fix those nginx.conf and php-fpm settings." Well, I tweaked them to near perfection last night and my server is running brilliantly now. However, that one particular GIF still screws things up and drives PHP-FPM to almost 100% CPU (I have a top-of-the-line quad-core processor in the server, so it's by no means underpowered).
Want to know where it gets weirder? I've uploaded 10 MB GIFs with bigger dimensions than the one in question (the one causing the issue is about 600 KB) and the server processed them ridiculously quickly.
Alright, on to the logs: error_log doesn't output anything related to this issue, so I went ahead and set up a slowlog in the PHP-FPM config.
Here's the issue:
[02-Oct-2011 05:54:17] [pool www] pid 76004
script_filename = /usr/local/www/mydomain/post.php
[0x0000000805afc6d8] imagefill() /usr/local/www/mydomain/inc/post.php:159
[0x0000000805afb908] fastImageCopyResampled() /usr/local/www/mydomain/inc/post.php:107
[0x0000000805af4240] createThumbnail() /usr/local/www/mydomain/classes/upload.php:182
[0x0000000805aeb058] HandleUpload() /usr/local/www/mydomain/post.php:235
Okay, let's look at post.php (line 159 in bold):
if (preg_match("/png/", $system[0]) || preg_match("/gif/", $system[0])) {
    $colorcount = imagecolorstotal($src_image);
    if ($colorcount <= 256 && $colorcount != 0) {
        imagetruecolortopalette($dst_image, true, $colorcount);
        imagepalettecopy($dst_image, $src_image);
        $transparentcolor = imagecolortransparent($src_image);
        **imagefill($dst_image, 0, 0, $transparentcolor);**
        imagecolortransparent($dst_image, $transparentcolor);
    }
Line 107:
fastImageCopyResampled($dst_img, $src_img, 0, 0, 0, 0, $thumb_w, $thumb_h, $old_x, $old_y, $system);
upload.php, line 182 (in bold):
**if (!createThumbnail($this->file_location, $this->file_thumb_location, KU_REPLYTHUMBWIDTH, KU_REPLYTHUMBHEIGHT))** {
    exitWithErrorPage(_gettext('Could not create thumbnail.'));
}
(Note: that error never actually shows up.)
The other post.php (line 235):
$upload_class->HandleUpload();
So what can I do? How can I fix this? I know this is a tough issue, but if you guys could give me any input, it would be greatly appreciated.
Oh and in case anyone is curious, here's the GIF: http://i.imgur.com/rmvau.gif
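For reference, here is a defensively guarded version of the palette/transparency block from line 159 (just a sketch using the same variables as above; it skips the fill when imagecolortransparent() reports no transparent colour, i.e. -1, although I haven't confirmed that this is what hangs on the GIF above):
if (preg_match("/png/", $system[0]) || preg_match("/gif/", $system[0])) {
    $colorcount = imagecolorstotal($src_image);
    if ($colorcount <= 256 && $colorcount != 0) {
        imagetruecolortopalette($dst_image, true, $colorcount);
        imagepalettecopy($dst_image, $src_image);
        $transparentcolor = imagecolortransparent($src_image);
        // imagecolortransparent() returns -1 when the source has no transparent colour,
        // so only fill and re-declare transparency when a valid palette index came back.
        if ($transparentcolor >= 0 && $transparentcolor < $colorcount) {
            imagefill($dst_image, 0, 0, $transparentcolor);
            imagecolortransparent($dst_image, $transparentcolor);
        }
    }
}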

Have you tried setting the client_body_buffer_size directive in your nginx configs?
See more here: http://www.lifelinux.com/how-to-optimize-nginx-for-maximum-performance/
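For example (values here are purely illustrative; tune them to your actual upload sizes and how long PHP-FPM needs):
# http, server or location context
client_max_body_size    12m;   # largest upload you want nginx to accept
client_body_buffer_size 1m;    # buffer request bodies in memory before spilling to a temp file
fastcgi_read_timeout    120s;  # how long nginx waits on PHP-FPM before returning a 504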

Related

Heroku: need help handling database bloat and vacuuming

Today I started getting timeout errors from Heroku. I eventually ran this ...
heroku pg:diagnose -a myapp
and got ...
RED: Bloat
Type   Object                                          Bloat  Waste
─────  ──────────────────────────────────────────────  ─────  ───────
table  public.files                                      776  1326 MB
index  public.files::files__lft__rgt_parent_id_index      63   106 MB

RED: Hit Rate
Name                    Ratio
──────────────────────  ──────────────────
overall cache hit rate  0.8246404842342929
public.files            0.8508127886460272
I ran the VACUUM command and it did nothing to address the bloat. How do I address this?
I know this is an old question, but anyone else facing the same issue might try the following for the vacuum:
VACUUM (ANALYZE, VERBOSE, FULL) your-table-name;
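Applied to the table flagged in the diagnose output above, that would be the statement below. Note that the FULL option rewrites the whole table and takes an ACCESS EXCLUSIVE lock, blocking reads and writes while it runs, so schedule it for a maintenance window:
VACUUM (ANALYZE, VERBOSE, FULL) public.files;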

ReadFile !=0, lpNumberOfBytesRead=0 but offset is not at the end of the file

We're struggling to understand the source of the following bug:
We have a call to "ReadFile" (synchronous) that returns a non-zero value (success) but sets the lpNumberOfBytesRead parameter to 0. In theory, that indicates the offset is beyond the end of the file but, in practice, that is not true. GetLastError returns ERROR_SUCCESS (0).
The files in question are all on a shared network drive (Windows Server 2016 + DFS, Windows 8-10 clients, SMBv3). The files are opened in shared mode. In-file locking (LockFileEx) is used to handle concurrent access (we're just locking the first byte of the file before any read/write).
The handle used is not fresh: it isn't created locally in these functions but retrieved from an application-wide "file handle cache manager". This means it could have been created (and left unused) some time ago. However, everything we checked indicates the handle is valid at the moment of the call: GetLastError returns 0, and GetFileInformationByHandle returns TRUE with a valid structure.
The error is logged to a file that is located on the same file server as the problematic files.
We have done a lot of logging and testing around this issue. Here are the additional facts we gathered:
Most (but not all) of the problematic reads happen at the very tail of the file: we're reading the last record. However, the read is still within the file and GetLastError does not return ERROR_HANDLE_EOF. If the program is restarted, the same read with the same parameters works.
The issue is not temporary: repeated calls yield the same result, even if we let the program loop indefinitely. Restarting the program, however, does not automatically lead to the issue reappearing immediately.
We are sure the offset is inside the file: we check the actual file pointer location after the failure and compare it with the expected value as well as with the file size reported by the OS; everything matches across multiple retries.
The issue only shows up randomly: there is no real pattern to when the program works as expected and when it fails. It occurs 2-4 times a day in our office (about 20 people).
The issue does not occur only on our network: we've seen the symptoms and the log entries at several other sites, although we have no clear view of the OS versions involved in those cases.
We just deployed a new version of the program that attempts to re-open the file on failure, but that is a workaround, not a fix: we need to understand what is happening here, and I must admit I have found no rational explanation for it.
Any suggestion about what could cause this error, or what other steps we could take to find out, would be welcome.
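To make the call pattern above concrete (the original code was removed in the edit below), here is an illustrative sketch; the function and variable names are invented for this example and are not our actual code:
#include <windows.h>
#include <cstdint>

// Illustrative only: lock the first byte, seek, read one record synchronously,
// and flag the puzzling case described above (ReadFile succeeds but reports 0 bytes read).
bool ReadRecordAt(HANDLE hFile, std::uint64_t offset, void* buffer, DWORD length)
{
    OVERLAPPED lockRange = {};   // zero-initialised: the lock region starts at offset 0
    if (!LockFileEx(hFile, LOCKFILE_EXCLUSIVE_LOCK, 0, 1, 0, &lockRange))
        return false;

    bool ok = false;
    LARGE_INTEGER pos;
    pos.QuadPart = static_cast<LONGLONG>(offset);
    if (SetFilePointerEx(hFile, pos, nullptr, FILE_BEGIN))
    {
        DWORD bytesRead = 0;
        BOOL success = ReadFile(hFile, buffer, length, &bytesRead, nullptr);
        // The case we cannot explain: success != 0, bytesRead == 0,
        // GetLastError() == ERROR_SUCCESS, even though offset + length is well
        // below the EndOfFile reported by GetFileInformationByHandle.
        ok = (success != 0) && (bytesRead == length);
    }

    UnlockFileEx(hFile, 0, 1, 0, &lockRange);
    return ok;
}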
Edit 2
(To keep this post clear, I removed the original code: the new evidence gives a better explanation of the issue.)
We managed to get a procmon trace while the problem was happening and we got the following sequence of events that we simply cannot explain:
Text version:
"Time of Day","Process Name","PID","Operation","Path","Result","Detail","Command Line"
"9:43:24.8243833 AM","wacprep.exe","33664","ReadFile","\\office.git.ch\dfs\Data\EURDATA\GIT18\JNLS.DTA","END OF FILE","Offset: 7'091'712, Length: 384, Priority: Normal","O:\WinEUR\wacprep.exe /company:GIT18"
"9:43:24.8244011 AM","wacprep.exe","33664","QueryStandardInformationFile","\\office.git.ch\dfs\Data\EURDATA\GIT18\JNLS.DTA","SUCCESS","AllocationSize: 7'094'272, EndOfFile: 7'092'864, NumberOfLinks: 1, DeletePending: False, Directory: False","O:\WinEUR\wacprep.exe /company:GIT18"
(there are thousands of these logged since the application is in an infinite loop.)
As we understand this, the ReadFile call should succeed: the offset is well within the boundary of the file. Yet it fails. ProcMon reports END OF FILE, although I suspect that's just because ReadFile returned != 0 and reported 0 bytes read.
While the loop was running, we managed to unblock it by increasing the size of the file from a different machine:
"Time of Day","Process Name","PID","Operation","Path","Result","Detail","Command Line"
"9:46:58.6204637 AM","wacprep.exe","33664","ReadFile","\\office.git.ch\dfs\Data\EURDATA\GIT18\JNLS.DTA","END OF FILE","Offset: 7'091'712, Length: 384, Priority: Normal","O:\WinEUR\wacprep.exe /company:GIT18"
"9:46:58.6204810 AM","wacprep.exe","33664","QueryStandardInformationFile","\\office.git.ch\dfs\Data\EURDATA\GIT18\JNLS.DTA","SUCCESS","AllocationSize: 7'094'272, EndOfFile: 7'092'864, NumberOfLinks: 1, DeletePending: False, Directory: False","O:\WinEUR\wacprep.exe /company:GIT18"
"9:46:58.7270730 AM","wacprep.exe","33664","ReadFile","\\office.git.ch\dfs\Data\EURDATA\GIT18\JNLS.DTA","SUCCESS","Offset: 7'091'712, Length: 384, Priority: Normal","O:\WinEUR\wacprep.exe /company:GIT18"

Is it possible to get the width and height of a .gif file in Scala? [duplicate]

I am able to read a PNG file, but I get an ArrayIndexOutOfBoundsException: 4096 while reading a GIF file.
byte[] fileData = imageFile.getFileData();
ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(fileData);
RenderedImage image = ImageIO.read(byteArrayInputStream);
The exception thrown looks like:
java.lang.ArrayIndexOutOfBoundsException: 4096
at com.sun.imageio.plugins.gif.GIFImageReader.read(Unknown Source)
at javax.imageio.ImageIO.read(Unknown Source)
at javax.imageio.ImageIO.read(Unknown Source)
What could be the issue and what is the resolution?
Update 3: Solution
I ended up developing my own GifDecoder and released it as open source under the Apache License 2.0. You can get it from here: https://github.com/DhyanB/Open-Imaging. It does not suffer from the ArrayIndexOutOfBoundsException issue and delivers decent performance.
Any feedback is highly appreciated. In particular, I'd like to know if it works correctly for all of your images and if you are happy with its speed.
I hope this is helpful to you (:
Initial answer
Maybe this bug report is related to or describes the same problem: https://bugs.openjdk.java.net/browse/JDK-7132728.
Quote:
FULL PRODUCT VERSION :
java version "1.7.0_02"
Java(TM) SE Runtime Environment (build 1.7.0_02-b13)
Java HotSpot(TM) 64-Bit Server VM (build 22.0-b10, mixed mode)
ADDITIONAL OS VERSION INFORMATION :
Microsoft Windows [Version 6.1.7601]
A DESCRIPTION OF THE PROBLEM :
according to specification
http://www.w3.org/Graphics/GIF/spec-gif89a.txt
> There is not a requirement to send a clear code when the string table is full.
However, GIFImageReader requires the clear code when the string table is full.
GIFImageReader violates the specification, clearly.
In the real world, sometimes people finds such high compressed gif image.
so you should fix this bug.
STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
javac -cp .;PATH_TO_COMMONS_CODEC GIF_OverflowStringList_Test.java
java -cp .;PATH_TO_COMMONS_CODEC GIF_OverflowStringList_Test
EXPECTED VERSUS ACTUAL BEHAVIOR :
EXPECTED -
complete normally. no output
ACTUAL -
ArrayIndexOutOfBounds occurs.
ERROR MESSAGES/STACK TRACES THAT OCCUR :
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 4096
at com.sun.imageio.plugins.gif.GIFImageReader.read(GIFImageReader.java:1075)
at javax.imageio.ImageIO.read(ImageIO.java:1400)
at javax.imageio.ImageIO.read(ImageIO.java:1322)
at GIF_OverflowStringList_Test.main(GIF_OverflowStringList_Test.java:8)
REPRODUCIBILITY :
This bug can be reproduced always.
The bug report also provides code to reproduce the bug.
Update 1
And here is an image that causes the bug in my own code:
Update 2
I tried to read the same image using Apache Commons Imaging, which led to the following exception:
java.io.IOException: AddStringToTable: codes: 4096 code_size: 12
at org.apache.commons.imaging.common.mylzw.MyLzwDecompressor.addStringToTable(MyLzwDecompressor.java:112)
at org.apache.commons.imaging.common.mylzw.MyLzwDecompressor.decompress(MyLzwDecompressor.java:168)
at org.apache.commons.imaging.formats.gif.GifImageParser.readImageDescriptor(GifImageParser.java:388)
at org.apache.commons.imaging.formats.gif.GifImageParser.readBlocks(GifImageParser.java:251)
at org.apache.commons.imaging.formats.gif.GifImageParser.readFile(GifImageParser.java:455)
at org.apache.commons.imaging.formats.gif.GifImageParser.readFile(GifImageParser.java:435)
at org.apache.commons.imaging.formats.gif.GifImageParser.getBufferedImage(GifImageParser.java:646)
at org.apache.commons.imaging.Imaging.getBufferedImage(Imaging.java:1378)
at org.apache.commons.imaging.Imaging.getBufferedImage(Imaging.java:1292)
That looks very similar to the problem we have with ImageIO, so I reported the bug at the Apache Commons JIRA: https://issues.apache.org/jira/browse/IMAGING-130.
I encountered the exact same problem you did, but I had to stick to an ImageIO interface, which no other library offered. Apart from Jack's great answer, I simply patched the existing GIFImageReader class with a few lines of code and got it marginally working.
Copy this link into PatchedGIFImageReader.java and use as such:
reader = new PatchedGIFImageReader(null);
reader.setInput(ImageIO.createImageInputStream(new FileInputStream(files[i])));
int ub = reader.getNumImages(true);
for (int x = 0; x < ub; x++) {
    BufferedImage img = reader.read(x);
    // Do whatever with the new img BufferedImage
}
Be sure to change the package name to whatever you're using.
Unfortunately results may vary, as the patch was a one-minute bugfix that basically just exits the loop if it runs past the buffer. Some GIFs load fine, others end up with a few visual artifacts.
Such is life. If anyone knows a better fix than mine, please do tell.
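One more workaround worth noting, since the question title only asks for the width and height: you can query the ImageReader for the dimensions without ever decoding the frame data, which should sidestep the LZW decompression where the 4096 overflow happens. This is only a sketch and I have not tested it against the failing image:
import java.io.ByteArrayInputStream;
import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public final class GifDimensions {
    // Reads only the GIF headers/descriptors, not the compressed pixel data.
    public static int[] dimensions(byte[] fileData) throws Exception {
        ImageInputStream in = ImageIO.createImageInputStream(new ByteArrayInputStream(fileData));
        ImageReader reader = null;
        try {
            Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
            if (!readers.hasNext()) {
                throw new IllegalArgumentException("No ImageReader for this data");
            }
            reader = readers.next();
            reader.setInput(in, true);
            return new int[] { reader.getWidth(0), reader.getHeight(0) };
        } finally {
            if (reader != null) {
                reader.dispose();
            }
            in.close();
        }
    }
}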

How to upload a large image with Intervention Image in Laravel 5

I'm using Intervention Image in my project.
My application works smoothly when uploading small images, but when I try to upload a large image (>2 MB), it stops working!
It doesn't even show a proper error. Sometimes it shows a token mismatch error and sometimes the URL simply doesn't redirect.
How can I fix this? I have no idea.
Here is my code:
$post = new Post();
if ($request->hasFile('image')) {
    $image = $request->file('image');
    $filename = Auth::user()->id.'_'.time().'.'.$image->getClientOriginalExtension();
    $location = public_path('images/'.$filename);
    Image::make($image)->save($location);
    $post->image = $filename;
}
$post->save();
I'm using Intervention Image for the uploads, but feel free to suggest an alternative as well.
Thanks!
Actually, this is an issue with the server-side settings in your php.ini file: if you upload more than your server's post_max_size setting allows, the request input arrives empty and you get the token mismatch error.
Change the upload_max_filesize and post_max_size values as required and restart the server.
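For example, in php.ini (illustrative values; post_max_size should be at least as large as upload_max_filesize, with some headroom for the rest of the POST body):
upload_max_filesize = 10M
post_max_size = 12M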
It turns out this is a memory issue. If you check the error log you will see that the server ran out of memory. You will see something like:
PHP Fatal error: Allowed memory size of XXXXXXXX bytes exhausted (tried to allocate XXXXX bytes) in ...
Because Intervention Image reads the whole image pixel by pixel and keeps the data in memory, a seemingly small 2 MB image can end up requiring dozens of MB of memory to process.
You may need to raise your memory limit as far as you can, and check the file size before the image is opened, because a site that breaks without an error message is embarrassing. Use something like:
if ($request->hasFile('image') && $request->file('image')->getClientSize() < 2097152) { // 2 MB
    $image = $request->file('image');
    $filename = Auth::user()->id.'_'.time().'.'.$image->getClientOriginalExtension();
    $location = public_path('images/'.$filename);
    Image::make($image)->save($location);
    $post->image = $filename;
}

C++ CreateFile and fopen functions preemptively reading entire remote file

While doing performance analysis on some source code, I noticed that both CreateFile and fopen were taking an unusually long time to complete on remote files.
Digging in further with Wireshark, I discovered that when either of the two functions is used to open a file for reading, the entire contents of the file (up to approximately 4 MB) are read. Let me also note that neither function returns until the SMB2 read operations are complete (which accounts for approximately 99% of the elapsed call time).
Is there any way to prevent this behavior? Can anyone explain what's going on here?
Example:
HANDLE h = ::CreateFile( "\\\\Server1\\Data0\\CRUISE_DATA.bin", GENERIC_READ, FILE_SHARE_READ|FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_READONLY, NULL );
From Wireshark:
SMB2 426 Create Request File: Jim\Data0\CRUISE_DATA.bin
SMB2 386 Create Response File: Jim\Data0\CRUISE_DATA.bin
SMB2 171 Read Request Len:65536 Off:0 File:Jim\Data0\CRUISE_DATA.bin
SMB2 1434 Read Response
...
...
SMB2 171 Read Request Len:65536 Off:3735552 File: Jim\Data0\CRUISE_DATA.bin
SMB2 1434 Read Response
It was definitely scanning for something... virus scanning, actually. Once I turned off my virus scanner and repeated the test, the behavior went away. Apparently the real-time protection scans every file as it is opened by a given process. The least it could have done was update the local cache ;)
This issue had stumped us for a while. It figures that the day after I posted the question, the answer fell into our laps. Anyway, I hope this helps someone else.
You could try some of the CreateFile flag options: FILE_FLAG_RANDOM_ACCESS and/or FILE_FLAG_OPEN_NO_RECALL. FILE_FLAG_NO_BUFFERING might also help but it requires sector aligned I/O.
Newer versions of Visual Studio map the R flag in the fopen mode string to FILE_FLAG_RANDOM_ACCESS.
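For example, the open from the question with random-access hints added (whether this actually suppresses the read-ahead in your environment is something you would need to confirm with Wireshark; CreateFileA is used here to match the narrow string literal):
HANDLE h = ::CreateFileA(
    "\\\\Server1\\Data0\\CRUISE_DATA.bin",
    GENERIC_READ,
    FILE_SHARE_READ | FILE_SHARE_WRITE,
    NULL,
    OPEN_EXISTING,
    FILE_ATTRIBUTE_READONLY | FILE_FLAG_RANDOM_ACCESS | FILE_FLAG_OPEN_NO_RECALL,
    NULL);

// MSVC-specific: 'R' in the mode string hints random access for the underlying handle.
FILE* f = fopen("\\\\Server1\\Data0\\CRUISE_DATA.bin", "rbR");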
