HTTP content length exceeded 4194304 bytes - little-proxy

While testing I observed that media attachments larger than 4194304 bytes are rejected by the proxy, which throws the message "HTTP content length exceeded 4194304 bytes".
Is this behaviour built into the LittleProxy implementation, or is there a configuration that allows attachments larger than this?

This sounds like a TooLongFrameException thrown by HttpObjectAggregator.
The only way that I think this could happen is when using an HttpFiltersSource that specifies a non-zero value from getMaximumRequestBufferSizeInBytes() or getMaximumResponseBufferSizeInBytes(). You can increase those, but that increases your memory usage. If the filter can be rewritten so as to work with frames (chunks) as they stream through, then you can set the buffer size to 0 and dramatically reduce your memory consumption.
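For reference, a minimal sketch (the port is hypothetical) of a LittleProxy bootstrap whose HttpFiltersSourceAdapter sets both buffer sizes to 0 so content streams through without aggregation:

    import org.littleshoot.proxy.HttpFiltersSourceAdapter;
    import org.littleshoot.proxy.impl.DefaultHttpProxyServer;

    public final class StreamingProxy {
        public static void main(String[] args) {
            DefaultHttpProxyServer.bootstrap()
                .withPort(8080) // hypothetical port
                .withFiltersSource(new HttpFiltersSourceAdapter() {
                    @Override
                    public int getMaximumRequestBufferSizeInBytes() {
                        return 0; // 0 disables aggregation; filters see streamed chunks
                    }

                    @Override
                    public int getMaximumResponseBufferSizeInBytes() {
                        return 0; // or e.g. 8 * 1024 * 1024 to buffer up to 8 MiB instead
                    }
                })
                .start();
        }
    }

Returning a larger value instead of 0 raises the limit, at the cost of buffering that many bytes per connection in memory.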


The memory limit setting may not be respected by Cobalt

From the Cobalt memory docs, we set the max_cobalt_cpu_usage limit to 250M and max_cobalt_gpu_usage to 150M, but found that the actual maximum memory used by Cobalt was about 370M when playing 4K videos for about 12 hours, far exceeding the 250M limit. So what kinds of memory (e.g. malloc/new/mmap) are counted in max_cobalt_cpu_usage? And how do we set max_cobalt_cpu_usage so that actual memory usage stays within it?
PS. The memory measurement approach is as follows:
1. Move focus to the Cobalt icon, drop the memory cache, and record the free-memory value (memoryBegin) from /proc/meminfo;
2. Enter Cobalt YouTube and keep playing 4K videos for about 12 hours;
3. While the videos play, record the free-memory value (memoryEnd) from /proc/meminfo every 10 s, dropping the memory cache before each reading (a sketch of this sampling loop follows the list);
4. Take the minimum memoryEnd as memoryEndMin;
5. memoryBegin - memoryEndMin = 370M.
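A minimal Java sketch of that sampling step, assuming a standard Linux /proc/meminfo layout (the cache-dropping step is omitted):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    // A sketch of steps 3-4: sample MemFree every 10 s and keep the minimum.
    static long trackMinimumMemFreeKb() throws Exception {
        long memoryEndMin = Long.MAX_VALUE;
        while (true) {
            try (Stream<String> lines = Files.lines(Paths.get("/proc/meminfo"))) {
                long memFreeKb = lines.filter(l -> l.startsWith("MemFree:"))
                        .mapToLong(l -> Long.parseLong(l.replaceAll("\\D+", "")))
                        .findFirst()
                        .getAsLong();
                memoryEndMin = Math.min(memoryEndMin, memFreeKb);
                System.out.println("MemFree=" + memFreeKb + " kB, min=" + memoryEndMin + " kB");
            }
            Thread.sleep(10_000); // every 10 s, per step 3
        }
    }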
what kind of memory (e.g. malloc/new/mmap) is counted into max_cobalt_cpu_usage?
The value returned from SbSystemGetUsedCPUMemory() is used to determine the current CPU memory usage. Please ensure that this returns the expected values.
Note that the caches won't dynamically resize to stay within the limits. Their initial sizes must be chosen to fit within the maximum memory limit you need to enforce.
You can manually reduce the sizes of your caches (see the memory_tuning.md document), or you can use --reduce_cpu_memory_by=Xmb, which allows the AutoSet memory settings to reduce their memory usage.
For more information about memory use, pass --memory_tracker as a command-line argument.

what is line sequential buffer length in informatica?

I need an explanation for the questions below.
what is line sequential buffer length in informatica?
how does the integration service handle it when the allocated buffer is full?
Line sequential buffer length in Informatica is a session property that specifies the maximum number of bytes the Integration Service reads for an individual record of a flat file source. The default size is 1024 bytes.
If your source records are shorter than the default, decreasing this value can generally improve session performance; increase it if your records are longer.
If a record exceeds the configured buffer length, the Integration Service stops the execution of the session and writes an error message to the session log.
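For intuition, here is a hedged Java analogy (an illustration only, not Informatica's actual implementation) of what a fixed per-record buffer implies: records longer than the buffer cannot be read and abort processing.

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;

    // Reads one newline-terminated record into a fixed buffer, failing when
    // the record exceeds the configured length -- analogous to a session
    // failing when a flat-file record overflows the line sequential buffer.
    static String readRecord(InputStream in, int bufferLength) throws IOException {
        byte[] buf = new byte[bufferLength];
        int n = 0;
        int b;
        while ((b = in.read()) != -1 && b != '\n') {
            if (n == bufferLength) {
                throw new IOException("record exceeds buffer length " + bufferLength);
            }
            buf[n++] = (byte) b;
        }
        return (n == 0 && b == -1) ? null : new String(buf, 0, n, StandardCharsets.UTF_8);
    }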

Java TLS max record size

How do we configure the TLS max record size in JSSE (with the SunJCE provider) on JDK 1.8? Is the TLS record size hardcoded to 16K bytes? We care a lot about latency in inter-service calls and want to experiment with a smaller TLS record size.
There are a lot of articles on TLS record size and how a large size can be detrimental (e.g., http://chimera.labs.oreilly.com/books/1230000000545/ch04.html#TLS_RECORD_SIZE).
Thanks,
Arvind
It is not hardcoded: it depends on the cipher suite in use, but it's in the vicinity of 16k.
SSLSocket doesn't do any buffering, so you can control the maximum size actually used via a BufferedOutputStream constructed with the buffer-size parameter. For example, the default of 8k will map to whatever the cipher suite needs in terms of ciphertext length, but it will be less than 16k. You would have to experiment a bit to find the buffer size that produces the target record size.
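A minimal sketch of that approach (the host, port, payload, and 4 KB buffer are arbitrary choices for the experiment):

    import java.io.BufferedOutputStream;
    import java.io.OutputStream;
    import java.nio.charset.StandardCharsets;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
    try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443)) {
        // Each flush hands the SSLSocket at most ~4 KB of plaintext, which
        // becomes one TLS record of ~4 KB plus the cipher suite's overhead.
        OutputStream out = new BufferedOutputStream(socket.getOutputStream(), 4 * 1024);
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8); // stand-in data
        out.write(payload);
        out.flush(); // forces the buffered bytes out as a record now
    }

One caveat: BufferedOutputStream passes a single write() larger than its buffer straight through, so keep individual writes below the buffer size for the cap to hold.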

Netty TrafficCounter

I am currently using io.netty.handler.traffic.ChannelTrafficShapingHandler and io.netty.handler.traffic.TrafficCounter to measure performance across a Netty client and server. I consistently see a discrepancy between the Current Write value on the server and the Current Read value on the client. How can I account for this difference, considering the Write/Read KB/s values are close to matching all the time?
2014-10-28 16:57:50,099 [Timer-4] INFO PerfLogging 130 - Netty Traffic stats TrafficShaping with Write Limit: 0 Read Limit: 0 and Counter: Monitor ChannelTC431885482 Current Speed Read: 3049 KB/s, Write: 0 KB/s Current Read: 90847 KB Current Write: 0 KB
2014-10-28 16:57:42,230 [ServerStreamingLogging] DEBUG c.f.s.r.l.ServerStreamingLogger:115 - Traffic Statistics WKS226-39843-MTY6NDU6NTAvMDAwMDAw TrafficShaping with Write Limit: 0 Read Limit: 0 and Counter: Monitor ChannelTC385810078 Current Speed Read: 0 KB/s, Write: 3049 KB/s Current Read: 0 KB Current Write: 66837 KB
Is there some sort of compression between client and server?
I can see that my client-side value is approximately 3049 * 30 = 91470 KB, where 30 is the number of seconds over which the cumulative figure is calculated.
Scott is right; there are also some fixes in progress that take this into consideration.
Some explanation:
- read is the real read bandwidth and the real count of bytes read (since the system is not the origin of the reads it receives);
- for write events, the system is their source and manages them, so there are two kinds of writes (and will be, once the fix lands):
  - proposed writes, which are not yet sent but which, before the fix, are counted in the bandwidth (lastWriteThroughput) and in the current write counter (currentWrittenBytes);
  - real writes, when the bytes are effectively pushed to the wire.
Currently the issue is that currentWrittenBytes can be higher than the real writes, since the writes are mostly scheduled for the future; they depend on the write speed of the handler that is the source of the write events.
After the fix, we will be more precise about what is "proposed/scheduled" and what is really "sent":
- proposed writes are counted in lastWriteThroughput and currentWrittenBytes;
- real write operations are counted in realWriteThroughput and realWrittenBytes when the writes actually reach the wire (or at least the pipeline).
Now there is a second element: if you set the checkInterval to 30 s, this implies the following:
- the bandwidth (the global average, and therefore the traffic-shaping control) is computed over those 30 s (read or write);
- every 30 s the "small" counters (currentXxx) are reset to 0, while the cumulative counters are not. If you compare the cumulative counters, you should see that bytes received and bytes sent are almost the same.
The smaller the checkInterval, the more accurate the bandwidth computation, but don't set it too small, to avoid overly frequent resets and too much thread activity spent computing bandwidth. In general, the default of 1 s is quite efficient.
The difference you see could also come from the fact that the sender's 30 s tick is not "synchronized" with the receiver's 30 s tick (and need not be). Going by your numbers: when the receiver (read side) resets its counters on its 30 s tick, the writer resets its own counters 8 s later, which accounts for the 24,010 KB gap.
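A small sketch of both suggestions (the pipeline variable and handler name are placeholders): use a 1 s checkInterval and log the cumulative counters rather than the currentXxx ones:

    import io.netty.channel.ChannelPipeline;
    import io.netty.handler.traffic.ChannelTrafficShapingHandler;
    import io.netty.handler.traffic.TrafficCounter;

    // Write limit 0 and read limit 0 disable shaping, so the handler only
    // measures; 1000 ms is the checkInterval suggested above.
    ChannelTrafficShapingHandler shaper = new ChannelTrafficShapingHandler(0, 0, 1000);
    pipeline.addLast("trafficShaper", shaper);

    // When logging, prefer the cumulative counters: they are never reset by
    // the checkInterval tick, unlike currentReadBytes()/currentWrittenBytes().
    TrafficCounter counter = shaper.trafficCounter();
    long totalReadKb = counter.cumulativeReadBytes() / 1024;
    long totalWrittenKb = counter.cumulativeWrittenBytes() / 1024;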

memory_limit=80M. what is the maximum image size for imagecreatefromjpeg()?

I have web hosting that allows a maximum memory_limit of 80M (i.e. ini_set("memory_limit", "80M");).
I'm using a photo upload feature that uses the function imagecreatefromjpeg().
When I upload large images it gives the error
"Fatal error: Allowed memory size of 83886080 bytes exhausted"
What maximum image size (in bytes) can I enforce for my users?
Or does the memory limit depend on some other factor?
The allowed memory size of 83886080 bytes is exactly 80 Megabytes (80 × 1024 × 1024), so your memory_limit is being applied as configured. You may still want to check whether you can increase the value somewhere.
Other than that, the general rule for image manipulation is that it will take at least
image width x image height x 3
bytes of memory to load or create an image. (One byte for red, one byte for green, one byte for blue, possibly one more for alpha transparency)
By that rule, a 640 x 480 pixel image will need at least 921,600 bytes (roughly 0.9 Megabytes) of space, not including overhead and the space occupied by the script itself.
It's impossible to determine a limit on the JPG file size in bytes because JPG is a compressed format with variable compression rates. You will need to go by image resolution and set a limit on that.
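As a rough worked example of setting that resolution limit (a sketch only: the 20 MB script overhead and the 5 bytes per pixel, allowing for bookkeeping, are assumptions, and it's expressed in Java just to show the arithmetic):

    // Estimate the largest image that fits under an 80 MB memory_limit.
    long memoryLimit   = 80L * 1024 * 1024; // 83,886,080 bytes, as in the error
    long scriptBudget  = 20L * 1024 * 1024; // assumed overhead of the script itself
    long bytesPerPixel = 5;                 // red, green, blue, alpha + bookkeeping (assumption)
    long maxPixels = (memoryLimit - scriptBudget) / bytesPerPixel;
    System.out.println(maxPixels);          // ~12.5 million pixels, i.e. roughly 4000 x 3000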
If you don't have that much memory available, you may want to look into more efficient methods of doing what you want, e.g. tiling (processing one part of an image at a time) or, if your provider allows it, using an external tool like ImageMagick (which consumes memory as well, but outside the PHP script's memory limit).
Probably your script uses more memory than just the image itself. Try debugging your memory consumption.
One quick-and-dirty way is to call memory_get_usage() and memory_get_peak_usage() at certain points in your code, and especially in a custom error handler and shutdown function. This can tell you which exact operations cause the memory exhaustion.
