Can anyone please let me know whether JMeter supports performance testing of file uploads larger than 10 GB? The files are uploaded through chunking in Java. I cannot upload files larger than 10 GB because a Java int allows a maximum value of 2^31 - 1. In the HTTP sampler I am declaring the whole file as a single chunk;
for example, if the file size is 444,641,856 bytes, I declare the whole file as one chunk instead of dividing it into chunks of 5 MB each.
The developers are not willing to change the code, and a result obtained using a single chunk is not a valid performance test.
Can anyone suggest whether JMeter allows a chunking mechanism, and is there a solution for uploading files larger than 10 GB?
Theoretically JMeter doesn't have a 2 GB limitation (especially the HTTPClient implementations), so given you have configured it properly you shouldn't face errors.
However, if you don't have as much RAM as 10 GB x the number of virtual users, you might want to try the HTTP Raw Request sampler available via JMeter Plugins.
References:
https://groups.google.com/forum/#!topic/jmeter-plugins/VDqXDNDCr6w%5B1-25%5D
http://jmeter.512774.n5.nabble.com/fileupload-test-with-JMeter-td4267154.html
I want to download a file that is around 60GB in size.
My internet speed is 100 Mbps, but the download is not utilizing my entire bandwidth.
If I use aria2c to download this single file, can I take advantage of an increased "connections per server" setting? It seems aria2c allows a maximum of 16 connections. Would this option even work for downloading a single file?
The way I'm visualizing the download is that one connection downloads from one sector of the file while another connection downloads from a different sector, and basically the optimal number of concurrent downloads is reached when the host bandwidth limit (mine being 100 Mbps) is hit. And when two connections collide in the sectors they are downloading, aria2c would immediately see that the sector is already downloaded and skip to a different one. Is this how it plays out when using multiple connections for a single file?
Is this how it plays out when using multiple connections for a single file?
The HTTP standard provides the Range request header, which allows a client to say, for example: I want part of the file, starting at byte X and ending at byte Y. If the server supports this, it responds with 206 Partial Content. Thus, knowing the length (size) of the file (see Content-Length), it is possible to lay out the parts so that they are disjoint and cover the whole file.
Beware that not all servers support ranges, so you need to check whether the server hosting the file you want to download does. This can be done with a HEAD request; HTTP range requests provides an example using curl:
curl -I http://i.imgur.com/z4d4kWk.jpg
HTTP/1.1 200 OK
...
Accept-Ranges: bytes
Content-Length: 146515
If bytes appears in Accept-Ranges, the server supports range requests. If you wish, you can use any other tool that is able to send a HEAD request and show you the response headers.
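To illustrate, here is a minimal TypeScript sketch, assuming a runtime with a global fetch (Node 18+ or a browser); the URL is a placeholder. It probes for range support and then requests a single byte range:
// Probe a server for HTTP range support, then request one byte range.
const url = "https://example.com/bigfile.bin"; // placeholder
async function probeAndFetchRange(): Promise<void> {
    // HEAD request: check Accept-Ranges and read the total size
    const head = await fetch(url, { method: "HEAD" });
    console.log("Accept-Ranges:", head.headers.get("accept-ranges")); // "bytes" means ranges are supported
    console.log("Content-Length:", head.headers.get("content-length"));
    // Request only the first 5 MB; a compliant server answers 206 Partial Content
    const part = await fetch(url, { headers: { Range: "bytes=0-5242879" } });
    console.log("Status:", part.status); // 206 if honored, 200 if the server ignored the range
    console.log("Received bytes:", (await part.arrayBuffer()).byteLength);
}
probeAndFetchRange().catch(console.error);
A segmented downloader like aria2c does essentially this: it issues disjoint, bounded ranges in parallel and writes each part to its own offset in the output file.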
I am working on load testing video streaming. I have observed that when JMeter downloads video files, they are stored in heap memory, and often the heap memory is not released, which causes JVM out-of-memory issues.
I have also observed that when we select the "Save as MD5 Hash" option, proper GC cycles kick in and JMeter does not throw a JVM out-of-memory error.
Could you please help me understand:
How does JMeter handle the response objects?
When does it release them? And
when the Save as MD5 option is selected, what difference does it make during execution and release?
The difference is that if you tick Save as MD5, JMeter stores only the MD5 hash of the response, which is a relatively short string, while otherwise JMeter stores the whole response in memory. So the options are:
Use MD5 hashes in combination with MD5Hex Assertion if you need to check content integrity
Go for distributed testing; by default JMeter remote engines do not store response data, so it will be discarded
Increase the JVM heap space allocated to JMeter so the responses will fit (see the example after this list)
Manually discard response data using a JSR223 Listener and code like:
prev.setResponseData('dummy','UTF-8') // overwrite the stored response body with a short placeholder so the original can be garbage collected
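For the heap option, a minimal sketch assuming a JMeter 5.x startup script on Linux, which reads the HEAP environment variable (the 8 GB figure is illustrative):
HEAP="-Xms1g -Xmx8g" ./bin/jmeter -n -t plan.jmx -l results.csv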
I am running a stability test (60 hrs) in JMeter. I have several graphs in the test plan to capture system resources like CPU, threads, and heap.
The size of the View_Results_Tree.xml file is 9 GB after 24 hrs. I am afraid JMeter will not sustain 60 hrs.
Is there a size limit for View_Results_Tree.xml or the results folder in JMeter?
What are the best practices to follow in JMeter before running such long tests? I am looking for recommended config/properties for such tests.
Thanks
Veera.
There is no results file size limit as long as the file fits on your hard drive for storage, or in your RAM to open and analyze.
The general recommendations are:
Use CSV format instead of XML
Store only the metrics you actually need; saving unnecessary data causes massive memory and disk I/O overhead.
Look in the jmeter.properties file (located in JMeter's "bin" folder) for the properties whose names start with jmeter.save.saveservice, e.g.
#jmeter.save.saveservice.output_format=csv
#jmeter.save.saveservice.assertion_results_failure_message=true
#etc.
Copy them all to the user.properties file, set the "interesting" properties to true and the others to false; that will save a lot of disk space and release valuable resources for the load testing itself.
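For instance, a trimmed-down user.properties might look like this (which fields count as "interesting" depends on your reporting needs):
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.response_data=false
jmeter.save.saveservice.samplerData=false
jmeter.save.saveservice.requestHeaders=false
jmeter.save.saveservice.responseHeaders=false
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.latency=true
jmeter.save.saveservice.timestamp=true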
See 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure for a more detailed explanation of the above recommendations and a few more JMeter performance tuning tweaks.
There are no limits on file size in JMeter; the limit is your disk space.
From the file name, I guess you chose XML output; it is better to choose CSV output (see below for another reason).
Besides, ensure you're not using the GUI for load testing in JMeter, which is a bad practice and will certainly break your test.
Switch to Non-GUI mode and ensure you follow those recommendations:
/bin/jmeter -n -t testPlan.jmx -l results.csv
Since JMeter 3.0, you can even generate a report at the end of the load test, or from an existing CSV (JTL, not XML) results file, see:
http://jmeter.apache.org/usermanual/generating-dashboard.html
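For example, to build the dashboard from an existing results file (paths are placeholders):
/bin/jmeter -g results.csv -o dashboard_output_folder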
As you need the GUI for monitoring, run JMeter in GUI mode only for the monitoring part.
I agree with the answer provided by UBIK LOAD PACK. But if you need the results stored somewhere without worrying about file size, the best solution is using Grafana and Graphite (or InfluxDB) for real-time reports and efficient storage.
My use case requires transcoding a remote MOV file that can’t be stored locally. I was hoping to use the HTTP protocol to stream the file into ffmpeg. This works, but I’m observing it to be a very expensive operation with (seemingly) redundant network traffic, so I am looking for suggestions.
What I see is that ffmpeg starts out with a Range request “0-” (which brings in the entire file), followed by a number of open-ended requests (no ending offset) at different positions, each of which makes the HTTP server return large chunks of the file again and again, from the starting position to the very end.
For example, the HTTP range requests for a short 10 MB file look like this:
bytes=0-
bytes=10947419-
bytes=36-
bytes=3153008-
bytes=5876422-
Is there another input method that would be more network-efficient for my use case? I control the server where the video file resides, so I’m flexible in what code runs there.
Any help is greatly appreciated
This is a follow-up to knt's question about PUT vs POST, with more details. The answer may be independently more useful to future answer-seekers.
can I use PUT instead of POST for uploading using fineuploader?
We have a mostly S3-compatible back-end that supports multipart upload, but not form POST, specifically policy signing. I see in the v5 migration notes that "even if chunking is enabled, a chunked upload request is only sent to traditional endpoints if the associated file must be broken into more than 1 chunk". How is the threshold determined for whether a file needs to be chunked? How can the threshold be adjusted? (or ideally, set to zero)
Thanks,
Fine Uploader will chunk a file if its size is greater than the number of bytes specified in the chunking.partSize option (default value: 2000000 bytes). If your file is smaller than that value, it will not be chunked.
To effectively set the threshold to "zero", you could just increase the partSize to an extremely large value. I also did some experimenting, and it seems like a partSize of -1 will make Fine Uploader NOT chunk files at all. AFAIK, that is not supported behavior, and I have not looked into why it is even possible.
Note that S3 requires chunks to be a minimum of 5MB.
Also, note that you may run into limitations on request size as imposed by certain browsers if you make partSize extremely large.
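As a sketch, raising partSize for a traditional endpoint would look roughly like this (the element ID and endpoint are placeholders; written as TypeScript, with the Fine Uploader global declared loosely):
declare const qq: any; // Fine Uploader global, loaded via its script tag
const uploader = new qq.FineUploader({
    element: document.getElementById("uploader"),
    request: { endpoint: "/upload" },
    chunking: {
        enabled: true,
        // ~1 GB: any file smaller than this is sent as a single request
        partSize: 1024 * 1024 * 1024
    }
});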