I'm writing an email listener (inbox) using JavaMail, and I would like to know if there is some method to increase the speed of saving attachments.
These are my tests:
using a small buffer (2 KB/4 KB)
using a big buffer (1 MB)
increasing the JVM heap size
All of the previous tests have the same performance: it takes approximately 6-7 minutes to save a 7 MB PDF attachment.
Can you suggest a more performant method to increase the speed?
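For concreteness, here is a minimal sketch of the setup being described, assuming IMAP over SSL (host and credentials are placeholders). One knob worth testing alongside buffer sizes is mail.imaps.fetchsize: the IMAP provider pulls message bodies from the server in 16 KB chunks by default, and that chunk size usually dominates download time far more than the local write buffer does.

    import javax.mail.*;
    import javax.mail.internet.MimeBodyPart;
    import java.util.Properties;

    public class AttachmentSaver {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("mail.store.protocol", "imaps");
            // Fetch the body in 1 MB chunks instead of the 16 KB default.
            props.put("mail.imaps.fetchsize", "1048576");

            Session session = Session.getInstance(props);
            Store store = session.getStore();
            store.connect("imap.example.com", "user", "password"); // placeholders

            Folder inbox = store.getFolder("INBOX");
            inbox.open(Folder.READ_ONLY);

            Message msg = inbox.getMessage(inbox.getMessageCount()); // newest message
            Multipart mp = (Multipart) msg.getContent();
            for (int i = 0; i < mp.getCount(); i++) {
                BodyPart part = mp.getBodyPart(i);
                if (Part.ATTACHMENT.equalsIgnoreCase(part.getDisposition())) {
                    // saveFile() streams the decoded content to disk;
                    // no manual buffer loop is needed.
                    ((MimeBodyPart) part).saveFile("/tmp/" + part.getFileName());
                }
            }
            inbox.close(false);
            store.close();
        }
    }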
Which protocol are you working with? IMAP? POP? IMAP over SSL?
Also, which server are you targeting? Gmail? And which platform are you running your listener on?
There is always the possibility that the server imposes a limit (in which case there is not much you could do).
If you are working with an SSL protocol, you should make sure you have proper security settings (for Unix/Linux platforms, see answer to JavaMail IMAP over SSL quite slow - Bulk fetching multiple messages).
I was working on a project that needs to support a cluster of 30k nodes, all of which periodically call the API to get data.
I want to maximize the number of concurrent GET operations per second, and because these are GET operations, they must be served synchronously.
My local PC has 32 GB of RAM and 8 cores, the Spring Boot version is 2.6.6, and the configuration is:
server.tomcat.max-connections=10000
server.tomcat.threads.max=800
I use JMeter for the concurrency test; the throughput is around 1k/s and the average response time is 2 seconds.
Is there any way to make it support more requests per second?
Hard to say without details on the web service, the implementation of what it actually does, and where the bottleneck actually is (threads, connections, CPU, memory, or something else). As a general recommendation, using non-blocking APIs would help, but the stack should then be fully non-blocking to actually make a real difference.
I mean that just adding WebFlux while keeping a blocking DB would not improve things much.
Furthermore, all improvements in execution time will help, so check whether you can improve the code, and maybe have a look at trying to go native (which will come "built in" in Boot 3.x, by the way).
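To illustrate what "fully non-blocking" means here, a minimal sketch assuming Spring WebFlux with a reactive R2DBC repository (Data and DataRepository are hypothetical names):

    import org.springframework.data.repository.reactive.ReactiveCrudRepository;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;
    import reactor.core.publisher.Mono;

    // Hypothetical entity and reactive repository (R2DBC driver assumed).
    record Data(String id, String payload) {}
    interface DataRepository extends ReactiveCrudRepository<Data, String> {}

    @RestController
    class DataController {
        private final DataRepository repo;

        DataController(DataRepository repo) { this.repo = repo; }

        @GetMapping("/data/{id}")
        Mono<Data> get(@PathVariable String id) {
            // Returns immediately; no thread is parked while the DB call is
            // in flight, so a small event-loop pool can serve many clients.
            return repo.findById(id);
        }
    }

The contrast with the blocking setup above: each of the 800 Tomcat threads can hold only one in-flight request at a time, while here the thread is released as soon as the query is dispatched.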
I have taken over an application that serves around 180 TPS. The responses are always SOAP XML responses of around 24,000 bytes. We have been told that we have dynacache, and I can see that we have a cachespec.xml, but I am unable to determine how many entries it currently holds or what its maximum limit is.
How can I check this? I have tried DynamicCacheAccessor.getDistributedMap().size(), but it always returns 0.
We have a lot of data inconsistencies because of internal Java HashMap caching layers. What are your thoughts on increasing dynacache and eliminating the internal caching? How much server memory might this consume?
Thanks in advance
The DynamicCacheAccessor accesses the default servlet cache instance, baseCache. If size() always returns zero then your cachespec.xml is configured to use a different cache instance.
Look for a directive in the cachespec.xml:
<cache-instance name="cache_instance_name"></cache-instance> to determine what cache instance you are using.
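Once you know the instance name, you can also read the size of that named instance directly. A sketch for traditional WAS, assuming the instance's JNDI name is services/cache/cache_instance_name (it must match what is configured for the instance):

    import javax.naming.InitialContext;
    import com.ibm.websphere.cache.DistributedMap;

    public class CacheSizeCheck {
        public static int currentEntries() throws Exception {
            InitialContext ctx = new InitialContext();
            // Look up the named instance instead of the default baseCache.
            DistributedMap cache =
                (DistributedMap) ctx.lookup("services/cache/cache_instance_name");
            return cache.size(); // entries currently held in this instance
        }
    }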
Also install the Cache Monitor from the installableApps directory; see Monitoring and CacheMonitor. The Cache Monitor is an invaluable tool when developing or maintaining an app that uses servlet caching.
On Liberty, install the webCacheMonitor-1.0 feature.
I am looking at load testing progressive-download video files with a 100-user load. The tools I am considering are JMeter, LoadRunner, and NeoLoad. The script required for creating the load is very simple: it consists of a couple of requests, makes the connection with the server, and starts downloading the file. I understand that progressive download is a fairly old technology, but it is still used on many websites. My question is about the strategy.
Do we need to download the complete file (1.3 GB in my case)?
Even when we tried saving the response as a file, resources such as network and disk I/O were maxed out. Does this strategy suit here?
Is there another strategy where we can engage the server for the duration and test for underlying connection issues and transmission speed?
Depending on your use case, there is a seeking feature, so theoretically you should be able to specify a start offset and avoid getting the whole file. You can also consider using the HTTP Header Manager to send a Range header.
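To make the Range idea concrete outside of JMeter, here is a sketch using java.net.http (Java 11+; the URL is a placeholder) that requests only the first megabyte of the file:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RangeDownload {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/video.mp4")) // placeholder
                    .header("Range", "bytes=0-1048575")               // first 1 MB only
                    .build();
            HttpResponse<byte[]> response =
                    client.send(request, HttpResponse.BodyHandlers.ofByteArray());
            // 206 Partial Content means the server honoured the range.
            System.out.println(response.statusCode() + ", got "
                    + response.body().length + " bytes");
        }
    }

In JMeter the equivalent is simply a Range entry in the HTTP Header Manager attached to the sampler.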
If your target is to verify that the file has been downloaded fully and is not broken, you can tick the "Save Response as MD5 Hash" box on the "Advanced" tab of the HTTP Request sampler; this way you will save at least 130 GB of disk space. The MD5 checksum can then be verified using, for example, the MD5Hex Assertion.
The main idea of load testing is simulating real application usage with 100% accuracy. Without knowing the requirements of your product it is impossible to come up with suggestions; however, JMeter can be configured to behave pretty much like a real browser does, so it is a viable option.
See the Load Testing Video Streaming with JMeter: Learn How article for more information if needed.
I ran some download tests against a Jetty 9 server, making multiple parallel downloads of a single file of approximately 80 MB. With a small number of downloads that all finish within 55 seconds, everything usually completes; however, if any download is still in progress after 55 seconds, the network traffic for that download simply stops and nothing more arrives.
I have already tried adjusting Jetty's timeout and buffer settings, but that has not worked. Has anyone had this problem, or does anyone have suggestions on how to solve it? The same tests against IIS and Apache work very well. I use JMeter for testing.
Marcus, maybe you are just hitting Jetty bug 472621?
Edit: the mentioned bug concerns a separate timeout in Jetty that applies to the total operation, not just idle time. So by setting the http.timeout property you essentially define a maximum time any download is allowed to take, which in turn may cause timeout errors for slow clients and/or large downloads.
Cheers,
momo
A timeout means your client isn't reading fast enough.
JMeter isn't reading the response data fast enough, so the connection sits idle long enough that it idle times out and disconnects.
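If slow readers are expected by design, one server-side mitigation (not a fix for a slow client, just more headroom for it) is to raise the connector's idle timeout. A sketch for embedded Jetty 9, assuming default configuration otherwise:

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;

    public class SlowClientFriendlyServer {
        public static void main(String[] args) throws Exception {
            Server server = new Server();
            ServerConnector connector = new ServerConnector(server);
            connector.setPort(8080);
            // Allow up to 2 minutes between reads before Jetty drops an
            // idle connection (the default is 30 seconds).
            connector.setIdleTimeout(120_000);
            server.addConnector(connector);
            server.start();
            server.join();
        }
    }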
We test with 800MB and 2GB files regularly.
Using HTTP/1.0, HTTP/1.1, and HTTP/2 protocols.
Using normal (plaintext) connections, and secured TLS connections.
With responses being delivered in as many Transfer-Encodings and Content-Encodings as we can think of (compressed, gzip, chunked, ranged, etc.).
We do all of these tests using our own test infrastructure, often spinning up many Amazon EC2 nodes to perform a load test that can sufficiently stress the server (a typical test uses 20 client nodes to 1 server node).
When testing large responses, you'll need to be aware of the protocol (HTTP/1.x vs HTTP/2) and how the persistence behavior of that protocol can change the request/response latency. In the real world you won't have multiple large requests one after another on the same persisted connection via HTTP/1 (on HTTP/2 the multiple requests would be parallel and be sent at the same time).
Be sure you set up your JMeter to use HTTP/1.1 and not use persisted connections (see the JMeter documentation for help with that).
Also be aware of the bandwidth available for your testing; it's very common to blame a server (any server) for not performing fast enough when the test itself is sloppily set up and has expectations that far exceed the bandwidth of the network itself.
Next, don't test with the same machine; this sort of load test needs multiple machines (1 for the server, and 4+ for the clients).
Lastly, when load testing, you'll want to become intimately aware of the networking configuration on your server (and, to a lesser extent, your client test machines) and tune it for high load. Default OS configurations are rarely sufficient to handle proper load testing.
After an entire day of searching, I would like to discuss the best solution for an online chat.
This is what I know:
Ajax polling is the old, bandwidth-consuming, and non-scalable way of doing it. It makes a request for new data to the server every X seconds. This implies one database query every X seconds * number_of_connected_users.
Reverse Ajax and one of its applications (Comet) require a customized web server or a dedicated Comet server that can handle number_of_connected_users long-lived HTTP connections.
My actual server is: 1 Xeon CPU, 1 GB of RAM, and 1 Gb/s of bandwidth. The server is a virtual machine (hence highly scalable).
I need a solution that could scale with the server and the future growing user base.
My doubts are:
How much can the Ajax polling method impact my bandwidth usage?
How can I optimize the Ajax polling to make a DB query only when necessary?
Can the Comet server run on the same machine as the web server (Apache)?
With the Comet approach, I still need an interval to do the queries on the database and then send the response, so where is the real-time?
With my actual server, can the Comet approach work?
Thank you in advance.
You should never use polling if you can avoid it. It clogs up resources on both the server and the client. With polling, the server must make more database requests and more checks to see whether data has changed.
The Ajax polling method also generates more unnecessary requests. With polling, you use memory and CPU. Comet (if it's done properly) uses only memory.
The Comet server can probably not run under Apache; Apache does not seem to be designed for long-running requests. I'd recommend implementing your Comet server in Ruby (using EventMachine), in Python (using Twisted), or in C.
I don't see why you need an interval to do database queries. When you make a change, you can just tell your Comet server to notify the necessary users of the change.
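Although the suggestions above are Ruby, Python, or C, the push model is easy to sketch with Java's Servlet 3.0 async API as well (class and URL names are illustrative). The GET parks until publish() is called with a new message, so there is no polling interval and no periodic database query:

    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import java.io.IOException;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    @WebServlet(urlPatterns = "/chat/poll", asyncSupported = true)
    public class ChatPollServlet extends HttpServlet {
        private static final Queue<AsyncContext> WAITING = new ConcurrentLinkedQueue<>();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext ctx = req.startAsync();
            ctx.setTimeout(30_000);  // client re-polls after 30 s of silence
            WAITING.add(ctx);        // park the request; no DB query happens here
        }

        // Called by the application whenever someone posts a message.
        public static void publish(String message) {
            AsyncContext ctx;
            while ((ctx = WAITING.poll()) != null) {
                try {
                    ctx.getResponse().getWriter().write(message);
                    ctx.complete(); // push the message and release the request
                } catch (IOException ignored) { }
            }
        }
    }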
I'm writing my website in PHP.
So, do I need to run a server (like Twisted) and write my chat application in Python?
This app should take care of incoming Ajax requests and push new data to clients.
If I understand correctly, this approach doesn't need a database, right?