Maximum number of HTTP sessions in Tomcat - performance

How many active HTTP sessions (not concurrent requests) can Tomcat 8 handle? Tomcat 8 is hosted on Linux and serves a single web app containing the REST services. I have observed around 50K active HTTP sessions during the day (viewed with psi-probe). The session timeout is 30 minutes, and each HTTP session holds about 1 MB of data, so roughly 50 GB of session data in total. The heap is 450 GB, as required by the product, which holds a multidimensional cube in memory.
Does this lead to performance problems? I have observed frequent GCs and many stop-the-world pauses (more than 5 seconds) 10-15 times a day.
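For scale, the session numbers above can be sanity-checked with a quick calculation (a sketch; the 1 MB-per-session and 50K-session figures come straight from the question):

```java
public class SessionFootprint {
    // Approximate retained heap for active sessions, in bytes.
    static long footprintBytes(long sessions, long perSessionBytes) {
        return sessions * perSessionBytes;
    }

    public static void main(String[] args) {
        long perSession = 1024L * 1024;               // ~1 MB per session (from the question)
        long bytes = footprintBytes(50_000, perSession);
        double gb = bytes / (1024.0 * 1024 * 1024);
        System.out.printf("50K sessions at 1 MB each ~= %.1f GB of live heap%n", gb);
        // ~50 GB of long-lived session objects keeps constant pressure on the
        // old generation, which is consistent with frequent full GCs and
        // multi-second stop-the-world pauses.
    }
}
```

On a 450 GB heap, a collector that stops the world for the full old generation can easily pause for seconds; a concurrent collector (e.g. G1 or ZGC) plus a shorter session timeout is the usual mitigation.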

Related

How can I estimate the maximum number of requests per second for JMeter with 8 users

The scenario is:
Total number of users = 50,000
Ramp-up time = 2 minutes
Test duration = 5 minutes
However, I only have login credentials for 8 users.
Please guide me: how can I send 50,000 requests in 2 minutes with 8 users?
Requests per second (RPS) is a result of your load test execution; you cannot estimate it beforehand.
Typically you have a number in mind, e.g. 15 RPS, based on your application's history or research you have done. While you run the load test, you assert whether actual RPS >= expected RPS, and report your findings to the business/development team accordingly.
Various factors such as server configuration, network, and think time can affect the result. With a low server configuration (1 vCPU and 1 GB RAM) you can expect a relatively low RPS, and this number will improve as you increase server capacity.
Perhaps follow this thread.
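For reference, the scenario in the question does imply a target arrival rate that the measured RPS can be compared against (a sketch; the per-user split is illustrative only and says nothing about what the server can sustain):

```java
public class LoadTarget {
    // Requests per second needed to push `total` requests through `rampSeconds`.
    static double targetRps(long total, long rampSeconds) {
        return (double) total / rampSeconds;
    }

    public static void main(String[] args) {
        double rps = targetRps(50_000, 120);             // 50K requests in 2 minutes
        System.out.printf("Target: %.0f requests/second%n", rps);
        // With only 8 login credentials, each logged-in identity (a JMeter
        // thread reusing a credential) would have to sustain rps / 8, i.e.
        // roughly 52 requests per second.
        System.out.printf("Per credential: %.1f requests/second%n", rps / 8);
    }
}
```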

Laravel server memory is increasing constantly

I've got a Laravel application running with about 800-1,000 concurrent users (10,000 users overall). The main time frame of use is from 8 am to 6 pm. However, the server's memory increases constantly (see attachment) and peaks at about 10 pm (about 100 concurrent users).
I assume that query logging causes the constant increase in memory. I tried to stop it by calling DB::disableQueryLog(); in the boot method of AppServiceProvider.php. However, this doesn't seem to stop query logging. How can I fix this problem?

Azure Table Increased Latency

I'm trying to create an app which can efficiently write data into Azure Table storage. To test storage performance, I created a simple console app which sends hardcoded entities in a loop. Each entity is 0.1 kB, and data is sent in batches (100 items per batch, 10 kB per batch). For every batch I prepare entries with the same partition key, generated by incrementing a global counter, so I never send more than one request to the same partition. I also control the degree of parallelism by increasing or decreasing the number of threads; each thread sends batches synchronously (no request overlapping).
If I use 1 thread, I see 5 requests per second (5 batches, 500 entities). At that rate the Azure portal metrics show table latency below 100 ms, which is quite good.
If I increase the number of threads up to 12, I see a 12x increase in outgoing requests. This rate stays stable for a few minutes, but then, for some reason, I start being throttled: latency increases and the request rate drops.
Below you can see the account metrics; the highlighted point shows 2.31K transactions (batches) per minute, i.e. 3,850 entries per second. If the thread count is increased to 50, latency increases to 4 seconds and the transaction rate drops to 700 requests per second.
According to the documentation, I should be able to send up to 20K transactions per second within one account (my test account is used only for this performance test). If those are batches, 20K batches of 100 entities would mean 2M entries per second. So the question is: why am I being throttled at around 3.8K entries per second?
Test details:
Azure Datacenter: West US 2.
My location: Los Angeles.
The app is written in C# and uses the CosmosDB.Table NuGet package with the following configuration: ServicePointManager.DefaultConnectionLimit = 250, and Nagle's algorithm is disabled.
The host machine is quite powerful, with a 1 Gb/s internet link (i7, 8 cores; no high CPU or memory usage is observed during the test).
PS: I've read the docs:
"The system's ability to handle a sudden burst of traffic to a partition is limited by the scalability of a single partition server until the load balancing operation kicks in and rebalances the partition key range."
I waited for 30 minutes, but the situation didn't change.
EDIT
I got a comment that E2E latency doesn't necessarily reflect a server-side problem.
So below is a new graph which shows not only E2E latency but also server latency. As you can see, they are almost identical, which makes me think the source of the problem is not on the client side.
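One way to interpret the numbers in the question is Little's law: with synchronous workers, throughput is bounded by thread count divided by per-batch latency. A sketch (the 200 ms round-trip is inferred from the stated 1 thread = 5 batches/s, not measured directly):

```java
public class TableThroughput {
    // Little's law for synchronous workers: each thread completes at most
    // 1/latency batches per second, so total throughput = threads / latency.
    static double batchesPerSec(int threads, double latencySeconds) {
        return threads / latencySeconds;
    }

    public static void main(String[] args) {
        // 1 thread at 5 batches/s implies a ~200 ms round-trip per batch.
        System.out.println(batchesPerSec(1, 0.2) + " batches/s with 1 thread");
        // At the same latency, 12 threads would give ~60 batches/s, i.e. about
        // 6,000 entities/s; the observed ~3,850 entities/s suggests per-batch
        // latency already grew under load.
        System.out.println(batchesPerSec(12, 0.2) * 100 + " entities/s ceiling");
        // At 50 threads with ~4 s latency the model caps at 12.5 batches/s:
        // server-side throttling, not the client, dominates.
        System.out.println(batchesPerSec(50, 4.0) + " batches/s when throttled");
    }
}
```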

Task max threads value in WildFly 10.1

I want to support 7k requests per minute in my system. Considering there are network calls and database calls which might take around 4-5 seconds to complete, how should I configure task-max-threads and max connections to achieve that?
This is just math.
7k requests/minute is roughly 120 requests/second.
If each request takes 5 s, then you will have roughly 5 x 120 = 600 in-flight requests.
That's 600 HTTP connections, 600 threads, and possibly 600 database connections.
These numbers are a little simplistic but I think you get the picture.
Note the default Linux stack size for each thread is 8 MB, so 600 threads will want nearly 5 GB of memory just for the stacks. This is configurable at the OS level, but sizing it is another tuning decision.
Therefore you're going to be up for some serious OS tuning if you're planning to run this on a single server instance.
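The back-of-the-envelope above is Little's law (in-flight requests = arrival rate x latency). As a sketch:

```java
public class CapacityMath {
    // Little's law: average number of requests in flight at steady state.
    static double inflight(double requestsPerSecond, double latencySeconds) {
        return requestsPerSecond * latencySeconds;
    }

    public static void main(String[] args) {
        double rps = 7_000 / 60.0;                   // ~117 requests/second
        double inFlight = inflight(rps, 5.0);        // ~583 concurrent requests
        System.out.printf("%.0f rps x 5 s => ~%.0f in-flight requests%n", rps, inFlight);
        // Round up: budget ~600 worker threads, ~600 HTTP connections, and in
        // the worst case ~600 database connections.
    }
}
```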

Windows Server AppFabric Caching timeouts

We have an application that uses Windows Server AppFabric Caching. The cache is on the local machine, and local cache is not enabled. Here is the configuration, done in code (none in .config):
DataCacheFactoryConfiguration configuration = new DataCacheFactoryConfiguration();
configuration.Servers = servers;
configuration.MaxConnectionsToServer = 100;  // 100 is the maximum
configuration.RequestTimeout = TimeSpan.FromMilliseconds(1000);
Object expiration on PutAndUnlock is two minutes.
Here are some typical performance monitor values:
Total Data Size Bytes: 700 MB
Total GetAndLock Requests/sec: average 4
Total Eviction Runs: 0
Total Evicted Objects: 0
Total Object Count: either 0 or 1.8447e+19 (suspicious, eh?); I think the active object count should be about 500.
This is running on a virtual machine, I don't think we are hardware constrained at all.
The problem: every few minutes (the interval varies from 1 to 20 minutes), for a period of about one second, all requests (Get, GetAndLock, Put, PutAndLock) time out.
The only remedy I've found online is to increase RequestTimeout. If we increase it to 2 seconds the problem happens somewhat less frequently, but it still occurs. We can't increase the timeout further because we need the time to recreate the object from scratch after the cache times out.
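Besides raising the timeout, a common mitigation for brief, periodic timeout windows like this is to retry once with a short backoff rather than lengthening every request's budget. A generic sketch of the pattern (shown in Java for illustration; AppFabric itself is a .NET API, and all names here are hypothetical stand-ins for the real client calls):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.TimeoutException;

public class RetryOnTimeout {
    // Retry `op` up to `attempts` times, backing off briefly between tries.
    // TimeoutException stands in for the cache client's timeout error.
    static <T> T withRetry(Callable<T> op, int attempts, long backoffMillis) throws Exception {
        Exception last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return op.call();
            } catch (TimeoutException e) {
                last = e;                      // remember the failure
                Thread.sleep(backoffMillis);   // brief pause spans the timeout blip
            }
        }
        throw last;                            // all attempts timed out
    }

    public static void main(String[] args) throws Exception {
        // Simulated cache get: times out once, then succeeds.
        int[] calls = {0};
        String value = withRetry(() -> {
            if (calls[0]++ == 0) throw new TimeoutException("cache busy");
            return "cached-value";
        }, 3, 50);
        System.out.println(value);   // succeeds on the second attempt
    }
}
```

A one-second timeout plus one retry keeps the common case fast while riding out the one-second stalls, at the cost of occasional doubled latency.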
