Reason for Kannel going into isolated state - kannel

My Kannel has been running for 10 days without problems.
Now it is in "isolated" status and is not accepting delivery hits from my operators.
The average load on my Kannel is 500-700 hits per second.

Related

How can I estimate the maximum number of requests per second for JMeter with 8 users

The scenario is:
Total number of users = 50,000
Ramp-up time = 2 minutes
Test duration = 5 minutes
But I only have login credentials for 8 users.
So please guide me: how can I send 50,000 requests in 2 minutes with 8 users?
Requests per second (RPS) is a result of your load test execution; you cannot estimate it beforehand.
Typically, you have a target in mind, e.g. 15 RPS, based on your application's history or research you have done. While you run the load test, you assert whether actual RPS >= expected RPS. Accordingly, you can report your findings to the business/development team.
Various factors such as server configuration, network, and think time can affect the answer. With a lower server configuration (1 vCPU and 1 GB RAM) you can expect a relatively low RPS, and this number will improve as you increase server capacity.
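To see why this particular scenario is demanding, here is a minimal back-of-the-envelope sketch in plain Java (the 0.5 s response time is an assumed figure; everything else comes from the question):

```java
// Little's law sketch: throughput (RPS) = concurrent users / time per iteration.
public class RpsEstimate {
    public static void main(String[] args) {
        int users = 8;
        double secondsPerIteration = 0.5; // assumed average response time, no think time
        System.out.printf("Best case at 0.5 s/request: %.0f RPS%n",
                users / secondsPerIteration);   // 16 RPS
        System.out.printf("Required for the scenario:  %.0f RPS%n",
                50000 / 120.0);                 // ~417 RPS
        // ~417 RPS across 8 users means each user would have to complete a
        // request roughly every 19 ms, which is unrealistic for most apps.
    }
}
```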
Perhaps follow this thread.

Fewer records are inserted into the database when we increase the thread group count from 100 to 200 in JMeter

Initially I ran a load test with 100 users for 10 minutes, and 1000 records were inserted into the database for the scenarios below.
Employee Creation -- test script design took 1 minute
Employee Update -- test script design took 2 minutes
Then I ran the same load test with 200 users for 10 minutes, and 1100 records were inserted, without any error logs or deadlocks.
My question is: when we double the thread group count from 100 to 200, shouldn't the number of inserted records also double, or approximately double? Why is that not happening? The same applies to the number of requests/samples.
You reached the maximum throughput of your test at about 110 records per minute. In other words, you have a bottleneck on the client or the server which doesn't allow 200 users to process requests concurrently and/or within the same amount of time (either some users wait until they can start processing a request, or each request takes longer, so the total number of requests is lower).
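To make the saturation arithmetic concrete, here is a minimal sketch in plain Java using only the numbers quoted in the question:

```java
// Saturation arithmetic from the question's own numbers:
// 100 users -> 1000 inserts in 10 min; 200 users -> 1100 inserts in 10 min.
public class ThroughputCheck {
    public static void main(String[] args) {
        double total100 = 1000 / 10.0; // 100 records/min with 100 users
        double total200 = 1100 / 10.0; // 110 records/min with 200 users
        System.out.printf("Per-user rate at 100 users: %.2f rec/min%n", total100 / 100); // 1.00
        System.out.printf("Per-user rate at 200 users: %.2f rec/min%n", total200 / 200); // 0.55
        // Doubling the threads barely moved total throughput, so each request
        // now takes roughly twice as long: the classic signature of saturation.
    }
}
```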
Some bottlenecks can be resolved by you (if they are related to the script, the JMeter configuration, or the JMeter machine), others have to be resolved on the server side (by whoever has access to it), and some cannot be resolved at all (they are true bottlenecks of your app).
Without knowing your application, it's hard to suggest anything beyond general "checklist" items:
Verify the JMeter script and check whether it has any places where it may wait, take a long time, and so on. For example, if your ramp-up period is too high, the "first" user may finish execution before the "last" user has even started. Scriptable samplers and pre- and post-processors may cause delays as well.
Make sure JMeter is configured properly to handle 200 concurrent threads. For example, if the JMeter heap is set too low, JMeter may run very slowly because it constantly needs to run GC. See this question for how to inspect and configure memory (it discusses an out-of-memory error, but even without that error, inadequate memory can cause slowness).
Make sure the JMeter machine is configured to allow 200+ concurrent HTTP connections. A common issue on both Windows and Linux is the assumption that 65535 connections are available (the maximum number of ports), but in reality both systems limit the number of ports that may be used by default. Also, after use, a port may remain in the TIME_WAIT or CLOSE_WAIT state for several minutes, which makes it unusable. As a result, running out of ports is quite common. Here's how to monitor and resolve this issue on Windows and Linux (see also the sketch after this list).
Check the JMeter machine's performance as a whole: does it have enough CPU and memory; is it swapping; etc.
If none of the above is a problem, look at how requests arrive at the server. If the client is capable of sending 200 concurrent requests (which you should have established in the previous steps) but the server receives them at a slower rate, then something in the network may be slowing things down, for example slow DNS resolution or slow routing between JMeter and the server.
Item #3 above (port limits) applies to the server as well, not just the client.
If requests do arrive at the server at the same speed as they are sent from the client, then their processing on the server probably slows down as the number of parallel requests grows. This is dev and DevOps territory, and you will probably need to work with them to identify bottlenecks on the server side. It could be the configuration of the web or application server, the application itself... pretty much anything along the request path.
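For item #3, here is a rough cross-platform sketch in Java that shells out to netstat and counts sockets stuck in the waiting states (it assumes netstat is on the PATH, which is true on stock Windows; on a modern Linux box you may need the net-tools package, or adapt it to `ss -tan`):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Count sockets in TIME_WAIT/CLOSE_WAIT by parsing `netstat -an` output.
// Large counts during a test run point at ephemeral-port exhaustion.
public class PortStateMonitor {
    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder("netstat", "-an").start();
        int timeWait = 0, closeWait = 0;
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.contains("TIME_WAIT")) timeWait++;
                else if (line.contains("CLOSE_WAIT")) closeWait++;
            }
        }
        System.out.println("TIME_WAIT:  " + timeWait);
        System.out.println("CLOSE_WAIT: " + closeWait);
    }
}
```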
Performance testing is 10% execution and 90% analysis and identification of bottlenecks, so there you go.

JMeter Response Times vs Threads

I am doing API load testing by sending 250 requests at once.
1. Configuration
Naturally, the server takes longer to respond when many users request it simultaneously; that is what http://jmeter-plugins.org/wiki/ResponseTimesVsThreads/ describes. However, when testing, this is what I found:
2. Test
The plot above reads from right to left: as the number of active threads decreases, the response time increases.
Is "active threads" the same as the number of user requests? If so, why does this happen on a consistent basis?
Update 1
I ran another test, increasing the ramp-up period this time:
No of threads: 200
Ramp-Up Period: 200 secs
Loop Count: 200
There are at least two possible explanations:
You don't have a problem, and the improvement in response times comes from a caching effect: after some time, your data sits in the cache. Only you can validate this, as we don't know whether you are using a large enough dataset or how long your test lasts.
You have a problem: your server is rejecting connections under load, so you get very fast failed responses that report a very good response time. To find out whether this is your case, check Response Codes over Time or Transactions over Time, along with the error percentage.
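To illustrate the second case, here is a minimal standalone sketch (the URL and port are hypothetical, picked so the connection is refused) showing how a rejected connection returns almost instantly and can drag average response times down:

```java
import java.net.HttpURLConnection;
import java.net.URL;

// A refused connection usually errors out in about a millisecond, far faster
// than any real request, so failures can masquerade as "good" latency.
public class FastFailureDemo {
    public static void main(String[] args) {
        long start = System.nanoTime();
        try {
            HttpURLConnection c = (HttpURLConnection)
                    new URL("http://localhost:9999/").openConnection(); // assumed closed port
            c.setConnectTimeout(2000);
            c.getResponseCode();
        } catch (Exception e) {
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Failed in " + ms + " ms: " + e.getMessage());
        }
    }
}
```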

What is the maximum number of packets that can be received? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
I am developing a P2P application which I am making scalable to tens of millions of users. I am broadcasting a packet to each of these peers and expecting a response. Before I go ahead with my coding, I wanted to confirm: can I send a packet to tens of millions of different IP addresses in less than a minute? And if they all respond back, will my application, or even my PC, be able to handle that many connections and packets in such a short time?
Using TCP and Windows.
Maximum CPU usage allowed: 20%
Internet bandwidth: assume 2 Mbps
Application written in C using WinSock2
Assume a normal PC with 2 GB RAM, Core 2 Duo, 2.8 GHz
Um, yes you can, but you might need to use UDP. However, the responses coming back will be a self-inflicted DoS as well.
Generally, applications which need to communicate directly with tens of millions of users within the space of a minute are based on clusters of servers that have the internet bandwidth and computational power not to DoS themselves. A P2P application shouldn't require a single host to communicate with that many users, especially not within that short a time frame.
Ray is correct that, even if you were able to send the message, you would end up DoSing yourself with the responses unless you put varied-length intentional delays into the client programs to space out their replies. He's also correct that you should use UDP if you were to attempt this. I find it very unlikely that your operating system supports maintaining 10,000,000 concurrent TCP connections.
In order to send a notification to tens of millions of hosts from a single host, the original host should notify some small subset of size n of those hosts. Each of those hosts would, in turn, notify n more hosts, and so on. This requires time on the order of n * log_n(total number of hosts), versus time on the order of the total number of hosts for direct delivery.
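To put numbers on that, here is a minimal sketch (the branching factor of 20 is an arbitrary assumption; the host count is from the question):

```java
// Fan-out tree arithmetic: with branching factor n, each host sends only
// n messages, and the broadcast completes in ceil(log_n(N)) rounds.
public class FanoutMath {
    public static void main(String[] args) {
        long totalHosts = 10_000_000L; // N, from the question
        int branching = 20;            // n, an assumed branching factor
        int rounds = (int) Math.ceil(Math.log(totalHosts) / Math.log(branching));
        System.out.println("Rounds to reach all hosts: " + rounds);    // 6
        System.out.println("Messages sent per host:    " + branching); // 20
        // Direct delivery would instead mean 10,000,000 sends (and replies)
        // converging on the single originating host.
    }
}
```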
If the response message is simply an acknowledgement of receipt of the original message, the reverse scheme can be used for the acknowledgements. Each host sends an ack to the host that sent it the message; once a host has received all of its acks, or a timeout occurs, it sends an ack up to its own sender, including the information about which hosts have already acked. This process continues back up the tree until the combined acks reach the original host. This means the original host receives on the order of n responses, not on the order of tens of millions.
If the response is more than just an ack, then your application is probably not scalable on anything remotely close to the hardware you describe, as that would be far too much incoming data in too short a time. Most likely you'll DoS yourself, and quite possibly get a nastygram from your ISP.

Understanding RESTful Web Service stress test results

I'm trying to stress-test my Spring RESTful web service.
I run my Tomcat server on an Intel Core 2 Duo notebook with 4 GB of RAM. I know it's not a real server machine, but I only have this one, and it's only for study purposes.
For the test, I run JMeter on a remote machine, communicating over a private WLAN through a central wireless router. I prefer to test over a wireless connection because the service would be accessed by mobile clients. With JMeter I run a group of 50 threads, starting one thread per second, so after 50 seconds all threads are running. Each thread repeatedly sends an HTTP request to the server containing a small JSON object to be processed, then sleeps on each iteration for an amount of time equal to the sum of a constant 100 ms delay and a random value drawn from a Gaussian distribution with a standard deviation of 100 ms. I use some JMeter plugins for the graphs.
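For reference, the think time described above works out to roughly the following (a minimal sketch of the described scheme; JMeter's Gaussian Random Timer does the equivalent internally, and clamping negative samples to zero is my assumption):

```java
import java.util.Random;

// Think time = constant 100 ms delay + Gaussian value with 100 ms std dev,
// as described in the question; negative samples are clamped to zero.
public class ThinkTime {
    public static void main(String[] args) throws InterruptedException {
        Random rnd = new Random();
        for (int i = 0; i < 5; i++) {
            long delayMs = Math.max(0, Math.round(100 + rnd.nextGaussian() * 100));
            System.out.println("Sleeping " + delayMs + " ms");
            Thread.sleep(delayMs);
        }
    }
}
```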
Here are the results:
I can't figure out why my hits per second don't pass the 100 threshold (in the graph they are multiplied by 10), because with this configuration the value should be higher (50 threads each sending at least three requests per second would generate 150 hits/sec). I don't get any error messages from the server, and everything seems to work well. I've tried more and more configurations, but I can't get above 100 hits/sec.
Why?
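A back-of-the-envelope check with Little's law (threads = throughput x iteration time) suggests where the cap comes from; this is a sketch using only the figures above, so the implied response time is an inference, not a measurement:

```java
// Little's law: concurrent threads = throughput * time per iteration.
// 50 threads capped at ~100 hits/s implies ~0.5 s per full iteration.
public class HitRateCheck {
    public static void main(String[] args) {
        int threads = 50;
        double hitsPerSec = 100.0;                  // observed plateau
        double iterationSec = threads / hitsPerSec; // ~0.5 s per loop
        double thinkSec = 0.1;                      // constant timer offset
        System.out.printf("Implied server response time: ~%.0f ms%n",
                (iterationSec - thinkSec) * 1000);  // ~400 ms
        // To reach 150 hits/s with 50 threads, each full iteration
        // (response + think time) would have to finish in ~333 ms.
    }
}
```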
[EDIT] Many times I notice a substantial performance degradation from some point on, without any visible cause: no error response messages on the client, only OK HTTP responses, and everything seems to work well on the server too. But look at the reports:
As you can see, something happens between 01:54 and 02:14: hits per second decrease and response time increases. Okay, it could be a server overload, but then why does CPU usage decrease? That is not compatible with the congestion hypothesis.
I want to note that you've chosen the rows to display on the Composite Graph very well. They are enough to draw some conclusions:
Note that Hits per Second correlates perfectly with CPU usage. This means you have a CPU-bound system, and the maximum performance is mostly limited by CPU. This is very important to remember: server resources are consumed by hits, not by active users. You could disable your sleep timers entirely and would still receive the same 80-90 hits/s.
The maximum CPU level is somewhere around 80%, so I assume you run a Windows OS (Win7?) on your machine. In my experience it is impossible to achieve 100% CPU utilization on a Windows machine; it just does not let you spend the last 20%. And if you have reached that maximum, then you are seeing your installation's capacity limit: it simply does not have enough CPU resources to serve more requests. To fight this bottleneck you should either provide more CPU (use another server with better CPU hardware), configure the OS to let you use up to 100% (I don't know whether that is possible), or optimize your system (code, OS settings) to spend less CPU per request.
For the second graph, I'd suppose something is being downloaded via the router, or something is happening on the JMeter machine. "Something happens" means some task is running. This may be your friend who just wanted to run a quick "grep error.log", or some scheduled task. To pin it down, you should look at the router's resources and the JMeter machine's resources during the degradation. There must be a process that swallows CPU/disk/network.
