Goliath poor performance: short response time, long wait time - ruby

So I'm using Goliath to develop an API, /list/users. It is very simple: it just queries MySQL and returns the result.
A single request takes a response time of 53.84 ms, but if I load test the server with ab using 10 concurrent threads, I only get 20 requests/second.
Meanwhile, when I make the same request in Chrome, I see a wait time of 400 ms.
What is wrong? How can I improve it?
I also created a Node.js version of /list/users. A single request also takes about 50 ms, but under the same load test I get 130 requests/second, and the wait time is only about 10 ms.
Did I do something wrong? Is there any setting that needs to be changed for Goliath?
I would also like to know why Node.js can serve more requests/second when the response time of a single request is the same.

Did you run Goliath in production mode? In development mode it does code reloading, which negatively impacts performance. Passing -e prod will put the server in production mode.
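For reference, a minimal Goliath endpoint and the way it might be started in production mode could look roughly like this (a sketch only; the class name, port, and payload are made up for illustration):

```ruby
require 'goliath'

# Minimal /list/users-style endpoint (illustrative only).
class ListUsers < Goliath::API
  def response(env)
    # Query MySQL and build the real payload here.
    [200, { 'Content-Type' => 'application/json' }, '{"users": []}']
  end
end
```

Started with something like: ruby list_users.rb -sv -e prod -p 9000, where -e prod is the production-mode switch mentioned above.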

Related

JMeter Connection Timeout after 15 Min Even though increasing the Timeout is not helping

When the test is scheduled in JMeter, one of the HTTP requests times out after exactly 15 minutes.
Even after increasing the timeout to 30 minutes in the HTTP Request Defaults > Advanced tab, in the individual HTTP Request samplers, and in the jmeter.properties file (httpclient timeout), it still doesn't help: that particular HTTP request still times out after exactly 15 minutes. I have tried this on both JMeter 4.0 and 5.0, and I need to know why the custom timeout I specified is not being honored.
By default the JMeter HTTP Request sampler doesn't have any timeouts, so it will wait forever.
So if you remove all the timeouts, you disable them completely. The side effect is that if your application never responds, your test will never end.
Given the above, my expectation is that there is a 15-minute timeout somewhere on the server side; check your system under test's configuration and any middleware (reverse proxies, load balancers, etc.).
A response time over 15 minutes is very suspicious in itself. Perhaps the application is quite exotic, but I can hardly imagine a user having to wait 15+ minutes before their next action. I would suggest integrating profiling and/or APM tools with your load test; that way you will get the full picture of what's going on on the server side during your testing. If you don't have any monitoring tools in place, you can consider using the JMeter PerfMon Plugin.

JMeter response time decreases after the first run of test plan

I have a test plan set up which I am using on my web application. It is pretty simple: a user logs in and then navigates through some of the pages. Everything works fine, except that whenever I run the test plan for the first time (say, the first time after restarting the web application server) the average response time recorded is around 18000 ms, whereas in subsequent runs it is always around 3000 ms until I restart the server. I just want to know why this is happening. Pardon me, I am a newbie to this; thanks in advance.
You can start by excluding some parts of the test plan and trying again. If the response time does not decrease, then look at your web application server's thread pool size. If it is very small and your JMeter test plan needs more threads than that, the application server has to create new threads. If the response time is still high after you increase the minimum thread pool size on the app server, then you need to look at what your test plan actually does. By the way, I would like to have a look at your test plan if you can share it.

load duration of web page differs

For testing purposes I measure the time it takes for parsing, DB access, posting and rendering of one of my PHP web pages in the browser (using Firebug's network tool). When I press F5 after clearing the cache via "Delete recent data" it takes about 5 seconds; when I hit Ctrl-F5 it takes about 20 seconds.
Isn't that the same thing? What's the difference between them? What is the recommended way to test the performance of PHP code and DB access?
Thank you very much in advance ...
There could be any number of reasons, all of which have to do with the implementation of Firebug.
You cannot test performance on the client side, since clients differ a lot and there is also network latency, which is even harder to rule out.
You should do this all on the server side: start a timer when the request reaches your web server and stop it when the response leaves. If that is a bit difficult, then in the script itself you can run a wrapper that starts a timer, requires the script you want to measure, and stops the timer.
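For illustration, a minimal sketch of that wrapper idea (the question concerns PHP, but the pattern is the same in any language; this version is in Ruby, and run_page is a hypothetical stand-in for the real page logic):

```ruby
# Server-side timing wrapper (sketch). run_page is a hypothetical stand-in
# for the actual work: parsing, DB queries, rendering, and so on.
def run_page
  sleep 0.2 # placeholder for the real page logic
end

start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
run_page
elapsed_ms = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000

puts format('server-side time: %.2f ms', elapsed_ms)
```

Measured this way, the number excludes the browser and the network, so repeated runs are directly comparable.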

browser implication when ajax call every 3 sec

We would like to check every 3 seconds whether there are any updates in our database, using jQuery $.ajax. The technology is clear, but are there any reasons not to fire so many AJAX calls (browser, cache, performance, etc.)? The web application runs for roughly 10 hours per day on every client.
We are using Firefox.
AJAX calls have implications not so much on the client side (browser, etc.) as on the server side. For example, every AJAX call is a hit on the server: more bandwidth consumption and a higher number of server requests, which in turn increases server load. AJAX is really meant to increase client friendliness at the cost of server-side implications.
Regards,
Ravi
You should think carefully before implementing infinite repeating AJAX calls with an arbitrary delay between them. How did you come up with 3 seconds? If you're going to poll your server this way, you need to make the requests as infrequent as possible. Here are some things to think about:
Is the data you're fetching really going to change that often?
Can your server handle a request every 3 seconds? How long does the operation take for a single request?
Could you increase the delay after inactivity or guess based on previous server responses how long the next delay should be?
Can you stop the polling completely when the window loses focus, and restart it when it's in the foreground again?
If a user opens the same page in a website 10 times, your server should recognise this and throttle its responses, either using a cookie with a unique value in it (recommended) or based on the client IP address.
Above all, instead of polling, consider using HTML5 WebSockets to "push" data to the client; most modern browsers support this. Several frameworks are available that will fall back to polling if WebSockets are not available; one excellent .NET example is SignalR.
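The SignalR example is .NET-specific; as a rough sketch of the same push idea in Ruby (assuming the em-websocket gem fits the stack; the port, interval, and payload are made up), the server keeps track of connected clients and sends them data when something changes, instead of waiting to be polled:

```ruby
require 'em-websocket'

# Push-instead-of-poll sketch using the em-websocket gem (an assumption;
# the port and payload are illustrative). Clients connect once and listen;
# the server sends data whenever it has something new.
EM.run do
  clients = []

  EM::WebSocket.run(host: '0.0.0.0', port: 8080) do |ws|
    ws.onopen  { clients << ws }
    ws.onclose { clients.delete(ws) }
  end

  # Stand-in for "the database changed": here a timer fires every 3 seconds,
  # but in a real app you would broadcast only when an update actually happens.
  EM.add_periodic_timer(3) do
    clients.each { |ws| ws.send('{"updated": true}') }
  end
end
```

Compared with client-side polling, the check happens once on the server rather than once per client, and clients receive data only when it is actually sent to them.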
I've seen a lot of applications making a request every 5 seconds or so, for instance a remote control (web player) or a chat, so that should not be a problem for the browser.
A good practice is to wait for a response before making a new request, which means not firing the requests with setInterval, for instance.
(In case the user loses their connection, that prevents opening too many connections at once.)
Also verify that all the calculations associated with one response are finished before the next response arrives.
And if you have access to it on the server side, configure your server to set the HTTP header Connection: Keep-Alive, so you won't add too much TCP overhead to each of your requests. That can speed up small requests a lot.
The last point, of course, is verifying that your server is able to handle that many requests.
You are checking for changes every 3 seconds, so traffic increases because you are fetching data continuously at short intervals. It may also steadily increase memory usage on the browser side. Since you need to check for updates made in the database, you could go for alternatives like Sheepjax, Comet or SignalR. (SignalR generally broadcasts the data to all users, and Comet needs a license.) Hope this helps.

Is there any reason not to reduce Ping Maximum Response Time in IIS 7

IIS includes a worker process health check "ping" function that pings worker processes every 90 seconds by default and recycles them if they don't respond. I have an application that is chronically putting app pools into a bad state, and I'm curious whether there is any reason not to lower this time to force IIS to recycle a failed worker process more quickly. Searching the web, all I can find is people who are increasing the time to allow for debugging. It seems like 90 seconds is far too high for a web application, but perhaps I'm missing something.
Well, the obvious answer is that in some situations requests take longer than 90 seconds to return from the worker process. If you can't imagine a situation where this would be appropriate, then feel free to lower it.
I wouldn't recommend going much lower than 30 seconds; I can see situations where you get into recycle loops. However, you can do testing and see what makes sense in your situation. I would recommend Siege for load testing to see how your application behaves.
