Yii2 basic application page - load testing with Apache JMeter freezes server - Windows

I configured the Yii2 basic application template on a Windows server (dual-core processor, 8 GB RAM), with no extra code written beyond the installation. When testing with Apache JMeter using 100 concurrent users over 10 minutes, CPU usage hits 99% and the server freezes. A plain static PHP page placed outside the framework works without any issues under the same test, taking around 2-3 percent of CPU utilisation.

If you're allowing 100 concurrent PHP processes to run on a 2-core CPU, this is more likely an issue with your server configuration - each process gets less than 1% of your CPU, which makes everything really slow. You should limit the number of PHP processes (in the php-fpm config, for example - see the sketch below) and queue the rest at the webserver level: it is better to process 20 concurrent requests at the same time and do it fast than to process 100 and do them all slowly.
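A minimal sketch of that cap, assuming a php-fpm pool config (the file path and the numbers are assumptions; tune pm.max_children to your cores and RAM):

    ; /etc/php-fpm.d/www.conf - cap the number of concurrent PHP workers
    ; so that 100 JMeter users queue at the webserver instead of
    ; thrashing 2 cores
    pm = static
    pm.max_children = 16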
You should start with the official guide's tutorial on Yii optimization:
- Definitely disable debug mode.
- Use a more efficient backend for the cache (like APCu or redis/memcache) and for sessions - see the config sketch after this list.
- Disable the Yii autoloader and use the optimized autoloader from composer: https://www.yiiframework.com/doc/guide/2.0/en/concept-autoloading#using-other-autoloaders
- You may also look at the application prepared for the basic benchmark and compare its configs.
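For the cache/session item above, a sketch of what the swap could look like in the basic template's config (the classes are standard Yii2 components; the Redis lines assume the yiisoft/yii2-redis extension is installed):

    // config/web.php (fragment inside the returned $config array) -
    // faster cache and session backends
    'components' => [
        // APCu: in-process memory cache, good on a single server
        'cache' => [
            'class' => 'yii\caching\ApcCache',
            'useApcu' => true,
        ],
        // Alternative: Redis-backed cache and sessions
        // 'cache'   => ['class' => 'yii\redis\Cache'],
        // 'session' => ['class' => 'yii\redis\Session'],
    ],

For the composer side of the autoloader item, running composer dump-autoload -o generates the optimized classmap.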

As per Patrick's comment, you're comparing a simple PHP page with a more complex framework.
There are tons of possible reasons for your issue:
- misconfiguration of Yii2
- an unrealistic JMeter test - you don't give any information on it
- an issue in Yii2 itself

Related

How many users should an EC2 Micro Instance be able to handle with only an nginx server?

I have an iOS social app.
The app talks to my server fairly often to do updates & retrieval, mostly small text as JSON. Sometimes users upload pictures, which my web server then uploads to an S3 bucket. No pictures or any other type of file are retrieved from the web server.
The EC2 Micro Ubuntu 13.04 instance runs PHP 5.5, PHP-FPM and NGINX. Caching is handled by ElastiCache using Redis, and the database connection goes to a separate m1.large MongoDB server. The content can be fairly dynamic, since the newsfeed is dynamic.
I am a total newbie in regards to configuring NGINX for performance and I am trying to see whether I've configured my server properly or not.
I am using Siege to test my server load, but I can't find any statistics on how many concurrent users / page loads my system should be able to handle, so I don't know whether I've done something right or wrong.
What amount of concurrent users / page load should my server be able to handle?
If I can't get hold of statistics from experience, what should count as easy, medium, and extreme load for my micro instance?
I am aware that there are several other questions asking similar things. But none provide any sort of estimates for a similar system, which is what I am looking for.
I haven't tried nginx on a micro instance, for the reasons Jonathan pointed out: if you consume your CPU burst you will be throttled very hard and your app will become unusable.
If you want to follow that path, I would recommend:
- Try to cap CPU usage for nginx and php5-fpm to make sure you do not go over the threshold for CPU penalties. I have no idea what that threshold is. I believe the main problem with a micro instance is maintaining consistent CPU availability; if you go over the cap, you are screwed.
- Try to use fastcgi_cache if possible - you want to hit php5-fpm only when really needed. See the sketch after this list.
- Keep in mind that gzipping on the fly will eat a lot of CPU - and I mean a lot, for an instance that has almost no CPU power. If you can use gzip_static, do it; but I believe you cannot.
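A hedged nginx sketch for the fastcgi_cache item (the paths, zone name, and TTLs are assumptions; fastcgi_cache_path belongs at the http level):

    # http level: a 10 MB key zone, cached responses stored on disk
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=PHPCACHE:10m inactive=60m;

    # inside the server block's PHP location:
    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        include fastcgi_params;
        fastcgi_cache PHPCACHE;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10m;  # keep successful responses for 10 minutes
    }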
As for statistics, you will need to gather them yourself. I have statistics for m1.small but none for micro. Start by making nginx serve a static HTML file of a very few KB. Run siege in benchmark mode with 10 concurrent users for 10 minutes and measure. Make sure you are sieging from a stronger machine:
siege -b -c10 -t600s 'http://private-ip/test.html'
You will probably see the effects of the CPU throttle just by doing that! What you want to keep an eye on is transactions per second and how much throughput nginx can serve. Keep in mind that the m1.small max is 35 MB/s, so the micro will manage even less.
Then move on to a JSON response. Try gzipping. See how many concurrent requests per second you can get.
And don't forget to come back here and report your numbers.
Best regards.
Micro instances are unique in that they use a burstable profile. While you may get up to 2 ECUs of performance for a short period of time, once the burst allotment is used up the instance is limited to around 0.1 or 0.2 ECU. Eventually the allotment resets and you can get 2 ECUs again.
Much of this is going to come down to how CPU/Memory heavy your application is. It sounds like you have it pretty well optimized already.

ASP.NET MVC: lost in finding the bottleneck

I have an ASP.NET MVC app which accepts file uploads and does result polling using SignalR. The app is hosted on a Prod server with IIS7, 4 GB RAM and a two-core CPU.
The app works perfectly on the Dev server, but when I host it on the Prod server, with about 50,000 users per day, the app becomes unresponsive after five minutes of running. Page request times increase dramatically, and it takes about 30 seconds to load one page. I tried recording every MvcApplication.Application_BeginRequest event call and got 9000 hits in 5 minutes (about 30 requests per second). I'm not sure whether that is an acceptable number of hits for an app like this.
I have used ANTS Performance Profiler (not useful for profiling the Prod app - slow and it eats all the memory) to profile the code, but the profiler does not show any time-delay issues in my code or my MSSQL queries.
I have also tried monitoring for CPU and RAM spikes, but didn't find any. The CPU sometimes goes to 15% but never higher, and memory usage is normal.
I suspect there is something wrong with the request or thread limits in ASP.NET/IIS7, but I don't know how to profile that.
Could someone suggest any profiling solutions that could help in this situation? I've been hunting this problem for two weeks already without any result :(
You may try MiniProfiler, and more specifically the MiniProfiler.MVC3 NuGet package, which is created specifically for ASP.NET MVC applications. It will show you all kinds of useful information, such as the time spent in the different methods executed during the request.

JMeter Running Very Slowly

I'm using JMeter to test a Java application written by a 3rd party vendor using Versata Logic Studio.
I've got some steps in my test plan that submit a request using some post data and then receive a response back:
Response too large to be displayed. Size: 445817 > Max: 204800, Start of message:
{"header":{"action":"300","arguments":{"tabid":"Header","divid":"ActgDisb,Vendor,BusinessType...ETC
This seems fine (that's roughly 435 KB), except that the step takes far longer than clicking through the same pages in a browser. In the browser it takes 5 seconds at most; in JMeter it takes 2 minutes. The CPU also sits at 60% for just one thread during these steps.
Any ideas on speeding this up? We're struggling to get enough slaves running, and this certainly isn't helping.
The message displayed shows that you are using a View Results Tree listener during your load test. JMeter caps the size of responses displayed in this component; the limit can be changed by adding a property to the user.properties file:
view.results.tree.max_size, which defaults to 204800 bytes (200 KB).
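For example (the 1 MB value is an arbitrary choice):

    # user.properties - raise the View Results Tree display limit (bytes)
    view.results.tree.max_size=1048576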
BUT never, ever use this component during a load test: it requires a lot of resources (memory and CPU). It must only be used during the scripting phase.
You can read this article that gives tips on JMeter configuration and tuning:
http://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
Disclaimer: I wrote it and it's my company, but IMHO I think it's worth reading :-)
Also read this:
http://jmeter.apache.org/usermanual/component_reference.html
If your JMeter script does a lot of file I/O, then putting those files in RAM will significantly improve the speed. You can use an app such as ImDisk (freeware) to create a virtual disk in RAM. Make sure you have more than 4 GB of RAM.
In our case, we send around 8,000 small files per user. With 200 users on each system, JMeter is reading 1.6 million (16 lakh) files. This was the bottleneck. With the RAM disk, the file read speed increased 20 times, which let JMeter run at full speed.
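For reference, a sketch of creating such a RAM disk with ImDisk on Windows (the size and drive letter are arbitrary; requires the ImDisk toolkit):

    rem Create a 2 GB RAM disk mounted as R: and format it as NTFS
    imdisk -a -s 2G -m R: -p "/fs:ntfs /q /y"

Then point the file paths in the test plan (e.g. in a CSV Data Set Config) at the new drive.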
If you're ready to move down to the TCP level, there is the HTTP Raw Request sampler, which allows memory-efficient operation for huge uploads/downloads. Read its manual carefully; there are some JMeter properties for tuning its performance.
However, my experience is that you may have a situation where Java itself is a bad technology for running load tests. I suggest you give Raw Request a couple of tries and, if it fails, look for a C/C++ tool for performance tests.

Performance Zend Soap Service on LAMP

I have developed 2 SOAP web services in my Zend application. In my development environment (MAMP on a Mac, 8 GB RAM, i7 processor) the performance is really good. When I deploy it to my Ubuntu LAMP server (1 GB RAM, 1 processor) the performance drops a lot: it is more than 10 times slower.
I have a Java client (Eclipse client auto-generated from the WSDL). The problem is that the first call is always 4 times slower than the second one. This goes for both my MAMP and my LAMP setups.
MAMP
- First call: 400 ms
- Second call: 100 ms
LAMP
- First call: 2000 ms
- Second call: 400 ms
I simply duplicate the request, so the request is exactly the same for the first and the second call.
If I manually run the LAMP client several times, the first call comes down to around 900 ms. It feels as if the Zend application has to "start up" something during the first call.
Does anyone have any clue how I can get around this? What I've tried:
- Made sure the WSDL is cached
- Installed XCache (not shipped with LAMP by default)
- Read tuning tutorials
Thanks in advance!
This performance issue often occurs when you use Zend_Soap_AutoDiscover for WSDL generation. If that is the case for your code, you should consider storing the generated WSDL as a separate XML file and loading it in the Zend_Soap_Server constructor.
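A minimal sketch of that split, assuming ZF1's Zend_Soap_AutoDiscover / Zend_Soap_Server APIs (the paths and the MyService class are hypothetical):

    <?php
    // One-off step (e.g. at deploy time): generate the WSDL once, save it to disk
    $autodiscover = new Zend_Soap_AutoDiscover();
    $autodiscover->setClass('MyService');
    $autodiscover->dump(APPLICATION_PATH . '/../data/myservice.wsdl');

    // Per request: serve from the pre-generated file, no re-discovery
    ini_set('soap.wsdl_cache_enabled', '1'); // let PHP cache the parsed WSDL too
    $server = new Zend_Soap_Server(APPLICATION_PATH . '/../data/myservice.wsdl');
    $server->setClass('MyService');
    $server->handle();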
This looks like a problem with the opcode cache. Without an opcode cache Zend is really slow, and it gets a nice boost when one is used.
I'd look for Zend Optimizer, eAccelerator, or similar.
That would also explain why it slows down after some idle time (classes/files are wiped from the I/O cache).

My Drupal site uses approx 30 MB per page load (nodes & user profiles). Acceptable or not?

I am using Drupal, and my website uses approx 30 MB per page load for nodes and user profiles. My website has around 150 contributed modules in addition to a few optional core modules, but most of them are small and installed to improve the user experience.
My PHP memory limit is 128 MB.
Is 30 MB per page acceptable? And how many page loads can 128 MB handle easily?
Any idea?
Honestly, at 30 MB your app is just sipping memory. The PHP memory limits are set pretty low.
As far as how many "page loads can be handled by 128 MB" of memory: that's not really a valid question. When a request comes in, Apache (or whatever server you're using) hands it to mod_php or FastCGI, and your PHP code is interpreted, compiled, run, and then exits. The "application" doesn't act like a daemon waiting for requests to come in, so the memory it consumes is used for the duration of the request and then released for other requests/processes.
That 128 MB limit is per request. That means that as long as you have enough memory (and Apache child processes, etc.) you can handle additional requests; for example, 20 concurrent requests at 30 MB each would need around 600 MB. If you want to see how your application performs under load, check out apachebench.
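For instance, a minimal apachebench run (the URL and the numbers are placeholders):

    # 1000 requests total, 20 concurrent, against one representative page
    ab -n 1000 -c 20 http://your-drupal-site.example/node/1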
