Why does Apache+mod_php on Windows use so little RAM?

I have Apache + mod_php installed on Windows, but I can't relate its configuration to any of the process-management directives used on Linux.
It only has this:
# ThreadsPerChild: constant number of worker threads in the server process
# MaxRequestsPerChild: maximum number of requests a server process serves
ThreadsPerChild 250
MaxRequestsPerChild 0
regarding the children.
httpd.exe only takes 12MB of RAM, and if I run an "ab" test against a script that does nothing but sleep(10), with 30 concurrent connections, usage only goes up to 30MB and it handles all of them at once. I did the same on my Ubuntu VPS, also with mod_php, and to serve 30 concurrent connections Apache had to start 30 server processes; the VPS basically crashed because RAM usage went over 200MB for the Apache processes alone. So the question is: why is the RAM usage so low on Windows?
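For reference, the test described above corresponds roughly to the following; the script name and URL are placeholders:
# sleep.php is assumed to contain nothing but <?php sleep(10);
# 30 requests in total, all 30 in flight at once
ab -n 30 -c 30 http://localhost/sleep.php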

Related

Why does RabbitMQ become unresponsive?

RabbitMQ 3.7.13 on Microsoft Windows Server 2012 R2 Standard, 32GB RAM, 48GB page file.
Very low utilization: 10 queues, 20 clients, hundreds of messages per day, < 1MB size.
Ran fine for 1 year, then started becoming unresponsive in a consistent pattern:
Restart RabbitMQ Windows Service
RabbitMQ accepts new connections and processes messages
Connections/sockets start ramping from 940 up to max 7280 in ~10 mins
RabbitMQ stops accepting new connections and becomes unresponsive, dashboard shows 500 Internal Server error
When this started happening 2 weeks ago, restarting the service would buy about 24 hours of working time before Rabbit became unresponsive again, but that has progressively decreased until now a restart only provides about 10 minutes of uptime.
Looking at server memory history shows some occasional spikes to max capacity.
What could be causing this? What are some diagnostic techniques to apply?
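No answer is attached here, but for the diagnostics part of the question, a sketch of the usual first checks when connection counts ramp up like this, using standard rabbitmqctl commands (default node assumed; the netstat filter assumes the stock AMQP port 5672):
rem overall node health: memory breakdown, file descriptor and socket limits
rabbitmqctl status
rem which peers hold the thousands of connections, and in what state
rabbitmqctl list_connections peer_host peer_port state
rem channel counts and unacknowledged messages per connection
rabbitmqctl list_channels connection messages_unacknowledged
rem queue depth and consumer counts
rabbitmqctl list_queues name messages consumers
rem which local processes own the AMQP sockets (Windows)
netstat -ano | findstr 5672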

JMeter ERROR "timer expired ...abort" (non-GUI mode) while running a load test up to 3000 users

I was performing a load test with up to 3000 users on an EC2 instance with 16GB RAM. When the user count reached 2000+, I got the error "timer expired, ...abort" in the command prompt. During this period memory and CPU usage were up to 96%.
Is this error caused by memory and CPU usage going up to 96%, or is it something to do with the JMeter scripts? Up to 1000 users it was working properly.
According to this answer to "JVM crashes with no frame specified, only 'timer expired, abort'", the message occurs when a fatal error has happened and the JVM wasn't shut down properly within a 2-minute time frame.
Most probably 3000 users is too much for that VM. Moreover, 96% CPU and RAM usage is too high; I would stop ramping up users when CPU/RAM usage starts exceeding 80% of the total available capacity.
So the next step would be:
Configure JMeter to ramp-up users gradually, i.e. 3000 users in 15 minutes or so
Make sure to follow JMeter Best Practices
Make sure to setup realtime monitoring of CPU, RAM and other resources
Start your test
See how many virtual users were online when resource usage started exceeding 80%, using e.g. the Active Threads Over Time listener
If you reach 3000 users, you should be good to go; if not, you will need to add another machine and run JMeter in Distributed Mode
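For the non-GUI and best-practices points above, a minimal headless run with an explicit heap looks roughly like this; the plan and output paths are placeholders, and depending on the JMeter version the HEAP value may have to be edited in the jmeter/jmeter.bat start script rather than passed through the environment:
# headless run: raw results to results.jtl, HTML dashboard generated into ./report
HEAP="-Xms1g -Xmx4g" jmeter -n -t testplan.jmx -l results.jtl -e -o report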

Why is the latency of requests to Jetty on EC2 Linux so high?

I'm running jetty-distribution-9.3.0.v20150612 on Java(TM) SE Runtime Environment (build 1.8.0_51-b16) over AWS EC2 m1.small Linux machine.
It communicates with mobile apps at a mean rate of 36 hits per minute, with about 60% of the traffic using HTTP/2.0; mean CPU utilisation is ~15% at peak and network I/O stays around 5 MB per minute, so it doesn't have any resource choking due to traffic.
Jetty's AsyncNCSARequestLog latency logging shows an average latency of around 2000 ms. As explained in this post, latency is calculated as (now - request.getTimeStamp()), so it does not separate the time Jetty took to handle the request from the time it took to establish the HTTP connection.
How do I analyse the request latency in order to find the bottleneck?
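One way to start splitting that 2000 ms up, not covered in the original post, is to time the phases from a client with curl, so that DNS lookup, TCP connect, TLS handshake, time to first byte and total time are reported separately (the URL is a placeholder):
curl -o /dev/null -s -w "dns %{time_namelookup}  connect %{time_connect}  tls %{time_appconnect}  ttfb %{time_starttransfer}  total %{time_total}\n" https://example.com/api/endpoint
If the gap between connect and ttfb dominates, the time is going into server-side handling rather than connection setup; if connect itself is slow, the problem is more likely the network path or the m1.small instance than Jetty.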

Kibana 4 RAM consumption

I installed Kibana 4.3.0 on my VPS, which has one CPU core and 2GB of RAM and runs Ubuntu 14.04.3.
Kibana and my dashboard work as expected, but unfortunately it consumes so much RAM that the VPS begins to swap and has a very high system load.
There is not much data going into ES (about 192 temperature entries per day), so Kibana 4 should not need much memory.
Is there any way to configure Kibana 4 to consume less RAM, e.g. 256MB at most?
In this thread I found a solution for the memory consumption: https://github.com/elastic/kibana/issues/5170
It seems to be a Node.js problem. Changing the last line of the bin/kibana start script to
exec "${NODE}" --max-old-space-size=100 "${DIR}/src/cli" ${@}
as suggested in the thread helped.
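One quick way to confirm the cap is being honoured after a restart is to look at the resident size of Kibana's node process (a sketch; the RSS column is in kilobytes):
ps -C node -o pid,rss,args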

Slow Magento Enterprise

We are using Magento Enterprise Edition on Nginx with FPC, on dedicated servers with ample RAM and CPU. Everything runs fine with 60-70 visitors; however, during high traffic, like over 200 active visitors, we start to have problems. During peak traffic our CPU is still under 10%, with 40% free memory. We have a dedicated app and DB server.
What could be wrong? Could this be a network issue? What are the chances that there is a problem with the app server, or that the code base is not optimized, given that CPU is under 10% with ample RAM?
Steve
Edit:
I am running a 32-core app server with 64GB of RAM, with Nginx, PHP-FPM and FastCGI. Upon checking the logs I found that PHP-FPM logs the following error during peak hours:
[WARNING] [pool www] child 26196, script '/var/www/magento/index.php' execution timed out (600.011284 sec), terminating
I have 32 worker processes along with worker_connections 1024;
A CDN is already set up and the network uses a 1G connection.
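No answer is attached here, but given execution-timeout warnings on a nearly idle CPU, the PHP-FPM pool limits and timeouts are the usual first place to look. A sketch of the relevant directives follows; paths and values are purely illustrative, not recommendations:
; pool config, e.g. /etc/php-fpm.d/www.conf (path and values illustrative)
pm = dynamic
pm.max_children = 64              ; when every child is busy, new requests queue and look slow
pm.max_requests = 500
request_terminate_timeout = 60s   ; the 600.011 sec in the warning suggests this is currently 600s
request_slowlog_timeout = 10s     ; dump a stack trace for any request slower than 10s
slowlog = /var/log/php-fpm/www-slow.log

# nginx side (illustrative): keep the FastCGI timeout consistent with the pool
fastcgi_read_timeout 60s;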
