I have CollabNet Subversion Edge installed on my Windows Server 2008 R2 Standard (x64) server. I am using only CollabNet Subversion with Apache, which I configured manually.
The SVN version is 1.8.13 and the Apache version is 2.4.12.
Authentication: Active Directory (AD)
CPU: 4 cores
RAM: 16 GB
Problem statement: the server keeps going down because CPU usage reaches 100%. When I checked which process was causing the issue, I found that httpd.exe was consuming all the CPU; as soon as I kill it, CPU usage drops to zero.
So far I have not been able to identify the exact root cause. However, in the error log I found one line that says [mpm_winnt:error] [pid 3448:tid 3040] AH00326: Server ran out of threads to serve requests. Consider raising the ThreadsPerChild setting. After going through the Apache documentation I learned that the MPM (multi-processing module) controls the number of threads per child, so I made the change below in my httpd.conf:
AcceptFilter http none
AcceptFilter https none
<IfModule mpm_winnt_module>
ThreadsPerChild 200
MaxConnectionsPerChild 10000
</IfModule>
I also made one more change after reading some web pages which say that LDAP caching can also cause CPU usage to reach 100%, so I disabled the cache with the line below:
LDAPSharedCacheSize 0
After the above two changes my server ran fine for one month.
But it looks like the change has a side effect: I got a complaint from my users that the first fetch from the repository each day is slow. I then removed LDAPSharedCacheSize 0 from my httpd.conf, but the very next day the CPU again reached 100%.
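A possible middle ground, sketched below with mod_ldap's documented directives (the sizes and TTLs are illustrative guesses, not values tuned for this server), is to keep the cache enabled but small and short-lived instead of disabling it outright:

```apache
# Keep mod_ldap caching on, but bounded and short-lived (illustrative values).
LDAPSharedCacheSize 500000    # shared-memory cache size, in bytes
LDAPCacheEntries 1024         # max cached search/bind results
LDAPCacheTTL 600              # seconds a search/bind result stays valid
LDAPOpCacheEntries 1024       # max cached compare operations
LDAPOpCacheTTL 600            # seconds a compare result stays valid
```

This way repeated AD binds within the TTL are served from the cache (avoiding the CPU-heavy re-authentication), while stale entries still expire within minutes.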
Can anybody tell me whether my configuration is wrong, or what I need to change in my httpd.conf?
Related
I'm running an IoTivity client that rediscovers available resources every 20 seconds. At the start it works fine; however, after around two and a half minutes the client no longer discovers the resource. When I restart the client it still doesn't find it, but when I restart the server the client rediscovers it, though again only for about 2:30 minutes. Why is this happening and how can I fix it?
I'm using IoTivity 1.2.1 and I'm running the server and the client on different embedded devices.
If you mean the "classic" IoTivity, not only is 1.2.1 rather old (the last release was 1.4), but that project has largely been abandoned, except for historical interest, in favor of iotivity-lite.
I have set up a simple load balancer using Apache 2.4 in front of 2 Tomcat servers. I have noticed that the BUSY column on the balancer-manager page never decreases; it keeps increasing until both members reach around 200, at which point performance becomes very sluggish.
I cannot find any documentation detailing the balancer-manager frontend, but I am guessing the BUSY column refers to the number of open connections to the balancer members. Is that right?
Does my Apache LB fail to close idle connections, and keep opening new ones until it exhausts its resources?
Please guide me on this. I have to restart the Apache service every week to reset the BUSY column and make the LB run smoothly again.
Server running on Windows 2003 + Apache 2.4.4
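If the BUSY count really is connections that are never reclaimed, mod_proxy's worker parameters can cap and recycle them. A hedged sketch (the BalancerMember hostnames, ports, and values are placeholders, not taken from this setup):

```apache
<Proxy balancer://mycluster>
    # ttl closes backend connections idle longer than 60 s;
    # max caps connections per worker; connectiontimeout bounds connect attempts.
    BalancerMember http://tomcat1:8080 ttl=60 max=100 connectiontimeout=5
    BalancerMember http://tomcat2:8080 ttl=60 max=100 connectiontimeout=5
</Proxy>
ProxyPass        /app balancer://mycluster/
ProxyPassReverse /app balancer://mycluster/
```

With ttl set, idle backend connections are closed instead of accumulating, so the BUSY column should fall back down between traffic bursts rather than ratcheting up until a restart.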
Using localhost and Tomcat 7, I'm seeing between 600-800ms per request in Chrome Developer tools for a specific webapp. Requests are JS files, CSS files, images or the initial server response. Some responses are less than 1KB, others are over 100KB.
As a result, it's taking around 10 seconds to load one page of the webapp. When I load the same webapp on our production server, it's taking less than 1 second to load an entire page.
I'm not sure where to continue debugging the issue...
- I've ruled out it being a browser issue by testing in Safari too.
- I've turned it off and on again, which reduced responses to 500-600ms overall.
- I've cleared out my log files.
- I've ruled out the webapp's frontend entirely by hitting a resource directly, e.g. http://ts.xyz.com:9091/1.0/toolsList/javascript/toolsList.js or http://ts.xyz.com:9091/awake
- I've tested another webapp, and that performs lightning-quick.
So it has to be this particular app, and it has to be something local.
I've seen such behaviour a long time ago, when the web server (Apache httpd back then) was configured to do DNS lookups for its logs; these took an awfully long time, especially when an IP could not be resolved. As it doesn't make sense for a localhost app to be orders of magnitude slower (especially when you're talking about serving static resources), I'd check for any network-related issues: database connections, logging configurations, DNS lookups, TLS server trust issues (with backends, database, LDAP or others).
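One quick way to test the reverse-DNS theory is to time a lookup yourself; a minimal Python sketch (the IP address is a placeholder, substitute one of your clients'):

```python
import socket
import time

def timed_reverse_lookup(ip):
    """Time a reverse DNS lookup for ip; return (hostname or None, seconds)."""
    start = time.monotonic()
    try:
        host = socket.gethostbyaddr(ip)[0]
    except OSError:  # socket.herror/gaierror are OSError subclasses
        host = None
    return host, time.monotonic() - start

# A lookup that takes seconds (or fails after a long hang)
# points at DNS as the culprit.
name, secs = timed_reverse_lookup("127.0.0.1")
print(f"{name!r} resolved in {secs:.3f}s")
```

If the same lookup is instant on the fast production box but hangs on the slow machine, you've found your difference.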
I can't decide if I add this as "if everything else fails" or rather add this as "but first try this:"... you decide:
Compare the setup of your production server with your development server (localhost) and make extra extra extra sure that there's no meaningful difference.
Help me find the reason why there is a delay of about 2 minutes before any page starts to load, after which the site loads abruptly. I haven't made any changes to the site all year. Everything was fine before; this started 2 weeks ago. Site: www.proudandcurvy.co.uk
It's probably some kind of DNS timeout. Is the web server configured to do DNS lookups? You really want to turn that off.
You use Apache HTTPD; the configuration option you should be looking for is HostnameLookups, and it should be set to Off.
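A minimal httpd.conf fragment for this (note that HostnameLookups is Off by default in stock Apache, so check whether something re-enabled it):

```apache
# Log client IP addresses instead of doing a reverse DNS lookup per request.
HostnameLookups Off
```

If you still want hostnames in reports, the logresolve utility shipped with httpd can resolve the logged IPs offline, after the fact, without slowing down request handling.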
I am scanning some servers with Nessus and there is something I do not understand.
Nessus detects that the web server is Apache/2.2.16 (on Debian). If you go to http://httpd.apache.org/security/vulnerabilities_22.html you can see a lot of vulnerabilities that affect this Apache version.
However, Nessus did not detect anything related to these vulnerabilities. For example, plugin 50070 "Apache 2.2 < 2.2.17 Multiple Vulnerabilities" did not fire.
I have checked that this plugin and all the others available are activated (I did a complete scan with all plugins enabled).
So my question is: why did Nessus not notify me that I am running an old Apache version with the vulnerabilities listed at http://httpd.apache.org/security/vulnerabilities_22.html? I think that notifying me with
important: Range header remote DoS CVE-2011-3192
A flaw was found in the way the Apache HTTP Server handled Range HTTP headers. A remote attacker could use this flaw to cause httpd to use an excessive amount of memory and CPU time via HTTP requests with a specially-crafted Range header. This could be used in a denial of service attack.
is important.
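For context, the Apache project's published mitigation for CVE-2011-3192 was a pair of httpd.conf rules that strip abusive Range headers; the sketch below follows that advisory, but verify it against the advisory text before deploying:

```apache
# Drop the Range header from requests carrying 5 or more ranges,
# the signature of the CVE-2011-3192 "Apache Killer" DoS.
SetEnvIf Range (,.*?){5,} bad-range=1
RequestHeader unset Range env=bad-range

# Optional: log the rejected requests (log path is illustrative).
CustomLog logs/range-CVE-2011-3192.log common env=bad-range
```

Legitimate clients rarely send more than a handful of ranges in one request, so the 5-range threshold blocks the attack while leaving normal partial downloads intact.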
Thanks in advance :)
I recommend reducing your performance settings (max simultaneous checks per host, max simultaneous hosts per scan) so that you get more accurate scan results.
Nessus does not know how to look for this vulnerability.