I am scanning some servers with Nessus and there is something I do not understand.
Nessus detects that the web server is Apache/2.2.16 (on Debian). If you go to http://httpd.apache.org/security/vulnerabilities_22.html you can see a lot of vulnerabilities that affect this Apache version.
However, Nessus did not detect anything related to these vulnerabilities. For example, plugin 50070 "Apache 2.2 > 2.2.17 Multiple Vulnerabilities" did not fire.
I have checked that this plugin and all the available ones are enabled (I did a complete scan with all plugins enabled).
So my question is: why did Nessus not notify me that I am running an old Apache version with the vulnerabilities listed at http://httpd.apache.org/security/vulnerabilities_22.html? I think that notifying me with
important: Range header remote DoS CVE-2011-3192
A flaw was found in the way the Apache HTTP Server handled Range HTTP headers. A remote attacker could use this flaw to cause httpd to use an excessive amount of memory and CPU time via HTTP requests with a specially-crafted Range header. This could be used in a denial of service attack.
is important.
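(For context, the flaw described in that CVE is triggered by a single request carrying a large number of overlapping byte ranges. A sketch of such a request is shown below; the host name is illustrative and the real exploit repeats the range pattern hundreds of times.)

```
HEAD / HTTP/1.1
Host: www.example.com
Range: bytes=0-,5-0,5-1,5-2,5-3,5-4
Accept-Encoding: gzip
Connection: close
```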
Thanks in advance :)
I recommend reducing your performance settings (Max simultaneous checks per host, Max simultaneous hosts per scan) so that the scan gives you more accurate results.
Nessus does not know how to look for this vulnerability.
I have an app deployed on a WildFly server on the Jelastic PaaS. The app works fine with a few users. I'm trying to do some load tests with JMeter, in this case calling a REST API 300 times in 1 second.
This leads to an error rate of around 60% on the requests, all of them 503 (service temporarily unavailable). I don't know what I have to tweak in the environment to get rid of those errors. I'm pretty sure it's not my app's fault, since it is not heavy and I get the same results even when testing the load on the index page.
The topology of the environment is simply 1 WildFly node (with 20 cloudlets) and a Postgres database with 20 cloudlets. I had fancier topologies, but while trying to narrow the problem down I cut out the load balancer (NGINX) and the multiple WildFly nodes.
Requests via the shared load balancer (i.e. when your internet-facing node does not have a public IP) face strict QoS limits to protect platform stability. The whole point of the shared load balancer is that it's shared by many users, so you can't take 100% of its resources for yourself.
With a public IP, your traffic goes straight from the internet to your node and therefore those QoS limits are not needed or applicable.
As stated in the documentation, you need a public IP for production workloads (a load test should be considered 'production' in this context).
"I don't know what things I have to tweak in the environment to get rid of those errors"
We don't know either, and as your question doesn't provide a sufficient level of detail we can only come up with generic suggestions like:
Check the WildFly log for any suspicious entries. HTTP 503 is a server-side error, so it should be logged along with a stack trace that will lead you to the root cause
Check whether the WildFly instance(s) have enough headroom to operate in terms of CPU, RAM, etc.; this can be done using e.g. the JMeter PerfMon Plugin
Check JVM- and WildFly-specific JMX metrics using JVisualVM or the aforementioned JMeter PerfMon Plugin
Double-check the Undertow subsystem configuration for any connection/request/rate-limiting entries (see the sketch after this list)
Use a profiler tool like JProfiler or YourKit to see what the slowest functions, largest objects, etc. are
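On that Undertow point: a rate-limiting entry typically looks like the fragment below. This is only a sketch; the filter name and numbers are illustrative, and the element and attribute names should be verified against the undertow subsystem schema of your WildFly version. If such a filter exists with low limits, it is a likely source of 503s under load:

```xml
<!-- standalone.xml, undertow subsystem (illustrative fragment) -->
<server name="default-server">
    <http-listener name="default" socket-binding="http"/>
    <host name="default-host" alias="localhost">
        <!-- requests beyond the referenced filter's limits are rejected, typically with 503 -->
        <filter-ref name="request-limiter"/>
    </host>
</server>
<filters>
    <!-- at most 100 concurrent requests plus 50 queued; anything above that is rejected -->
    <request-limit name="request-limiter" max-concurrent-requests="100" queue-size="50"/>
</filters>
```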
Today I got a "Non HTTP response code" issue when running my script in JMeter. The script runs through a few steps (login - view) but hits this issue, and the log shows a NoHttpResponseException.
I'm using JMeter version 3.3, and I think this issue may come from the server side, not from my script.
Has anyone fixed this issue before? Please help me resolve it.
This status code is returned when an exception occurs during HTTP Request sampler execution. There are hundreds or thousands of possible exceptions and even more potential causes for them.
If it occurs only under load, it is most probably a server-side error and you need to check the application under test's logs and monitoring results to identify the cause
It might be the behaviour described in Connection Reset since JMeter 2.10
It might be the case that your JMeter script is badly designed/implemented and you're sending garbage instead of a proper HTTP request
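If it turns out to be the client-side behaviour from the "Connection Reset since JMeter 2.10" article above, the usual workaround is to let JMeter's HttpClient retry the request once. A user.properties sketch follows; the property exists in stock JMeter 3.x, but double-check the exact name in your jmeter.properties, and remember this masks, rather than fixes, genuine server-side resets:

```properties
# Retry a sample once when the connection is dropped mid-request
# (the default of 0 means any dropped connection is reported as an error)
httpclient4.retrycount=1
```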
So try to collect as much information as you can:
Application under test and JMeter logs (this includes any middleware such as reverse proxies, load balancers, databases, etc.)
Application under test and JMeter machine health metrics (CPU, RAM, Network, Disk, Swap)
Network-layer information, i.e. HTTP request and response details.
Also be aware that according to JMeter Best Practices you should always be using the latest version of JMeter, so consider upgrading to JMeter 5.0 (or whatever the latest JMeter version available at the Downloads page is) as soon as possible.
I ran a few download tests against a Jetty 9 server, performing multiple simultaneous downloads of a single file of roughly 80 MB. With a smaller number of downloads, when none of them needs more than 55 seconds, they all usually finish; however, any download still in progress after 55 seconds simply stops receiving data and never completes.
I have already tried setting Jetty's timeout and buffer, but this did not help. Has anyone had this problem, or does anyone have suggestions on how to solve it? The same tests against IIS and Apache work very well. I use JMeter for the testing.
Marcus, maybe you are just hitting Jetty bug 472621?
Edit: the bug mentioned above concerns a separate timeout in Jetty that applies to the total operation, not just idle time. So by setting the http.timeout property you essentially define a maximum time any download is allowed to take, which in turn may cause timeout errors for slow clients and/or large downloads.
Cheers,
momo
A timeout means your client isn't reading fast enough.
JMeter isn't reading the response data fast enough, so the connection sits idle long enough that it idle times out and disconnects.
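If the slow client is intentional (for example a deliberately throttled JMeter test), you can raise the connector idle timeout instead of fighting it; a minimal embedded Jetty 9 sketch, with an illustrative 5-minute value:

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class SlowClientServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080);
        // Allow a connection to sit idle for up to 5 minutes before Jetty closes it
        connector.setIdleTimeout(300_000L);
        server.addConnector(connector);

        // ... add your handlers / webapp here ...

        server.start();
        server.join();
    }
}
```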
We test with 800MB and 2GB files regularly.
Using HTTP/1.0, HTTP/1.1, and HTTP/2 protocols.
Using normal (plaintext) connections, and secured TLS connections.
With responses being delivered in as many Transfer-Encodings and Content-Encodings as we can think of (compressed, gzip, chunked, ranged, etc.).
We do all of these tests using our own test infrastructure, often spinning up many many Amazon EC2 nodes to perform a load test that can sufficiently test the server demands (a typical test is 20 client nodes to 1 server node)
When testing large responses, you'll need to be aware of the protocol (HTTP/1.x vs HTTP/2) and how the persistence behavior of that protocol can change the request/response latency. In the real world you won't have multiple large requests after each other on the same persisted connection via HTTP/1 (on HTTP/2 the multiple requests would be parallel and sent at the same time).
Be sure you set up JMeter to use HTTP/1.1 and not use persisted connections (see the JMeter documentation for help on that).
Also be aware of your bandwidth for your testing; it's very common to blame a server (any server) for not performing fast enough when the test itself is sloppily set up and has expectations that far exceed the bandwidth of the network itself.
Next, don't test from the same machine; this sort of load test needs multiple machines (1 for the server, and 4+ for the clients).
Lastly, when load testing, you'll want to become intimately aware of the networking configuration on your server (and to a lesser extent, your client test machines) to maximize it for high load. Default OS configurations are rarely sufficient to handle proper load testing.
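As an illustration only (assuming a Linux server; the values below are starting points to research, not recommendations), the kernel settings that typically need attention for this kind of test look like:

```
# /etc/sysctl.d/99-loadtest.conf -- illustrative values, Linux only

# allow a deeper TCP accept backlog
net.core.somaxconn = 4096

# widen the ephemeral port range for outgoing connections
net.ipv4.ip_local_port_range = 1024 65535

# release closed connections faster
net.ipv4.tcp_fin_timeout = 15

# raise the system-wide file descriptor (socket) limit
fs.file-max = 500000
```

Apply with sysctl --system (or a reboot), and raise the per-process open file limit (ulimit -n) for the server process as well.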
I'm starting to work with NEST.
I've seen in a previous question that I should use TryConnect only once at the beginning of the program and then use Connect.
But that seems a bit too naive for a long running system.
What if I have a cluster of say 3 machines and I want to make sure I can connect to any of the 3 machines?
What should be the recommended way of doing that?
Should I:
- Use TryConnect each time and use a different host + port if it fails (downside - an additional roundtrip each time)?
- Try to work with a client and have some retry mechanism to handle failures due to connectivity issues? Maybe implement a connection pool on top of that?
Any other option?
Any suggestions/recommendations?
Sample code?
Thanks for your help,
Ron
Connection pooling is an often-requested feature, but due to the many heuristics and different approaches involved, NEST does not come with this out of the box. You will have to implement this yourself.
I would not recommend calling TryConnect() before each call as now you are doing two calls instead of one.
Each NEST call returns an IResponse which you can check for IsValid; ConnectionStatus will hold the request and response details.
See also the documentation on handling responses
In 1.0 NEST will start to throw an exception in case of TCP-level errors so that more generic approaches to connection pooling can be implemented, and NEST might come with a separate NuGet package implementing one (if anything, as a reference). See also this discussion: https://github.com/Mpdreamz/NEST/pull/224#issuecomment-16347889
Hope this helps for now.
UPDATE: this answer is outdated; NEST 1.0 ships with connection pooling and cluster failover support out of the box: http://nest.azurewebsites.net/elasticsearch-net/cluster-failover.html
Can someone please tell me the pros and cons of mod_jk vs mod_cluster?
We are looking to do very simple load balancing. We are going to be using sticky sessions and just need something to route new requests to a new server if one server goes down. I feel that mod_jk does this and does a good job, so why do I need mod_cluster?
If your JBoss version is 5.x or above, you should use mod_cluster; it will give you better performance and reliability than mod_jk. Here are some reasons:
better load balancing between app servers: the load-balancing logic is calculated from information and metrics provided directly by the application servers (bear in mind they have first-hand information about their own load), in contrast with mod_jk, where the logic is calculated by the proxy itself. For that, mod_cluster uses an extra connection between the servers and the proxy (apart from the data one), used to send this load information.
better integration with the lifecycle of the applications deployed on the servers: the servers keep the proxy informed about changes to the application on each node (for example, if you undeploy the application on one of the nodes, that node informs the proxy (mod_cluster) immediately, thereby avoiding the inconvenient 404 errors).
it doesn't require AJP: you can also use it with HTTP or HTTPS.
better management of server lifecycle events: when a server shuts down or is restarted, it informs the proxy about its state, so that the proxy can reconfigure itself automatically.
You can use sticky sessions with mod_cluster as well, though of course if one of the nodes fails, mod_cluster won't help to keep the user sessions (as would happen with other balancers too, unless you have the JBoss nodes in a cluster). But for the reasons given above (mainly tracking the server lifecycle events and better load balancing), if one of the servers goes down, mod_cluster will handle it better and more transparently to the user (the proxy is informed immediately, so it will never send requests to that node until it is informed that the node has been restarted).
Remember that you can use mod_cluster with JBoss AS/EAP 5.x or JBoss Web 2.1.1 or above (in the case of Tomcat I think it's version 6 or above).
To sum up, although your load-balancing use case is simple, mod_cluster offers better performance and scalability.
You can find more information on the JBoss site for mod_cluster, and on its documentation page.
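For reference, the httpd-side setup for mod_cluster is fairly small. The sketch below shows the typical VirtualHost; the IP, port, balancer name and module file names are illustrative and vary slightly between mod_cluster versions, so check the documentation above before using it (Apache 2.2-style access control shown):

```apache
LoadModule proxy_module         modules/mod_proxy.so
LoadModule slotmem_module       modules/mod_slotmem.so
LoadModule manager_module       modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module     modules/mod_advertise.so

Listen 192.168.1.10:6666
<VirtualHost 192.168.1.10:6666>
    # Accept the MCMP management messages the JBoss nodes use to register themselves
    EnableMCPMReceive

    # Balancer name the workers register against (must match the JBoss side)
    ManagerBalancerName mycluster

    # Advertise this proxy to the nodes via multicast so they can discover it automatically
    ServerAdvertise On

    # Restrict who may talk to the management channel
    <Location />
        Order deny,allow
        Deny from all
        Allow from 192.168.1.
    </Location>
</VirtualHost>
```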