LoadRunner CPU is extremely high with TLS enabled - performance

I have a LoadRunner script which is running some flow. The connection to my app is over HTTPS. The problem is that CPU is extremely high when TLS is enabled. I tried configuring the keep-alive setting and increasing the connection timeout, but it didn't help.
Is there any way to disable the handshake between LoadRunner and my app, because it is not something I am testing? For example, with curl I can use the --insecure flag. I couldn't find such a configuration in LoadRunner.

Which CPU?
CPU on the server where the TLS handshake is taking place?
CPU on a load generator host running separately from the controller, for a single virtual user and above?
CPU on a combined controller/load generator host, for all virtual users?
As to disabling? It is recommended that your load use the same mechanism as your users. If users connect insecurely in production, then by all means use the same HTTP (vs. HTTPS) connection leveraged by end users. Otherwise the use of resources on your server infrastructure will be very different from production, which will make your test both less predictive and of lower diagnostic value for bottlenecks in resource utilization.
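One detail worth adding about the curl comparison: --insecure only skips certificate validation, it does not skip the handshake itself, so even an equivalent LoadRunner setting would not remove the cryptographic cost being measured. A minimal Python sketch (the host name is a placeholder) showing that a full TLS handshake still completes with verification disabled:

```python
# Sketch (not LoadRunner-specific): even with certificate verification disabled,
# the client still performs the full TLS handshake, so an "--insecure"-style
# option does not remove the handshake CPU cost.
import socket
import ssl
import time

HOST = "example.com"   # placeholder target, replace with your application host
PORT = 443

ctx = ssl.create_default_context()
ctx.check_hostname = False          # equivalent in spirit to curl --insecure
ctx.verify_mode = ssl.CERT_NONE     # skips certificate validation only

start = time.perf_counter()
with socket.create_connection((HOST, PORT)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        # wrap_socket() has already completed the handshake by the time we get here
        print("negotiated", tls.version(), "in", time.perf_counter() - start, "s")
```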

Related

Load balancer and WebSockets

Our infrastructure is composed of:
1 F5 load balancer
3 nodes
We have an application which uses WebSockets: when a user goes to our site, the browser opens a WebSocket to the balancer, which connects it to the first available node, and it works as expected.
Our trouble arrives with maintenance tasks: when we have to update our software, we need to take one node offline at a time, deploy the new release and then bring it back online. During this task the balancer drops the open WebSocket connections to that node, and the clients retry after a few seconds and connect to the first available node, which is an inconvenience for the client because they could miss a signal (or more).
How can we keep the connection between the client and the balancer while changing the backend WebSocket server? Is the load balancer enough to achieve our goal, or do we need to change our infrastructure?
To avoid this kind of problem I recommend reading about Azure SignalR. With it you don't need to think about things like the load balancer, a Redis backplane and other infrastructure that you might otherwise need for a WebSockets connection.
Basically the clients will not connect to your node directly but will be redirected to Azure SignalR. You can read more about it here: https://learn.microsoft.com/en-us/azure/azure-signalr/signalr-overview
Since it is important for your application to maintain the connection, I don't see any other way to achieve zero connection drops to your nodes, given that you need to shut them down.
It's important to understand that the F5 is a full TCP proxy. This means that the F5 is the server to the client and the client to the server. If you are using the websockets protocol then you must apply a websockets profile to the F5 Virtual Server in order for the websockets application to be handled properly by the Load Balancer.
Details of the websockets profile can be found here: https://support.f5.com/csp/article/K14754
If a websockets and an HTTP profile are applied to the Virtual Server - meaning that you have websockets and web traffic using the same port and LB nodes - then the F5 will allow the websockets traffic as passthrough. Also keep in mind that if this is an HTTPS virtual server, you will need to ensure a client-side and server-side HTTPS profile (SSL offload) are applied to the Virtual Server.
While there are a variety of ways that you can fiddle with load balancers to minimize the downtime caused by a software upgrade, none of them solve the problem, which is that your application-layer protocol seems not to tolerate even small network outages.
Even if you have a perfect load balancer and your software deploys cause zero downtime, the customer's computer may be on flaky wifi which causes a network dropout for half a second - or going over ethernet and someone reconfigures some routing on their LAN, etc.
I'd suggest having your server maintain a queue of messages for each client (up to some size/time limit) so that when a client drops a connection, whether due to load balancers/upgrades or any other reason, it can reconnect and continue without disruption.
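A minimal sketch of that queue idea, assuming messages get a monotonically increasing id and clients remember the last id they processed (names and limits are illustrative; in practice the buffer would live per client or per channel):

```python
# Bounded replay buffer: a client that reconnects (because of an LB failover,
# a deploy, or flaky wifi) asks for "everything after id N" instead of missing signals.
from collections import deque

class ReplayBuffer:
    def __init__(self, max_messages: int = 1000):
        self._messages = deque(maxlen=max_messages)  # (id, payload) pairs
        self._next_id = 0

    def publish(self, payload: str) -> int:
        msg_id = self._next_id
        self._next_id += 1
        self._messages.append((msg_id, payload))
        return msg_id

    def replay_after(self, last_seen_id: int):
        """Messages the client has not seen yet; empty if it is up to date."""
        return [(i, p) for i, p in self._messages if i > last_seen_id]

buffer = ReplayBuffer()
buffer.publish("signal-1")
buffer.publish("signal-2")
# A client that last saw id 0 reconnects and catches up:
print(buffer.replay_after(0))   # [(1, 'signal-2')]
```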

SSE support in BIG-IP F5 load balancer

I am using SSE to push notifications to the client. The architecture for my data services is as follows:
Client -> API Gateway (Spring Cloud API Gateway) -> F5 (load balancer) -> nginx -> data service
When the load balancer is out of the picture, my notifications work perfectly, but when I introduce the F5 load balancer it does not work and the connection breaks.
Does the F5 load balancer support long-lived HTTP connections? What configuration do I need to make it work?
Your question is unclear on whether it doesn't work at all or whether it stops working after a while (and if so, how long?).
I suppose your F5 VS (Virtual Server) is of type Standard.
First, we can check whether the HTTP Profile is in any way guilty. If your Virtual Server type is Standard virtual server with Layer 7 functionality, change it if possible to Standard by removing the HTTP Profile (and maybe some other profiles, such as caching). You can also try the Performance Layer4 type. Does that solve the issue? If yes, we need to identify where the problem is, probably in the HTTP Profile or in a timeout setting, as described below.
Check the HTTP Profile configured for your VS: find the Response Chunking option and set it to Preserve. See LTM HTTP Profile Option: Response Chunking if you need more details.
Check both the Server and Client TCP Profiles related to your VS; their Time Wait option should be Indefinite if you suspect a timeout issue. There are other ways to solve a timeout; I'm just giving one of them. See K70025261 if you need more details.
As you're running SSE, you should probably disable Delayed Acks (enabled by default) and Nagle's Algorithm (disabled by default), as they can make your notifications slower. They're both also on the TCP Profile screen.
To answer the question:
YES, F5 supports SSE, as I was able to make it work with some configuration tweaks. I cannot paste the configuration snapshot here, but in summary, turning off the **HTTP compression** property seemed to have done the trick in my case.
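If you need to see where delivery breaks, a small streaming client pointed at the public endpoint will quickly tell you whether events are being buffered or compressed away somewhere in the chain (the URL below is a placeholder):

```python
# Stream the SSE endpoint through the full Gateway -> F5 -> nginx chain and
# print events as they arrive.
import requests

url = "https://gateway.example.com/notifications/stream"   # placeholder endpoint

with requests.get(url, stream=True,
                  headers={"Accept": "text/event-stream"},
                  timeout=(5, None)) as resp:          # connect timeout only, read forever
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if line:                      # SSE frames are separated by blank lines
            print("received:", line)  # expect "data: ..." lines in near real time
```

If events only arrive in large bursts, or only after the connection closes, something between the client and the data service (compression, chunking, response buffering) is holding the response back.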

Why should we use IP spoofing when performance testing?

Could anyone please tell me what the use of IP spoofing is in terms of performance testing?
There are two main reasons for using IP spoofing while load testing a web application:
Routing stickiness (a.k.a. persistence) - Many load balancers use IP stickiness when distributing incoming load across application servers. So if you generate the load from the same IP, you could load only one application server instead of distributing the load to all of them (this is also called persistence: using application-layer information to stick a client to a single server). Using IP spoofing, you avoid this stickiness and make sure your load is distributed across all application servers; a sketch of this follows after these two points.
IP Blocking - Some web applications detect a mass of HTTP requests coming from the same IP and block them to defend themselves. When you use IP spoofing you avoid being detected as a harmful source.
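A rough sketch of the first point, assuming the load generator host has several IP aliases configured locally (the addresses and hostname below are examples): binding each simulated user's connections to a different source address defeats IP-based persistence on the load balancer.

```python
# Each "user" binds its TCP connection to a different local IP, so an IP-sticky
# load balancer sees distinct clients and spreads the load across backends.
import http.client

SOURCE_IPS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # local aliases on the generator
TARGET = "app.example.com"                              # system under test

for i, src_ip in enumerate(SOURCE_IPS):
    # source_address pins the local side of the TCP connection to one of our IPs
    conn = http.client.HTTPSConnection(TARGET, source_address=(src_ip, 0))
    conn.request("GET", "/")
    print(f"user {i} via {src_ip}: HTTP {conn.getresponse().status}")
    conn.close()
```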
When it comes to load testing of web applications, a well-behaved test should represent a real user using a real browser as closely as possible, with all its behaviour, like:
Cookies
Headers
Cache
Handling of "embedded resources" (images, scripts, styles, fonts, etc.)
Think times
You might need to simulate requests originating from different IP addresses if your application (or its infrastructure, like a load balancer) assumes that each user uses a unique IP address. Also, DNS caching at the operating-system or JVM level may lead to a situation where all your requests are basically hitting only one endpoint while the others remain idle. So if there is a possibility, it is better to mimic the requests in such a way that they come from different addresses.
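Regarding the DNS-caching point, one way to avoid pinning the whole test to a single endpoint is to resolve every address behind the hostname up front and rotate over them explicitly; a small sketch (the hostname is a placeholder):

```python
# Resolve all endpoints behind the hostname once, then spread requests across
# them instead of letting a cached lookup send everything to one address.
import socket

host = "app.example.com"   # placeholder hostname
addresses = sorted({info[4][0]
                    for info in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)})
print("resolved endpoints:", addresses)

# Simple round-robin: request i goes to addresses[i % len(addresses)],
# while the Host header / SNI would still carry the original hostname.
for i in range(6):
    print("request", i, "->", addresses[i % len(addresses)])
```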

TLS session resumption with HAProxy load balancer

After configuring the application to work with TLS, CPU consumption has gone up to 10%.
I suppose it is because of the TLS handshake that happens every time.
On a standalone environment I don't see this effect, but when I use the HAProxy LB it seems to me that the session is cached for one node; when a request comes to another node, it needs to perform the handshake again.
How can I configure or tune the LB to avoid the extra handshakes?
I tried to increase the session cache with the settings below, but it doesn't help:
tune.ssl.cachesize
tune.ssl.lifetime
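One thing worth checking, offered as a sketch rather than a definitive fix: those tune.* settings only size the session cache of the HAProxy process that actually terminates TLS. If HAProxy is passing TLS through to the backend nodes, each backend keeps its own session cache, which would match the "cached for one node" behaviour; common remedies are terminating TLS on HAProxy itself or adding stickiness so a given client keeps hitting the same backend. You can verify whether resumption works through the LB with a short Python check (the hostname is a placeholder):

```python
# Connect twice through the HAProxy frontend, reusing the first TLS session on
# the second connection. If session_reused stays False, resumption is not
# working across the LB and every request pays for a full handshake.
import socket
import ssl

HOST, PORT = "app.example.com", 443   # placeholder: the HAProxy frontend
ctx = ssl.create_default_context()

def handshake(session=None):
    with socket.create_connection((HOST, PORT)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST, session=session) as tls:
            return tls.session, tls.session_reused

session, reused = handshake()
print("first connection, reused:", reused)    # expected: False
_, reused = handshake(session)
print("second connection, reused:", reused)   # True means resumption works
```

If the second connection never reports a reused session, clients are performing a full handshake on every request, which would explain the CPU increase.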

OpenFire, HTTP-BIND and performance

I'm looking into getting an Openfire server started and setting up a Strophe.js client to connect to it. My concern is that using http-bind might be costly in terms of performance versus making a straight XMPP connection.
Can anyone tell me whether my concern is relevant or not? And if so, to what extent?
The alternative would be to use a flash proxy for all communication with OpenFire.
Thank you
BOSH is more verbose than normal XMPP, especially when idle. An idle BOSH connection might be about 2 HTTP requests per minute, while a normal connection can sit idle for hours or even days without sending a single packet (in theory, in practice you'll have pings and keepalives to combat NATs and broken firewalls).
But, the only real way to know is to benchmark. Depending on your use case, and what your clients are (will be) doing, the difference might be negligible, or not.
Basics:
Socket - zero overhead.
HTTP - requests even on an idle session.
I doubt that you will have 1M users at once, but if you are aiming for that, then a connection-less protocol like HTTP will be much better, as I'm not sure that any OS can support that kind of connected-socket volume.
Also, you can tie your Openfire servers together to form a farm, and you'll have nice scalability there.
We used Openfire and BOSH with about 400 concurrent users in the same MUC channel.
What we noticed is that Openfire leaks memory. We had about 1.5-2 GB of memory used and got constant out-of-memory exceptions.
Also, the BOSH implementation in Openfire is pretty bad. We then switched to Punjab, which was better but couldn't solve the Openfire issue.
We're now using ejabberd with its built-in http-bind implementation and it scales pretty well. Load on the server running ejabberd is nearly 0.
At the moment we face the problem that our 5 web servers, which we use to handle the chat load, are sometimes overloaded at about 200 connected users.
I'm trying to use websockets now but it seems that it doesn't work yet.
Maybe redirecting the http-bind traffic not via an Apache rewrite rule but directly on a load balancer/proxy would solve the issue, but I couldn't find a way to do this at the moment.
Hope this helps.
I ended up using Node.js and http://code.google.com/p/node-xmpp-bosh as I faced some difficulties connecting directly to Openfire via BOSH.
I have a production site running with Node.js configured to proxy all BOSH requests, and it works like a charm (around 50 concurrent users). The only downside so far: in the Openfire admin console you will not see the actual IP address of the connected clients; only the local server address will show up, as Openfire gets the connection from the Node.js server.
