Load testing a web app which has a load balancer - JMeter

I wrote a JMeter test (that uses different user credentials) to load test a web app behind a load balancer, but the balancer forwards all the requests to a single node. How can I solve this?
I used the DNS Cache manager but that did not work.
Are there any other tools I could use? (I looked into AWS load testing, but that won't work either because all the containers would get the same set of user credentials, and when parallel tests are run they would fail.)

It depends on the load-balancing mechanism used by your load balancer; it might be the case that it's looking at the source IP address and forwarding requests from the same IP to the same backend node. You can try using multiple IP addresses (or aliases) and see whether that makes a difference. See the IP Spoofing With JMeter: How to Simulate Requests from Different IP Addresses article for more details.
Also, adding the DNS Cache Manager might not be sufficient; you can try configuring a custom DNS resolver, i.e. 1.1.1.1, as the DNS server, so that each thread resolves the underlying IP address on its own.
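To illustrate the idea, here is a minimal Python sketch (not JMeter itself) of what a per-thread custom resolver achieves: instead of the whole process reusing one cached DNS answer, each request rotates through all the addresses the balancer's hostname resolves to. The IPs and hostname below are made up for illustration; a real script would refresh the list from socket.getaddrinfo.

```python
import itertools
import threading

# Hypothetical addresses the balancer's hostname resolves to (illustration only).
RESOLVED_IPS = ["203.0.113.10", "203.0.113.11"]

_cycle = itertools.cycle(RESOLVED_IPS)
_lock = threading.Lock()

def resolve(hostname):
    """Pick the next resolved address in round-robin fashion, the way a
    per-thread custom DNS resolver spreads requests across all backends
    instead of pinning every thread to one cached answer.
    A real implementation would refresh RESOLVED_IPS from
    socket.getaddrinfo(hostname, 80) instead of using a fixed list."""
    with _lock:
        return next(_cycle)
```

With more threads than backends, each backend still receives an even share of connections, which is the whole point of defeating the cache.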

Related

Best way to load test application under same machine

I've used Gatling and Siege to load test my application. However, at certain points (especially when my load is higher), I get a lot of gateway and requestTimeoutException errors. Since the requests don't even seem to reach the app, I presume the issue is my IP address being blocked due to the influx of traffic from one IP address. How do you overcome this? I'm assuming that the users Gatling and Siege create to send concurrent requests are all under the same IP as my machine?
This is not possible with Gatling; the relevant feature request has been closed. You might want to consider using Apache JMeter instead: JMeter's HTTP Request sampler has a "Source IP" field where you can put the needed IP address or alias.
More information: Using IP Spoofing to Simulate Requests from Different IP Addresses with JMeter
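Outside JMeter, the same effect is a one-liner in most HTTP stacks: bind the outgoing socket to a chosen local address before connecting. A minimal Python standard-library sketch; the alias IP and hostname are placeholders, and the alias would have to actually exist on the load generator for a real connection to succeed:

```python
import http.client

def connection_from(source_ip, target="app.example.com"):
    """Counterpart of JMeter's "Source IP" field: bind the outgoing
    socket to a specific local address (port 0 = let the OS pick)."""
    return http.client.HTTPConnection(target, 80,
                                      source_address=(source_ip, 0))

# Nothing connects until a request is sent, so building the object
# is safe even if the alias does not exist on this machine.
conn = connection_from("127.0.0.1")
```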

How should I map multiple requests to the same physical server through my load balancer if the source IP is constant?

In my use case, an outside application would be making multiple requests to my application for different users and I need to relay all the requests for a particular user to same physical server through my load balancer.
I thought of using sticky sessions, but since the originating address would be the same for all the requests, I'm not sure how to go about it.

Why should we use IP spoofing when performance testing?

Could anyone please tell me what is the use of IP spoofing in terms of Performance Testing?
There are two main reasons for using IP spoofing while load testing a web application:
Routing stickiness (a.k.a. persistence) - Many load balancers use IP stickiness when distributing incoming load across application servers. So, if you generate the load from the same IP, you could load only one application server instead of distributing the load across all of them (this is also called persistence: using application-layer information to stick a client to a single server). With IP spoofing, you avoid this stickiness and make sure your load is distributed across all application servers.
IP blocking - Some web applications detect a mass of HTTP requests coming from the same IP and block them to defend themselves. When you use IP spoofing, you avoid being detected as a harmful source.
When it comes to load testing web applications, a well-behaved test should represent a real user using a real browser as closely as possible, with all its trappings, such as:
Cookies
Headers
Cache
Handling of "embedded resources" (images, scripts, styles, fonts, etc.)
Think times
You might need to simulate requests originating from different IP addresses if your application (or its infrastructure, like a load balancer) assumes that each user has a unique IP address. Also, DNS caching at the operating-system or JVM level may lead to a situation where all your requests are basically hitting only one endpoint while the others remain idle. So, if there is a possibility, it is better to shape the requests so they come from different addresses.
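One way to sidestep OS/JVM-level DNS caching entirely, sketched below under the assumption that you already know the backend IPs: connect straight to a chosen IP while keeping the original hostname in the Host header so virtual hosting still works. The IPs and hostname here are hypothetical.

```python
import http.client

# Hypothetical backend addresses behind the balancer (illustration only).
BACKENDS = ["10.0.0.11", "10.0.0.12"]

def connection_for(request_index, host_header="app.example.com"):
    """Bypass any DNS cache by connecting to an explicit backend IP,
    rotating through the list, while preserving the Host header the
    application expects."""
    ip = BACKENDS[request_index % len(BACKENDS)]
    conn = http.client.HTTPConnection(ip, 80)
    return conn, {"Host": host_header}
```

The trade-off is that the test now depends on the backend list staying current, which is exactly what DNS would otherwise handle for you.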

How to distribute the requests of a JMeter test plan among 2 nodes behind a load balancer, when neither node's IP is public?

I am using JMeter 3.1
We have a load balancer with a public IP (e.g. 192.87.0.0) with SSL implemented, and we use that IP to communicate with the LB; the LB decides which node currently has the fewest requests and routes the call there. Behind the LB there are 2 nodes with non-public IPs and a non-secure protocol, and both nodes have session replication implemented. Whenever I run my JMeter test, all my requests go to a single node every time, as per the configuration settings of the LB. Now I have been asked to design a test plan in which requests are distributed between both nodes randomly.
I created the following test plan:
Test Plan
DNS Cache Manager
HTTP Cookie Manager
HTTP Cache Manager
Thread Group
Req 1
Req 2
Req 3
Test Plan and DNS Cache Manager
TG and HTTP request
In the HTTP Request sampler I put the load balancer's public IP and port, and selected "HttpClient4" in the Implementation dropdown.
In the DNS Cache Manager I selected "Use custom DNS resolver" and, in the DNS Servers section, defined the IP addresses of both nodes.
When I run my test plan, I notice that all my requests go to a single node. I verified this by tailing both nodes' Tomcat logs in a PuTTY console to see which node was getting the requests.
I have studied the DNS Cache Manager in the Apache JMeter help and some blogs, and implemented what I learnt; please help me in this regard.
thanks!
The keep-alive is on in your HTTP Sampler. Turn it off.
I don't know which LB you're using, though I presume it works at the TCP level rather than terminating HTTP(S) there.
In that case, it just tunnels the packets to the actual servers, and with keep-alive it obviously sticks to whichever server it chose at the beginning.
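The effect is easy to see in a sketch: with keep-alive, a thread's requests all ride on one TCP connection, so a TCP-level balancer makes exactly one routing decision; with keep-alive off, every request opens a fresh connection and gets a fresh decision. The hostname below is a placeholder and no traffic is actually sent.

```python
import http.client

def connections_used(n_requests, keep_alive):
    """Count how many TCP connections n_requests would ride on.
    Reusing one connection (keep-alive) means the balancer routes once;
    a fresh connection per request gives it a new choice every time."""
    count = 0
    conn = None
    for _ in range(n_requests):
        if conn is None or not keep_alive:
            # Object construction only; nothing connects until a request is sent.
            conn = http.client.HTTPConnection("lb.example.com")
            count += 1
        # conn.request("GET", "/") would go here in a real test
    return count
```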

Getting (non-HTTP) Client IP with load-balancer

Say I want to run something like the nyan cat telnet server (http://miku.acm.uiuc.edu/) and I need to handle 10,000 concurrent connections total. I have 10 servers in addition to a load balancer. Each server can handle 1,000 concurrent connections, and I want to put a load balancer in front of it to randomly divide the traffic to the 10 servers.
From what I've read, it's fairly simple for a load balancer to pass an HTTP request (along with the client IP) to the backend server, perhaps with FastCGI or with an X- header.
What would be the simplest way for the load balancer to pass the client IP to the backend server in this case with a simple TCP server? Would a hardware load balancer be needed, or are there ways to do this simply through software?
In other words, is there a uniform way to pass the client IP when load balancing non-HTTP traffic, the same way Google gets the client IP when it load-balances its Google Talk XMPP server or its Gmail IMAP server?
This isn't for anything in specific; I'm just curious about if and how it can be done. Thanks in advance!
The simplest way would be for the load balancer to make itself completely invisible and pass the connection on with the source and destination IP address unmolested. For this to work, the same IP address must be assigned (as a loopback address, not to a physical interface) to all 10 servers and that would be the IP address the clients connect to. Internet traffic to that IP address has to go to the load balancer. The load balancer must be the default gateway for the servers.
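If rewriting the network topology isn't an option, the de-facto uniform answer for plain TCP is HAProxy's PROXY protocol: the balancer prepends one text line carrying the original source and destination addresses before handing the connection over, and the backend strips and parses that line. A minimal parser sketch for version 1 of the protocol (the addresses below are documentation examples):

```python
def parse_proxy_v1(header):
    """Parse a PROXY protocol v1 header line such as
    b"PROXY TCP4 192.0.2.1 203.0.113.5 56324 443\r\n".
    Returns (client_ip, client_port), or None for "PROXY UNKNOWN"."""
    parts = header.decode("ascii").rstrip("\r\n").split(" ")
    if parts[0] != "PROXY":
        raise ValueError("not a PROXY protocol v1 header")
    if parts[1] == "UNKNOWN":
        return None  # balancer could not determine the original addresses
    _proxy, _family, src_ip, _dst_ip, src_port, _dst_port = parts
    return src_ip, int(src_port)
```

Both the balancer and the backend have to opt in, since a server that isn't expecting the header would treat it as garbage at the start of the stream.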
