In GlassFish clustering, requests sent through JMeter are not being distributed to different servers - jmeter

The application server is set up as a cluster in GlassFish. When I send requests through JMeter, all of them hit only one server, while the expectation was that requests would be distributed across the servers in the cluster. When I send requests manually, the clustering works. Please help me sort out this issue.

There could be different cluster load balancing mechanisms in play; as far as I can see from the GlassFish Server High Availability Administration Guide:
Cookie Method
The Loadbalancer Plug-In uses a separate cookie to record the route information. The HTTP client (typically, the web browser) must support cookies to use the cookie based method. If the HTTP client is unable to accept cookies, the plug-in uses the following method.
Explicit URL Rewriting
The sticky information is appended to the URL. This method works even if the HTTP client does not support cookies. To implement explicit URL rewriting, the application developer must use HttpResponse.encodeURL() and encodeRedirectURL() calls to ensure that any URLs in the application have the session information appended to them.
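For illustration, here is a minimal servlet sketch of explicit URL rewriting. Note that in the standard Servlet API the class is HttpServletResponse (the guide's "HttpResponse" is shorthand), and the "/app/next" path is just a placeholder:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.*;

    public class NextPageServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // encodeURL() appends the session ID (e.g. ";jsessionid=...") only
            // when the client does not accept cookies, so the sticky routing
            // information survives in the URL itself.
            String nextUrl = resp.encodeURL("/app/next");
            resp.getWriter().println("<a href=\"" + nextUrl + "\">next</a>");
            // For redirects, encodeRedirectURL() does the same job:
            // resp.sendRedirect(resp.encodeRedirectURL("/app/next"));
        }
    }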
So depending on your load balancer configuration you need to:
Either define different cookies in the HTTP Cookie Manager
Or make sure different threads send requests to different URLs, e.g. via the HTTP URL Re-writing Modifier
In any case it is recommended to add a DNS Cache Manager so that each virtual user resolves the underlying IP address of the application under test on its own.
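The pinning effect itself is easy to reproduce outside JMeter. The sketch below (plain java.net.http, hypothetical cluster URL) gives each simulated user its own cookie jar; if all users shared one jar, the balancer's sticky cookie from the first response would pin every subsequent request to the same instance. The JMeter equivalent is an HTTP Cookie Manager with "Clear cookies each iteration?" checked.

    import java.net.CookieManager;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class VirtualUsers {
        public static void main(String[] args) throws Exception {
            URI target = URI.create("http://cluster.example.com/app/"); // placeholder URL
            for (int user = 0; user < 5; user++) {
                // Each simulated user gets its OWN cookie jar, so the load
                // balancer's sticky cookie cannot pin all of them together.
                HttpClient client = HttpClient.newBuilder()
                        .cookieHandler(new CookieManager())
                        .build();
                HttpResponse<String> resp = client.send(
                        HttpRequest.newBuilder(target).build(),
                        HttpResponse.BodyHandlers.ofString());
                System.out.println("user " + user + " -> " + resp.statusCode());
            }
        }
    }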

Related

how do web application sessions work when running on more than one server?

This is a general question about how web sessions work across multiple servers. My knowledge of web sessions is not very deep, but as far as I know a web session is typically stored directly in memory of the running web server application, so when a request comes in it doesn't have to make database requests to fetch the session data. If a popular website needs multiple servers to handle the level of traffic it is receiving, I assume a request could get directed to any of the servers by some load balancer. But how does the server handling that request get the associated session data if the previous request was handled by a different server? Do multi-server sites require special session handling infrastructure, or do the load balancers somehow know to route requests from the same client to the same server?
This question on ServerFault is the same as this one, and it has a good answer. In overview, there are three common methods:
Session information stored in cookies only
Load balancer always directs user to the same machine
Shared backend database or key/value store.
See the link for more in-depth details of each.
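As a concrete sketch of the third method (shared backend store) in a Java stack: Spring Session can externalize the HttpSession to Redis, so any server behind the balancer sees the same session data. This assumes the Spring Session and Spring Data Redis libraries and a Redis instance on localhost:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
    import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

    // Replaces the container's in-memory HttpSession with one backed by Redis,
    // so a request can be served by any node in the cluster.
    @Configuration
    @EnableRedisHttpSession
    public class SessionConfig {
        @Bean
        public LettuceConnectionFactory connectionFactory() {
            return new LettuceConnectionFactory(); // defaults to localhost:6379
        }
    }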

How does the proxy mechanism work with proxy settings in browser

We often find fields like Address and Port in web browser proxy settings. I know that when we use a proxy to visit a page, the web browser requests the web page from the proxy server, but what I want to know is how the whole mechanism works. I have observed that many ISPs allow access to only a single IP (of their website) after we have exhausted our free data usage. But when we enter the site we want to browse as the proxy URL and then type in the allowed IP, the site gets loaded. How does this work?
In general, your browser simply connects to the proxy address & port instead of whatever IP address the DNS name resolved to. It then makes the web request as per normal.
The web proxy reads the headers, uses the "Host" header of HTTP/1.1 to determine where the request is supposed to go, and then makes that request itself relaying all remaining data in both directions.
Proxies will typically also do caching so if another person requests the same page from that proxy, it can just return the previous result. (This is simplified -- caching is a complex topic.)
Since the proxy is in complete control of the connection, it can choose to route the request elsewhere, scrape request and reply data, inject other things (like ads), or block you altogether. Use SSL to protect against this.
Some web proxies are "transparent". They reside on a gateway through which all IP traffic must pass and use the machine's networking stack to redirect outgoing connections to port 80 to a local port instead. It then behaves the same as though a proxy was defined in the browser.
Other proxies, like SOCKS, have a dedicated protocol that allows non-HTTP requests to be made as well.
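To make the forward-proxy mechanics concrete, here is a bare-socket sketch (hypothetical proxy address). The client connects to the proxy, while the target site appears only inside the HTTP request itself, in the request line and the Host header:

    import java.io.*;
    import java.net.Socket;

    public class ProxyDemo {
        public static void main(String[] args) throws IOException {
            // Connect to the PROXY's address and port, not to the target site.
            try (Socket s = new Socket("proxy.example.com", 8080);
                 Writer out = new OutputStreamWriter(s.getOutputStream());
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                // A forward proxy expects the absolute URI in the request line;
                // the Host header tells it where to relay the request.
                out.write("GET http://example.com/ HTTP/1.1\r\n");
                out.write("Host: example.com\r\n");
                out.write("Connection: close\r\n\r\n");
                out.flush();
                System.out.println(in.readLine()); // e.g. "HTTP/1.1 200 OK"
            }
        }
    }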
There are two types of HTTP proxies: reverse proxies and forward proxies.
The web browser uses a forward proxy: it sends all HTTP traffic through the proxy, and the proxy takes this traffic out to the internet. Every HTTP packet that leaves your computer is sent to the proxy before going to the target site.
The ISP blocking does not work when using a proxy because every packet that leaves your machine is addressed to the proxy and not to the target site. The proxy could be getting internet access through another ISP that has no blocks whatsoever.

FlashAttributes are not working properly when application deployed on a clustered environment

I am using redirectAttributes to pass success or failure messages to the redirect URL, so that I can show the success or failure message on the redirected page only once. If the same page is refreshed, the message does not come up again. This is working fine in a normal deployment on Tomcat.
Now we have set up a clustered environment where we have deployed the web application, but in this case the redirectAttributes behave erratically: sometimes they work and sometimes they don't.
Following is the line of code I am using to add flashAttribute to the redirect attributes.
redirectAttributes.addFlashAttribute("successMsg", message);
I am using Spring version 3.1.0.RELEASE and Tomcat 7 for the clustered environment.
I want to know whether there is any workaround for this problem. Does any newer Spring version support the use of redirectAttributes in a clustered environment?
Also, please let me know if there is another way to accomplish this.
Thanks in advance.
It sounds like your HTTP sessions for a client may not be shared between the Tomcat servers. Since the Spring Flash attributes are stored in the session, you may be experiencing the following:
Initial request goes to serverA, and a flash attribute is set in the session on serverA
A redirect occurs, and the request gets sent to serverB. serverA and serverB have different HTTP sessions for the user (assuming you have no mechanism to share them), so serverB does not see the flash attribute (it has its own separate HTTP session)
You could experience this problem intermittently if the server to which a client request gets sent is non-deterministic. For example, if both of the above described requests happen to go to serverA, then the flash attribute would work correctly, since the session would be the same.
If this is the case, then you need a mechanism to either:
Provide a 'sticky' session -- guarantee that all requests for a given client get routed to the same Tomcat server. Usually this is accomplished via a load-balancer / routing mechanism (example: nginx ip hash routing)
Implement session replication -- make sessions shared across all Tomcat servers, so that regardless of which Tomcat serves the request for a client, the HTTP session will be the same.
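As a sketch of the first option, this is roughly what the nginx ip_hash routing mentioned above looks like (host names are placeholders): each client IP is consistently hashed to the same backend, so the flash attribute is read from the same session that stored it.

    upstream tomcat_cluster {
        ip_hash;                          # same client IP -> same backend
        server tomcat1.example.com:8080;
        server tomcat2.example.com:8080;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://tomcat_cluster;
        }
    }

For the second option, Tomcat ships session replication out of the box: enable the SimpleTcpCluster element in server.xml and mark the web application as distributable in web.xml.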

Is HTTPS Stateful or Stateless?

I would like some clarity on whether HTTPS is stateful or stateless. This is with regard to a RESTful API I built. We were initially using HTTP; since HTTP essentially works over TCP/IP, which is stateless, HTTP is stateless. But when I switched to HTTPS, my API became stateful. I want to know whether my conclusion that HTTPS is stateful is correct or not.
I created my API using a middleware tool called webMethods.
Thanks
TLS/SSL is stateful. The web server and the client (browser) cache the session including the cryptographic keys to improve performance and do not perform key exchange for every request.
HTTP 1 is not stateful. HTTP/2 however defines many stateful components, but the "application layer" still remains stateless.
TL;DR: The transport pipe (TLS) is stateful, original HTTP is not.
Additional note: Cookies and other stateful mechanisms are later additions defined in separate RFCs. They are not part of the original HTTP/1.0 specification, although other stateful mechanisms such as caching and HTTP auth are defined in the HTTP/1.1 RFC and RFC 2617. HTTP 1 is said to be stateless, although in practice we use standardized stateful mechanisms. HTTP/2 defines stateful components in its standard and is therefore stateful. A particular HTTP/2 application can use a subset of HTTP/2 features to maintain statelessness.
Theory aside, in practice you use HTTP statefully in your everyday life.
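The session caching is easy to observe from Java. A minimal sketch (TLS 1.2 semantics; under TLS 1.3 resumption uses session tickets instead, so the ID comparison may not apply): two consecutive connections to the same host typically share a session ID, because the default JSSE client caches and resumes TLS sessions rather than repeating the full key exchange.

    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;
    import java.util.Arrays;

    public class TlsSessionDemo {
        public static void main(String[] args) throws Exception {
            SSLSocketFactory f = (SSLSocketFactory) SSLSocketFactory.getDefault();
            byte[] first = handshake(f);
            byte[] second = handshake(f);
            // Equal IDs mean the second handshake resumed the cached TLS
            // session (abbreviated handshake, no new key exchange).
            System.out.println("session reused: " + Arrays.equals(first, second));
        }

        private static byte[] handshake(SSLSocketFactory f) throws Exception {
            try (SSLSocket s = (SSLSocket) f.createSocket("example.com", 443)) {
                s.startHandshake();
                return s.getSession().getId();
            }
        }
    }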
The S in HTTPS is concerned with the transport, not the protocol. The semantics of the HTTP protocol remain the same for HTTPS. As the article about HTTPS on Wikipedia states,
Strictly speaking, HTTPS is not a separate protocol, but refers to use of ordinary HTTP over an encrypted SSL/TLS connection.
And the HTTP protocol is stateless by design, not because it is most frequently used over TCP/IP (nothing stops you from using HTTP over UDP, for example).
HTTPS is HTTP over a secure connection.
HTTP is a higher level than a connection.
When connecting to a web server, your connection is (maybe always?) a TCP/IP connection. So when you visit a website via HTTPS, your TCP/IP connection is encrypted.
The data itself is not encrypted by the server and/or client application. It is sent just as it usually is via HTTP, but this time over a TCP/IP connection that is secured via encryption.
If data were vehicles, and the connection the highway, then:
- using HTTP would be like the vehicles driving on the highway, where everyone can see them;
- using HTTPS would be the same, but the vehicles go through a tunnel or anything else that prevents people not on the highway from seeing them. You can tell there is traffic, but you cannot identify the vehicles, except at both ends of the tunnel.
I believe this is an image close to what happens behind the scenes. But I'm no expert; I just hope it helps.
HTTP and HTTPS are both stateless protocols. The S in HTTPS stands for Secure, and it refers to the use of ordinary HTTP over an encrypted SSL/TLS connection.
Use of JWT tokens, or the traditional way of establishing sessions using cookies, helps us overcome HTTP being a stateless protocol, as it enables the server to authenticate the identity of the client so that you don't need to log in every time you click a link to navigate the web page.
So, for example, when you log in to the website of your bank, it only asks you to enter your login details once. Once you are signed in, you don't need to re-enter them when you navigate to the account settings page, because the bank site is able to authenticate your identity using a JWT token.
JWT tokens should only be sent over HTTPS, not plain HTTP, because with HTTPS the connection is encrypted and the token cannot easily be intercepted.
Thus, HTTP and HTTPS are both stateless protocols, but JWT tokens provide a workaround for that.
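A minimal sketch of that workaround, assuming the jjwt library (the subject "alice" is a placeholder): the server keeps no per-client session at all; it signs the client's identity into the token at login and verifies the signature on each later request.

    import io.jsonwebtoken.Jwts;
    import io.jsonwebtoken.SignatureAlgorithm;
    import io.jsonwebtoken.security.Keys;
    import java.security.Key;

    public class JwtDemo {
        public static void main(String[] args) {
            Key key = Keys.secretKeyFor(SignatureAlgorithm.HS256); // server-side secret

            // Login: sign the client's identity into a token; no state is
            // stored on the server, the claims travel with the token.
            String token = Jwts.builder()
                    .setSubject("alice")
                    .signWith(key)
                    .compact();

            // Later request: verify the signature and read the claims back.
            String subject = Jwts.parserBuilder()
                    .setSigningKey(key)
                    .build()
                    .parseClaimsJws(token)
                    .getBody()
                    .getSubject();
            System.out.println("authenticated as " + subject);
        }
    }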
I believe HTTPS is a stateful protocol, as the TLS handshake contains a Session ID field, generated by the server initially to identify a session with the given client.

How to determine uniqueness of clients from http requests?

I notice that when HTTP requests are made from clients through a proxy server, the source IP address of the requests is always that of the proxy. So if many clients from a huge corporation with a proxy server access a website, I cannot tell whether the requests are from unique clients or not. Is there any way to determine the uniqueness of clients if the HTTP requests come through a proxy? I know that the MAC address is not included in the HTTP request, so I have just about ruled that out.
The simplest way would be to set a cookie on the response, and check it in the request. If it's there, then you've seen that client before (and you could include some identification in the cookie). Of course, this relies on the clients being cookie-aware and the user not having disabled cookies (or clearing them manually).
There's also the issue of some clients which may be cookie-aware, but will effectively start from scratch each time - for instance, if someone's running a program to scrape your site, it will probably start with a fresh cookie jar each time, no matter how you set the cookie.
Provide a cookie to each new user with a GUID. You can track that and even include the GUID in your server logs.
We do this with our public web server to track "unique paths" through our site.
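A minimal servlet sketch of the GUID cookie approach (the cookie name is arbitrary): reuse the GUID if the request carries one, otherwise mint one, and log it with each request so clients behind the same proxy IP can be told apart.

    import java.io.IOException;
    import java.util.UUID;
    import javax.servlet.ServletException;
    import javax.servlet.http.*;

    public class ClientIdServlet extends HttpServlet {
        private static final String COOKIE_NAME = "client-guid";

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String id = null;
            if (req.getCookies() != null) {
                for (Cookie c : req.getCookies()) {
                    if (COOKIE_NAME.equals(c.getName())) id = c.getValue();
                }
            }
            if (id == null) { // first visit from this client: issue a GUID
                id = UUID.randomUUID().toString();
                Cookie c = new Cookie(COOKIE_NAME, id);
                c.setMaxAge(60 * 60 * 24 * 365); // persist for a year
                resp.addCookie(c);
            }
            // Log the GUID next to the proxy IP to tell clients apart.
            System.out.println(req.getRemoteAddr() + " guid=" + id);
            resp.getWriter().println("ok");
        }
    }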
