Web Performance Test: SignalR - Unrecognized user identity - visual-studio-2013

When running a recorded Web Test using Visual Studio, initializing the SignalR connection triggers the following error:
Unrecognized user identity. The user identity cannot change during an
active SignalR connection.
Request:
GET /Computer/signalr/connect?
transport=foreverFrame&
connectionToken=xxx&
connectionData=yyy&
tid=7&
frameId=1 HTTP/1.1
User-Agent : Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
Accept : */*
Accept-Language : en-GB
Accept-Encoding : GZIP
Host : test.host.com
Cookie : __RequestVerificationToken_L01XTS1NYXN0ZXI1=YCuMgJ7WD6QNtHnUvgM4EFvVJ5lllR477xjaMAzFogypdqXEFV054ygGy0Spnqwo3LJDbDHyzGudF8QdTRZW30zcBHGh8oI7CEj2L0k01Eg1
Response:
HTTP/1.1 403 Forbidden
Pragma : no-cache
Transfer-Encoding : chunked
X-Content-Type-Options : nosniff
Cache-Control : no-cache
Content-Type : text/html
Date : Wed, 03 Sep 2014 13:42:03 GMT
Expires : -1
Update:
Looks like the problem is reconciling a change in user status with an active connection.
If a user's authentication status changes while an active connection exists, the user will receive an error that states, "The user identity cannot change during an active SignalR connection."
In that case, your application should re-connect to the server to make sure the connection id and username are coordinated.
Not sure how to coordinate the username and connection id during the webtest.

I would suspect a dynamic parameter that has not been handled, so a value provided by the server when the test was recorded is being replayed when the test is executed. The server then detects that a request is passing an unexpected (i.e. stale) value and generates that message.
There are several web pages giving advice on debugging web performance tests. One technique is to record two versions of the test that are, as nearly as possible, identical, then use a text comparison program to compare the two ".webtest" files. It can also help to record a third test that logs in as a different user but is otherwise as nearly as possible identical to the other two, then compare this third ".webtest" against the others. The comparison will, hopefully, reveal one or more dynamic parameters that had not previously been noticed.
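If the unhandled dynamic parameter turns out to be the SignalR connectionToken, one option in a Visual Studio Web Performance Test is a custom extraction rule that pulls a fresh token out of the negotiate response and promotes it to a context parameter. A minimal sketch, assuming the negotiate response carries a JSON "ConnectionToken" field (verify this against your own recorded traffic):

using System.Text.RegularExpressions;
using Microsoft.VisualStudio.TestTools.WebTesting;

// Extracts a fresh ConnectionToken from the SignalR negotiate response so
// that the stale recorded token is not replayed on later runs.
public class SignalRTokenExtractionRule : ExtractionRule
{
    public override void Extract(object sender, ExtractionEventArgs e)
    {
        Match m = Regex.Match(e.Response.BodyString,
            "\"ConnectionToken\"\\s*:\\s*\"([^\"]+)\"");
        if (m.Success)
        {
            // ContextParameterName is configured on the rule in the editor.
            e.WebTest.Context[ContextParameterName] = m.Groups[1].Value;
            e.Success = true;
        }
        else
        {
            e.Success = false;
            e.Message = "ConnectionToken not found in negotiate response.";
        }
    }
}

Attach the rule to the negotiate request, then replace the recorded connectionToken value in the connect request's query string with the bound context parameter.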

Related

Empty Request Body rejects request

I have an ASP.NET Web API method (POST) that works on one IIS server, but when deployed to another IIS server (both servers are IIS 8), if the request body is kept empty I get an error returned as below:
Request Rejected. The requested URL was rejected. Please consult with your administrator.
If I put any character in the request body, I get the expected result! Any idea what I can check? I believe it is some setting on the server, as the service works on the former server; it is only the latter server that doesn't accept an empty request body!
Unsuccessful request
Successful request/response when some garbage character is entered in the request body
Update
The response headers for the unsuccessful response:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Connection: close
Content-Length: 188
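To narrow this down, both cases can be reproduced outside the browser with a minimal client and the raw responses from the two servers compared. A sketch using .NET's HttpClient, where the URL is a placeholder rather than the real endpoint:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class EmptyBodyRepro
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            string url = "http://server/api/values"; // placeholder endpoint

            // Case 1: completely empty body (rejected on the second server).
            HttpResponseMessage empty =
                await client.PostAsync(url, new StringContent(""));
            Console.WriteLine("Empty body: " + (int)empty.StatusCode);

            // Case 2: one garbage character (reported to work).
            HttpResponseMessage oneChar =
                await client.PostAsync(url, new StringContent("x"));
            Console.WriteLine("One char:   " + (int)oneChar.StatusCode);
        }
    }
}

Comparing the two servers' raw responses to the empty-body request can reveal whether the rejection comes from Web API itself or from something sitting in front of IIS on the second server.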

Using Active Directory roles while accessing a website from JMeter

In our company, the web app that we are testing uses the Active Directory roles assigned to the user to control access to the website.
Edit:
Important information that I forgot to mention is that, while accessing the website I am not prompted for the username and password. The website is only displayed if I have the correct Active Directory role assigned to my user profile.
For Example,
Opening IE as myself - able to access the website.
Opening IE as a service account (with required Active Directory roles) - able to access the website.
Opening IE as a different user outside my project - not able to access the website.
I have tried (skeptically, desperate to get it working) Basic/Kerberos authorization in the HTTP Authorization Manager, and even running JMeter as that service account; still no luck. I keep getting the result below:
Thread Name: Users 1-1
Sample Start: 2017-04-26 17:08:18 CDT
Load time: 83
Connect Time: 13
Latency: 83
Size in bytes: 438
Sent bytes: 136
Headers size in bytes: 243
Body size in bytes: 195
Sample Count: 1
Error Count: 1
Data type ("text"|"bin"|""): text
Response code: 401
Response message: Unauthorized
Response headers:
HTTP/1.1 401 Unauthorized
Server: nginx/1.10.1
Date: Wed, 26 Apr 2017 22:08:18 GMT
Content-Type: text/html
Content-Length: 195
Connection: keep-alive
WWW-Authenticate: Negotiate
X-Frame-Options: deny
X-Content-Type-Options: nosniff
HTTPSampleResult fields:
ContentType: text/html
DataEncoding: null
I am just trying to find out if any one here has got the JMeter working in a similar scenario/ if any one can point me in the right direction to overcome this hurdle.
Thanks all for your help in advance.
You need to identify the exact implementation of the authentication in your application.
Given that you receive WWW-Authenticate: Negotiate, this is definitely not Basic HTTP auth.
Negotiate may stand either for NTLM or for Kerberos (or in some cases for both, i.e. if Kerberos is not successful it will fall back to NTLM), and JMeter needs to be configured differently for these schemes.
For example, for NTLM you need to provide only the credentials and domain in the HTTP Authorization Manager, while for Kerberos you also need to populate the Realm and set up your Kerberos settings (KDC and login config) in the jaas.conf and krb5.conf files; see the sketch below.
See the Windows Authentication with Apache JMeter article for more information and example configurations.
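As an illustration, the two files referenced above might look roughly like this; the realm, the KDC host, and the file locations are placeholders for your environment, not values taken from the question:

# krb5.conf — point JMeter at it with -Djava.security.krb5.conf=krb5.conf
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com
    }

// jaas.conf — point JMeter at it with -Djava.security.auth.login.config=jaas.conf
JMeter {
    com.sun.security.auth.module.Krb5LoginModule required doNotPrompt=true useTicketCache=true;
};

The Realm field in the HTTP Authorization Manager should then match default_realm (Kerberos realms are conventionally written in uppercase).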

CakePHP: Problems with session, autoRegenerate, requestCountdown, AJAX

What I researched elsewhere
An answer to this question explains how to use autoRegenerate and requestCountdown to prolong the session for as long as the user is active.
This question has an answer explaining what happens with ajax calls:
If you stay on the same page, JavaScript makes a request, which generates a new session_id, and doesn't record the new session_id.
All subsequent AJAX requests use an old session_id, which is declared invalid, and return an empty session.
Somewhere else it was said that some browsers send a different userAgent with AJAX requests, and that Session.checkAgent has to be set to false to guarantee that AJAX calls work. But as those AJAX calls only fail sometimes, I don't think this is the reason for the problem.
My problem is
I had set requestCountdown to 1, but then I received errors on pages that automatically perform AJAX requests when the page is loaded. I increased requestCountdown to 4, which should be enough most of the time. But some users with some browsers receive error messages because one or more of the AJAX calls receives a "403 Forbidden" response. For the same page, sometimes the error appears and sometimes not.
What I want is: if the session length is, e.g., 30 minutes and the user opens a page (or triggers an event that causes an AJAX call) at, let's say, minute 29, the session should be prolonged for another 30 minutes.
But I seem to be stuck between two problems:
If the countdown is set to a value greater than 1 and the user happens to visit a page that doesn't contain any AJAX requests, the countdown value is decreased only by 1; it doesn't reach 0, and the session is not regenerated. E.g. if the countdown is set to 10, the user will have to click 10 times in order to regenerate the session.
If the countdown is set to 1, the session will be regenerated with every request, but on some browsers some AJAX calls will sometimes fail.
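To make the dilemma concrete, here is the behavior I am describing, sketched in C#-style pseudocode (a model of the behavior as I understand it, not CakePHP's actual code):

// Models the requestCountdown/autoRegenerate behavior described above.
class SessionCountdownModel
{
    private readonly int requestCountdown; // configured value, e.g. 1 or 10
    private int countdown;

    public SessionCountdownModel(int requestCountdown)
    {
        this.requestCountdown = requestCountdown;
        this.countdown = requestCountdown;
    }

    public void OnRequest()
    {
        countdown--;
        if (countdown <= 0)
        {
            RegenerateSessionId();          // new session id, new cookie
            countdown = requestCountdown;   // countdown starts over
        }
        // With requestCountdown = 10, ten requests are needed before the
        // session is regenerated; with 1, every request regenerates, which
        // can race with parallel AJAX calls still sending the old id.
    }

    private void RegenerateSessionId() { /* issues a new session id */ }
}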
My questions
To be sure I am understanding it correctly: a session cannot simply be prolonged; it has to be "regenerated", which implies that the session id is changed?
Maybe this is all conceptually correct, but I wonder if I am just missing an additional setting or something to get it to work?
Example request and response headers (from my test machine):
Request
-------
POST /proxies/refreshProxiesList/0 HTTP/1.1
Host: localhost:84
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0
Accept: */*
Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
X-Requested-With: XMLHttpRequest
Referer: http://localhost:84/users/home
Cookie: CakeCookie[lang]=de; CAKEPHP=b4o4ik71rven5478te1e0asjc6
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Content-Length: 0
Response
--------
HTTP/1.1 403 Forbidden
Date: Tue, 18 Feb 2014 10:24:52 GMT
Server: Apache/2.4.4 (Win32) OpenSSL/1.0.1e PHP/5.5.3
X-Powered-By: PHP/5.5.3
Content-Length: 0
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8
CakePHP uses sessions with cookies. It sounds to me like the problem is that while the session itself can be prolonged through the timeout option, the session cookie cannot easily be prolonged, so you end up losing your session anyway. The people in that thread are suggesting to refresh the session in order for it to create a new cookie.
You could, as one person suggested, extend the life of the session cookie to be much longer, though the problem will still be there; it'll just be less obvious. Maybe you could write something yourself to resave the session cookie with a new expiration time? ...Though I haven't found mentions of people doing this, so maybe not.
Googling for information about CakePHP and session cookie expiration, it seems that this is a known problem (see "CakePHP Session updates but cookie expiry doesn't") that people have made workarounds for.

AJAX call to internal server works in IE but not in other browsers

I'm calling a page on our internal server. The domain looks like this:
http://server.domain:12345/x.html
Now, with IE this works just fine; I'm getting the data. (My problem there is that IE caches the website after the first call forever, but never mind.)
Now, if I try to do exactly the same in Firefox, it won't work; the same goes for Google Chrome.
Firebug says this:
Response headers
Connection Keep-Alive
Content-Length 109
Content-Type text/html; charset=UTF-8
Keep-Alive timeout=5000
Server AbWeb Version SRSG 1.34
Set-Cookie sessionkey=80da7dfe-1c9c-4460-9592-3ce55cecb379
Request headers
Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
Accept-Encoding gzip, deflate
Accept-Language de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
Connection keep-alive
Host server.domain:12345
Origin http://otherserver.domain
Referer http://otherserver.domain/test/
User-Agent Mozilla/5.0 (Windows NT 6.1; WOW64; rv:5.0) Gecko/20100101 Firefox/5.0
Chrome says this:
XMLHttpRequest cannot load http://server.domain:12345/x.html. Origin http://otherserver.domain is not allowed by Access-Control-Allow-Origin.
server.domain:12345/x.html
Failed to load resource
It seems you are performing cross-domain JavaScript calls. The target server must set the Access-Control-Allow-Origin HTTP header. In your case the server http://server.domain must send a header like:
Access-Control-Allow-Origin: http://otherserver.domain
I do not know why it works in IE; it may have to do with your security zones, as you are just working in the intranet.
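To illustrate the general shape of the fix (I do not know how the AbWeb server is configured), here is a stand-in HTTP server in C# that emits the header; this is purely an illustration, not your actual server:

using System;
using System.Net;
using System.Text;

class CorsHeaderDemo
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:12345/");
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();

            // The browser compares this value against the calling page's
            // origin before handing the response to the JavaScript caller.
            ctx.Response.Headers.Add("Access-Control-Allow-Origin",
                "http://otherserver.domain");

            byte[] body = Encoding.UTF8.GetBytes("<html>x</html>");
            ctx.Response.OutputStream.Write(body, 0, body.Length);
            ctx.Response.Close();
        }
    }
}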
See another example:
Jquery form doesn't show submission message on web server but it shows submission message on local host

Do we need the "Expect: 100-continue" header in the xfire request header?

I found that Apache XFire adds one header parameter to its POST requests:
POST /testservice/services/TestService1.1 HTTP/1.1
SOAPAction: "testAPI"
Content-Type: text/xml; charset=UTF-8
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; XFire Client +http://xfire.codehaus.org)
Host: 192.168.10.111:9082
Expect: 100-continue
Will this Expect: 100-continue make the round trip between the XFire client and its endpoint server a bit wasteful, because it will use one more handshake for the origin server to return the "willing to accept request" response?
This is just my guess.
Vance
I know this is an old question, but as I was just researching the subject, here is my answer. You don't really need to use "Expect: 100-continue", and it does indeed introduce an extra round trip. The purpose of this header is to indicate to the server that you want your request to be validated before posting the data. This also means that if it is set, you are committed to waiting (within your own timeout period, not indefinitely!) for the server's response (either 100 or an HTTP failure) before sending your form or data. Although it seems like extra expense, it is meant to improve performance in failure cases, by allowing the server to let you know not to send the data (since the request has already failed).
If the header is not set by the client, this means you are not waiting for a 100 code from the server and should send your data in the request body. Here is the relevant standard: http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html (jump to section 8.2.3).
Hint for .NET 4 users: this header can be disabled using the static property "Expect100Continue" (see the sketch below).
Hint for libcurl users: there was a bug in old version 7.15 where disabling this header didn't work; it is fixed in newer versions (more here: http://curl.haxx.se/mail/lib-2006-08/0061.html)
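For the .NET hint: the property lives on ServicePointManager (process-wide) and on individual ServicePoint instances; a minimal sketch:

using System.Net;

class DisableExpect100Continue
{
    static void Main()
    {
        // Disable Expect: 100-continue for all HttpWebRequests in the process.
        ServicePointManager.Expect100Continue = false;

        // Or disable it only for one endpoint's service point.
        var request = (HttpWebRequest)WebRequest.Create(
            "http://192.168.10.111:9082/testservice/services/TestService1.1");
        request.ServicePoint.Expect100Continue = false;
    }
}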
