I am testing my WCF service with a basicHttpBinding endpoint using SoapUI and Fiddler2 inside the LAN, and it is very fast for me, but my client outside the LAN reports response times about 10 times slower than what I see inside the LAN.
- Can you give me some hints on how I can trace this problem? What could be the possible reasons?
- Is there any unit test I can use to make calls as if they came from outside the LAN, even from different countries or continents?
Thank you very much.
I am using JMeter for testing a website. My question is whether JMeter is dependent on the internet connection. What if there is no internet connection? Will JMeter be able to send requests?
If you are running JMeter locally, yes -- the overall load capacity of your test is entirely dependent on your last-mile connection. But if you don't want to fork out the $$$ for your own OC-3, you can look at hosted JMeter solution providers -- basically JMeter as a Service.
JMeter just captures the web requests and mimics the behavior, helping you automate the requests. Since you are simulating web requests, you need connectivity to your web application. If your website is hosted locally, you don't need an internet connection, but you will still need a way to reach the website, viz. over the LAN.
Again, the results will vary based on the network bandwidth, latency, etc. If there is no internet connection and your browser is not able to make requests to your site, then JMeter will also not be able to reach the site and won't be able to make requests.
I have a WCF client in my WPF app. The WCF client is generated with asynchronous operations.
I am making parallel calls and awaiting the resulting Tasks, roughly like the sketch below.
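For illustration only (DataServiceClient and its operations are made-up names standing in for the generated proxy):

    // Inside an async method; the client type and its operations are
    // hypothetical stand-ins for the generated async WCF client.
    var client = new DataServiceClient();

    // Start both calls, then await the resulting Tasks together.
    Task<Order[]> ordersTask = client.GetOrdersAsync();
    Task<Customer[]> customersTask = client.GetCustomersAsync();
    await Task.WhenAll(ordersTask, customersTask);

    Order[] orders = ordersTask.Result;
    Customer[] customers = customersTask.Result;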
I noticed some delay in getting the data, and when I sniffed the traffic with Microsoft Message Analyzer I saw that for some of the calls I made, two requests were sent about 500 ms apart but only one response came back.
In my app I make only one call.
The question is: why were two underlying requests sent by the WCF client?
P.S. I checked by hosting the service under both IIS and IIS Express; the result is the same in both cases.
Your issue here is not with your client or service, but with your analysis tooling.
Microsoft Message Analyzer is designed for low level network monitoring.
Higher level protocols such as SOAP will almost certainly utilise more than one network message per logical call.
WCF supports lower-level protocols such as UDP, where the number of messages on the network may bear more resemblance to the number of service calls you make, but this is by no means guaranteed.
As such, the service itself is the ultimate arbiter of how many logical service calls it has received.
If you do need to analyse the underlying traffic, you could also look at WCF Tracing, which groups network calls together into "conversations" that resolve to a single instance of a client-service request/response pair.
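For reference, WCF tracing is normally switched on in configuration rather than code. A minimal sketch for app.config/web.config (the listener name and log path are arbitrary examples):

    <system.diagnostics>
      <sources>
        <source name="System.ServiceModel"
                switchValue="Information, ActivityTracing"
                propagateActivity="true">
          <listeners>
            <add name="wcfTrace"
                 type="System.Diagnostics.XmlWriterTraceListener"
                 initializeData="C:\logs\WcfTrace.svclog" />
          </listeners>
        </source>
      </sources>
    </system.diagnostics>

The resulting .svclog file can then be opened in the Service Trace Viewer tool (SvcTraceViewer.exe), which displays the conversation grouping described above.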
I'm trying to figure out how to do load testing on a long-polling or WebSocket type of architecture.
I need to set up multiple clients which subscribe to channels on one side and wait for responses. The load testing should measure the time it takes for messages from the publishing server to reach the clients.
Any ideas?
As said here,
SignalR uses a particular protocol to communicate so it's best that you use our tool to generate load for testing your server(s).
So, SignalR comes with Crank. Crank can only connect to PersistentConnection-based apps, not Hub-based apps.
This other answer could help you with a Hub-based app.
You can use Crank, as mentioned above. One of the parameters is Transport, so you can specify only LongPolling:
crank.exe /Url:http://someUri /Transport:LongPolling
Use JMeter (https://jmeter.apache.org/) and flood the server with HTTP connections, setting the transport type as a header.
I am developing an app which connects to my webserver. During development I have the webserver and phone emulator on the same machine.
How can I test how my app behaves when there's no network connectivity? Are there test hooks on the emulator? Should I use Fiddler to fake timeouts? I don't see any test hooks on the GetIsNetworkAvailable() call...
Thanks,
The approach I've used is to wrap the appropriate methods in my own NetworkService class; this lets me switch the code out for a stub version during unit tests and integration tests on the emulator, along the lines of the sketch below.
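A minimal sketch of that idea (the type names are just illustrative):

    using System.Net.NetworkInformation;

    // Abstraction the rest of the app codes against.
    public interface INetworkService
    {
        bool IsNetworkAvailable { get; }
    }

    // Real implementation used at runtime.
    public class NetworkService : INetworkService
    {
        public bool IsNetworkAvailable
        {
            get { return NetworkInterface.GetIsNetworkAvailable(); }
        }
    }

    // Stub used in tests to simulate "no connectivity".
    public class StubNetworkService : INetworkService
    {
        public bool IsNetworkAvailable { get; set; }
    }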
The NetworkInterface.NetworkInterfaceType property offers an enumeration that identifies the network currently servicing internet requests. It will return None if there isn't an internet connection available. (Unfortunately it doesn't provide health information on the nature of the available connection, so if you have poor coverage it will still return MobileBroadbandGsm.)
You can find the full information on the NetworkInterfaceType enumeration here.
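For example (a sketch; the property lives in the Microsoft.Phone.Net.NetworkInformation namespace):

    using Microsoft.Phone.Net.NetworkInformation;

    // Note: this property can block while the stack works out the
    // current network, so avoid calling it on the UI thread.
    var currentType = NetworkInterface.NetworkInterfaceType;

    if (currentType == NetworkInterfaceType.None)
    {
        // No network is currently servicing internet requests.
    }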
The Performance Golden Rule from Yahoo's performance best practices is:
80-90% of the end-user response time is spent downloading all the components in the page: images, stylesheets, scripts, Flash, etc.
This means that when I'm developing on my local webserver it's hard to get an accurate idea of what the end user will experience.
How can I simulate latency so that I can understand how my application will perform when I've deployed it on the web?
I develop primarily on Windows, but I would be interested in solutions for other platforms as well.
A laser modem pointed at the mirrors on the moon should give latency that's out of this world.
Fiddler2 can do this very easily. Plus, it does so much more that is useful when doing development.
These are Firefox plugins that might help:
- YSlow, which analyzes web pages based on Yahoo!'s rules.
- Firefox Throttle, which can throttle connection speed (Windows only).
You can just set up a proxy outside your network that tunnels traffic from your local browser out to it and back to your web server. It would be quite realistic (of course, it depends on where you put the proxy).
Otherwise you can find many ways to implement it in software, such as the SSH tunnel sketched below.
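A sketch, assuming a remote box you can SSH into as user@far-away-host and a local web server on port 80:

    # Local port 8080 goes out to the remote box, which immediately
    # forwards it back to the local web server on port 80, so every
    # request and response crosses the WAN twice.
    ssh -L 8080:localhost:9000 -R 9000:localhost:80 user@far-away-host

Then browse to http://localhost:8080; each round trip picks up roughly twice the round-trip time to the remote host.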
Run the web server on a nearby Linux box and configure NetEm to add latency to packets leaving the appropriate interface.
If your web server cannot run under Linux, configure the Linux box as a router between your test client machine and your web server, then use NetEm anyway.
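For reference, a typical NetEm invocation (assuming the traffic leaves via eth0) looks like this:

    # Add 200 ms of delay, +/- 50 ms of jitter, to everything leaving eth0.
    tc qdisc add dev eth0 root netem delay 200ms 50ms

    # Remove it again when you are done.
    tc qdisc del dev eth0 root netem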
While there are many ways to simulate latency, including some very good hardware solutions, one of the easiest for me is to run a TCP proxy in a remote location. The proxy listens and then directs the traffic back to my final destination. On a remote server, I run a Unix program called balance. I then point this back to my local server.
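For example, on the remote box (the port and host name are placeholders):

    # Listen on port 8080 and forward everything to my local server's port 80.
    balance 8080 my-local-server.example.com:80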
If you need to simulate it for just a single server request, a simple way is to make the server sleep() for a second before returning.
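For instance, in an ASP.NET page or handler (the one-second figure is arbitrary):

    // Crude latency simulation: hold every response for one second.
    System.Threading.Thread.Sleep(1000);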