We have built a web service based on ASP.NET that serves map tiles, similar to Google Maps.
The client requires that the response time for 1,000 concurrent requests be less than 1 second.
Currently we use hardware load balancing: we deploy the service to 4 servers running IIS, and the load balancer distributes incoming requests across them.
However, someone suggested that we should not use the load balancer because of the browser's request limit.
It is said that for a given domain, the number of requests a browser can send at the same time is limited (maybe 10 or so).
So our client application should instead request tiles from the different tile servers directly.
Now I am confused: which is the right way?
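For context, the suggestion in the question amounts to what is usually called domain sharding: the client spreads tile requests across several hostnames, so the browser's per-host connection limit applies to each hostname separately. A minimal sketch (the hostnames and URL pattern are placeholders, not from the question):

```typescript
const TILE_HOSTS = [
  "tile0.example.com",
  "tile1.example.com",
  "tile2.example.com",
  "tile3.example.com",
];

function tileUrl(z: number, x: number, y: number): string {
  // A deterministic choice keeps each tile cacheable under a single URL.
  const host = TILE_HOSTS[(x + y) % TILE_HOSTS.length];
  return `https://${host}/tiles/${z}/${x}/${y}.png`;
}
```

Note that all of these hostnames can still resolve to the same load balancer, so the two approaches are not mutually exclusive.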
It is said that for a given domain, the number of requests a browser can send at the same time is limited (maybe 10 or so)
This is only sort of true. Most browsers won't make more than a few simultaneous requests to the same domain. However, there is no set standard or defined limit, and it is often configurable.
How do you know all your users will be accessing your service through a browser?
What happens if you have 1000 concurrent users?
Use a load balancer
I have a browser plugin which will be installed on 40,000 desktops.
This plugin will connect to a backend configuration file available via HTTPS, e.g. https://somesite/config_file.js.
The plugin is configured to poll this backend resource once a day.
But there is only one backend server, so if all 40,000 endpoints start polling at the same time, the server might crash.
I could randomize the polling schedule on the desktop plugins, but randomization still does not guarantee that the server will not be overloaded.
Would using WebSockets in this scenario solve the scalability issue?
Polling once a day is very little.
I don't see any upside to WebSockets unless you switch to push and have more frequent notifications.
However, staggering the polling does make a lot of sense, since syncing all the requests to the same time is like writing a DoS attack against your own server.
Staggering doesn't necessarily have to be random and IMHO, it probably shouldn't.
You could start with a fixed time and add a second per client ID, allowing for ~86K connections in 24 hours, which should be easy for any server to handle.
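A minimal sketch of that fixed-slot staggering in TypeScript, assuming each plugin has been assigned a numeric client ID at install time (a hypothetical detail, not in the question):

```typescript
const SECONDS_PER_DAY = 86_400;

function scheduleDailyPoll(clientId: number, poll: () => void): void {
  // Offset each client by one second per ID, wrapping around the day,
  // so the server sees at most ~1 request per second.
  const offsetSeconds = clientId % SECONDS_PER_DAY;
  const now = new Date();
  const next = new Date(now);
  next.setHours(0, 0, 0, 0);          // today at midnight...
  next.setSeconds(offsetSeconds);     // ...plus this client's slot
  if (next.getTime() <= now.getTime()) {
    next.setDate(next.getDate() + 1); // slot already passed: poll tomorrow
  }
  setTimeout(() => {
    poll();
    scheduleDailyPoll(clientId, poll); // re-arm for the following day
  }, next.getTime() - now.getTime());
}
```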
As a side note, 40K concurrent connections might not be as hard to achieve as you imagine.
EDIT (relating to the comments)
Websockets vs. Server Sent Events:
IMHO, when pushing data (vs. polling), I would prefer WebSockets over Server-Sent Events (SSE).
WebSockets have a few advantages, such as client-to-server communication, which allows clients to ping the server and confirm that the connection is still alive.
The Specific Use-Case:
From the description in the question and the comments, it seems that you're using browser clients with a custom plugin and that the updates you wish to install daily might require the browser to be active.
This raises different questions that affect the implementation (are the client browsers open all day? do you have any control over the client browsers and their environment? can you guarantee installation while the browser is closed?).
...
IMHO, you might consider having the client plugins test for an update each morning as they load for the first time during that day (first access).
People arrive at work at different times and open their browsers for the first time on different schedules, so the 40K requests you're expecting will be naturally scattered across that timeline (probably a 20-30 minute timespan).
This approach makes sure that the browsers and computers are actually running (making the update possible) and that the update requests are staggered over a period of time (about 33 requests per second over 20 minutes, if my assumption is correct).
If you're serving a pre-written static configuration file (perhaps updated on the server daily), avoiding dynamic content and minimizing any database calls, then 33 req/sec should be very easy to manage.
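A minimal sketch of that first-access check in TypeScript, assuming the plugin can persist a value between sessions (localStorage here; the storage key is illustrative, the URL is from the question):

```typescript
const CONFIG_URL = "https://somesite/config_file.js";

async function maybeFetchDailyConfig(): Promise<void> {
  const today = new Date().toISOString().slice(0, 10); // e.g. "2024-01-31"
  if (localStorage.getItem("lastConfigCheck") === today) {
    return; // already checked today: stay quiet
  }
  const response = await fetch(CONFIG_URL);
  if (response.ok) {
    const config = await response.text();
    // ...apply the configuration here...
    localStorage.setItem("lastConfigCheck", today);
  }
}

// Run once when the plugin loads with the browser.
maybeFetchDailyConfig();
```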
I have a Spring Boot based microservice which is currently hit from a mobile app. Now we are developing a browser-based client for the same microservice. The request and response parameters are the same for the mobile app and the browser. The mobile app generates around 10,000 requests per second and the browser client will generate around 20,000, so there would be more than 30,000 hits to this microservice each second.
We know that "Spring controllers are singletons (there is just one instance of each controller per web application)".
Would it be a good approach (with respect to performance) to have two separate controllers for this same microservice, one for mobile app users and the other for browser users? Would having two controller instances running in parallel improve the microservice's performance?
I am looking for the best way to handle the increasing number of hits through both channels.
Any suggestions would be highly appreciated.
When the request and response are the same for browser and mobile clients, there is no point in creating two different controllers or services. Keep your app simple, with one controller doing the job; your service then sees the mobile and web clients in the same way. Note that a singleton controller does not serialize requests: the servlet container already handles requests concurrently on multiple threads, so a second controller would not add any parallelism.
Whenever there is an increase in the load the app has to handle, you can go for horizontal scaling, using a routing/load-balancing service like Zuul or nginx.
Just scale the instances behind the load balancer up or down according to the load you need to handle.
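As a rough sketch, an nginx configuration for that kind of horizontal scaling might look like this (hostnames and ports are placeholders, not from the question):

```nginx
# Round-robin load balancing across identical microservice instances.
upstream microservice_backend {
    server app1.internal:8080;
    server app2.internal:8080;
    server app3.internal:8080;  # add or remove instances as load changes
}

server {
    listen 80;
    location / {
        proxy_pass http://microservice_backend;
    }
}
```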
I am trying to monitor the time spent on the server using Wily Introscope, but I observe that the time reported in Wily for each of the servers is in the range of 100 to 1000 ms, whereas the time taken for a page to load in the browser is almost 5 seconds.
Why is the tool reporting an incorrect value? How do I get the complete time in Wily?
the time reported in Wily for each of the servers is in the range of 100 to 1000 ms, whereas the time taken for a page to load in the browser is almost 5 seconds
The reason is that in the browser you see all the outgoing traffic. Typically a web page involves one POST request followed by multiple GET requests: the POST carries your text/HTML data, while the GETs fetch images, CSS, JavaScript, etc.
Mostly those GET requests are answered by the web server, while the POST request is served by involving the app server.
The time reported in Wily is only the time spent on the server to serve the POST request. Your GET requests will not be captured by Wily.
Why is the tool reporting an incorrect value? How do I get the complete time in Wily?
The tool is not reporting an incorrect value. The tool typically sits on a JVM, so it monitors the activity of that JVM and provides metrics for it. That is the expected behavior.
A page is a complex item, requiring parsing of the page contents and then requests to multiple servers/sources. So your page load time is made up of the request time for each individual component, processing time for page parsing and JavaScript (depending upon virtual user type), requests for the page components, where they are served from, etc. Compare this to your Wily monitoring, which may only cover one of the tiers involved.
For instance, you may have static components being served from a CDN, which has zero visibility in your Wily model. You might also be looking at your app server when the majority of the time is spent serving static components off a web server, which is oft ignored from a monitoring perspective. Your page could also include third-party components whose load time gets counted in the LoadRunner time but not in the Wily time.
It all comes down to a question of sampling. It is very common for what you see in your deep-diagnostics tool to be one piece of the total page load, or an individual request within a page where many more components are being loaded. If you want an even more interesting look, enable the W3C time-taken field in your web HTTP request logs and look at the cost of every individual request. You can do this in the web layer of your app servers as well. Wily will then provide an internal breakdown for those items which are "slow."
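For illustration, a W3C extended log with time-taken enabled looks roughly like this (the field selection and values are invented; in IIS, time-taken is reported in milliseconds):

```
#Fields: date time cs-method cs-uri-stem sc-status time-taken
2015-03-01 10:15:02 GET /styles/site.css 200 12
2015-03-01 10:15:02 GET /images/banner.png 200 340
```

Comparing the per-request time-taken values against the Wily numbers gives a rough sense of how much of the 5 seconds is spent outside the monitored tier.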
We would like to check every 3 seconds whether there are any updates in our database, using jQuery $.ajax. The technology is clear, but are there any reasons not to fire so many AJAX calls (browser, cache, performance, etc.)? The web application runs for roughly 10 hours per day on every client.
We are using Firefox.
AJAX calls have implications not so much on the client side (browser) as on the server side. Every AJAX call is a hit on the server: more bandwidth consumption and a higher request rate, which in turn increases server load. AJAX is meant to increase client friendliness at the cost of server-side overhead.
You should think carefully before implementing infinite repeating AJAX calls with an arbitrary delay between them. How did you come up with 3 seconds? If you're going to be polling your server in this way, you need to reduce the frequency of requests to as low a number as possible. Here are some things to think about:
Is the data you're fetching really going to change that often?
Can your server handle a request every 3 seconds, how long does the operation take for a single request?
Could you increase the delay after inactivity or guess based on previous server responses how long the next delay should be?
Can you stop the polling completely when the window loses focus, and restart it when it's in the foreground again (see the sketch after this list)?
If a user opens the same page in a website 10 times, your server should recognise this and throttle its responses, either using a cookie with a unique value in it (recommended) or based on the client IP address.
Above all, instead of polling, consider using HTML5 WebSockets to "push" data to the client; most modern browsers support this. Several frameworks are available that will fall back to polling if WebSockets are not available; one excellent .NET example is SignalR.
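A minimal sketch of the focus-based throttling mentioned above, using the Page Visibility API (the endpoint and interval are placeholders):

```typescript
const POLL_INTERVAL_MS = 3_000;

let timer: number | undefined;

function poll(): void {
  fetch("/api/updates")                 // placeholder endpoint
    .then((res) => res.json())
    .then((data) => {
      // ...apply updates to the page...
    })
    .finally(() => {
      if (!document.hidden) {
        timer = window.setTimeout(poll, POLL_INTERVAL_MS);
      }
    });
}

document.addEventListener("visibilitychange", () => {
  if (document.hidden) {
    window.clearTimeout(timer);         // pause while in the background
  } else {
    poll();                             // resume and refresh immediately
  }
});

poll();
```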
I've seen a lot of applications making a request every 5 seconds or so, for instance a remote control (web player) or a chat, so doing this should not be a problem for the browser.
What would be good practice is to wait for an answer before making a new request, which means not firing the requests with setInterval, for instance (see the sketch at the end of this answer).
(In case the user loses their connection, that prevents opening too many connections.)
Also verify that all the processing associated with one answer is finished before the next answer arrives.
And if you have access to the server side, configure your server to set the HTTP header Connection: Keep-Alive, so you won't add too much TCP overhead to each of your requests. That can speed up small requests a lot.
The last point, of course, is verifying that your server is able to answer that many requests.
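A sketch of the difference, using the jQuery $.ajax call from the question (the endpoint is a placeholder, and jQuery is assumed to be loaded globally):

```typescript
declare const $: any; // jQuery, assumed loaded globally

// Anti-pattern: setInterval fires every 3 s regardless of whether the
// previous request has completed, so requests pile up on a slow connection:
//   setInterval(() => $.ajax({ url: "/api/updates" }), 3000);

// Better: schedule the next request only after the current one settles.
function pollOnce(): void {
  $.ajax({ url: "/api/updates" })       // placeholder endpoint
    .always(() => setTimeout(pollOnce, 3000));
}
pollOnce();
```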
You are looking for changes every 3 seconds, so traffic increases because you are fetching data continuously at short intervals. It may also steadily increase memory usage on the browser side. Since you need to check for updates in the database, you could go for alternatives like Sheepjax, Comet, or SignalR. (SignalR generally broadcasts the data to all users, and Comet needs a license.) Hope this helps.
Is there any easy-to-use utility/tool/profiler/benchmark that can test the maximum number of users a web application is able to support, by analyzing session size, CPU speed, memory size, etc., and 'predict' when the server will be overloaded?
Apache JMeter is a simple-to-use tool that fires requests at your web server. You can try different numbers of users (i.e. sessions) with different numbers or types of requests, and keep increasing the number of users until the latency of your system becomes unacceptably slow.
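For example, once a test plan has been built in the JMeter GUI, it can be run headless from the command line (the file names here are placeholders):

```
jmeter -n -t webapp_load_test.jmx -l results.jtl
```

-n runs JMeter in non-GUI mode, -t names the test plan, and -l records per-request results that you can inspect to spot the load level at which response times degrade.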