In the old RestKit 0.10 there was a guarantee that all requests and responses travelled through RKRequestQueue, with the benefits of "managed request memory", "managed network load" (limiting concurrent requests to 5), "managed request life cycle", and "managed network availability" (including postponing requests until the network is reachable).
With RestKit >= 0.20.0, RKRequestQueue is no longer available.
Are the features provided by the old RKRequestQueue still available in 0.20.0 and up? Is there a limit on concurrent requests? Is there a way to postpone requests until the network is reachable, and if so, what provides it?
This would now be managed by the AFHTTPClient that you can get from the RKObjectManager that you're using. You can get the operationQueue from the client to configure concurrency. You can also use setReachabilityStatusChangeBlock to be notified of network status changes and react to them.
I have an Nginx proxy server. When an HTTP/2 request comes to the server and does not find anything in the cache, the server makes an outbound request to the origin server using HTTP/1.1. Is there a performance degradation on the server when it converts from one version of the protocol to the other? How does this compare to using HTTP/1.1 from the client to Nginx and HTTP/1.1 to the origin server? Is there a way to measure the overhead?
Strictly speaking there is performance degradation, since one protocol is binary and the other is textual. The proxy must convert between them, which takes resources and time, so you can expect some degradation by default.
In general, however, it can be much more complicated. Say your proxy is used over a slow mobile connection: who cares about a bit of conversion overhead if your app gets a huge boost from that conversion? Or maybe your proxy was already applying gzip for HTTP/1.1, so the speed gain is not that big; on the other hand, maybe the performance degradation is not that big either.
Can you measure it? Perhaps. The question is: what for? I would measure something as close to the real case as possible, and I would automate that measurement to see where real performance stands.
My only worry in your case is the CPU of the proxy: just measure it to see changes over time, and set up notifications, like "if CPU is over 80% for longer than 5 minutes".
Other than all of that: HTTP/2 brings two-way communication as well as push. My assumption is that this is not your case, since you are comparing 1.1 and 2. Personally I would go with HTTP/2 everywhere; unfortunately, nginx does not support HTTP/2 on the backend side. Fingers crossed to see that soon!
Yes, going from HTTP/2 to HTTP/1.1 degrades performance, primarily due to protocol-imposed transport conversions. For example, you lose the following transport optimizations:
Single connection
Request/response multiplexing
Header compression
Additionally, as Michal mentioned, HTTP/1.1 messages are textual while HTTP/2 messages are binary.
HTTP/2 multiplexes requests and responses over a single connection. However, HTTP/1.1 only affords persistent connections and request/response pipelining, which is not even comparable. For example, pipelining forces a FIFO order of message exchanges, which causes blocking.
To achieve similar throughput levels, the proxy will have to open a connection pool to each backend. Those pools could be large or small, but consider the resource allocations, TCP handshakes, TLS handshakes, etc. per connection, and you start to get an idea of how much overhead we're talking about.
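To make the multiplexing point concrete, here is a rough Node/TypeScript sketch (the host and paths are placeholders, not anything from your setup) that fires several requests over a single HTTP/2 session; the equivalent concurrency over HTTP/1.1 would need a pool of separate TCP connections:

```typescript
// Several requests multiplexed as streams over ONE HTTP/2 connection.
// Run with Node 18+; host and paths are placeholders.
import { connect } from "node:http2";

const session = connect("https://example.com");
let pending = 3;

for (const path of ["/a", "/b", "/c"]) {
  const req = session.request({ ":path": path });  // a new stream, not a new socket
  req.on("response", (headers) => console.log(path, headers[":status"]));
  req.resume();                                     // discard the body
  req.on("end", () => {
    if (--pending === 0) session.close();           // close the single connection when done
  });
}
```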
Measure the difference between "throughput in" on cache hits and "throughput out" on cache misses, e.g. the "protocol conversion throughput penalty" is ~23 tps. (You should also know your average cache miss penalty in terms of time; one rough way to sample it is sketched below.)
Key metrics
Throughput in versus throughput out
Average cache miss penalty
Cache hit and cache miss ratios
Unless your cache miss ratio is high, I wouldn't worry about this.
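If you do want a rough client-side sample of that miss penalty, something like the following sketch works, assuming the proxy exposes nginx's $upstream_cache_status in an X-Cache response header (a common but optional configuration); the URL is a placeholder:

```typescript
// Rough sampling of the cache-miss penalty from the client side (Node 18+).
// Assumes the proxy adds an "X-Cache" header, e.g. via $upstream_cache_status.
const TARGET = "https://proxy.example.com/some/path";  // placeholder URL
const SAMPLES = 50;

async function timeOne(): Promise<{ ms: number; cache: string }> {
  const start = performance.now();
  const res = await fetch(TARGET);
  await res.arrayBuffer();                            // include the body transfer in the timing
  return { ms: performance.now() - start, cache: res.headers.get("x-cache") ?? "UNKNOWN" };
}

async function main() {
  const hits: number[] = [];
  const misses: number[] = [];
  for (let i = 0; i < SAMPLES; i++) {
    const { ms, cache } = await timeOne();
    (cache.includes("HIT") ? hits : misses).push(ms);
  }
  const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / (xs.length || 1);
  console.log(`hits:   ${hits.length}, avg ${avg(hits).toFixed(1)} ms`);
  console.log(`misses: ${misses.length}, avg ${avg(misses).toFixed(1)} ms (difference ~ miss penalty)`);
}

main();
```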
I don't think there is a performance degradation. One way to measure the impact (since I can't test it for you, you will have to do it yourself) is to use AJAX: send an HTTP/1.1 request and measure the response time, then compare it to the time it takes to send HTTP/2 requests. Do it multiple times.
That'll help you.
But beware, there may be a potential security problem.
That is, there will be no point in even installing an SSL/TLS certificate. That is because the info that the NGINX server sends will then be open to hackers. Probably.
We are executing a test of an upload scenario where we are aware that the response time will be more than 5 minutes. Hence we have configured the timeout in HTTP Request Defaults, as well as in the HTTP Request sampler, as 3600000 milliseconds. But we are still getting a SocketException in the upload transaction. Could you please suggest how to handle this?
Thanks,
SocketException doesn't necessarily mean "timeout"; it indicates that JMeter is not able to create or access a socket connection. There are many possible reasons; the most common are:
The network configuration of your server doesn't allow as many connections as you're trying to open; check the maximum number of open connections at the application server and operating system level.
Your application server is overloaded and cannot handle such a big load. Make sure it has enough headroom to operate in terms of CPU, RAM and especially network metrics (these can be monitored using the JMeter PerfMon Plugin).
You might be experiencing the behaviour described in the JMeterSocketClosed article.
Basically the same as points 1 and 2, but this time you need to check JMeter's own health: make sure you're following JMeter Best Practices and maybe even consider going for distributed testing.
I'm using Laravel 5 and I want to create a notification system for my (web) project. What I want to do is notify the user of new notifications such as:
another user starts following him,
another user writes on his wall,
another user sends him a message, etc,
(possibly by highlighting an icon in the header with a drop-down menu, like the ones on Stack Overflow).
I found the new tutorial on Laracasts, Real-time Laravel with Socket.io, where something similar is achieved using Node, Redis and Socket.io.
If I choose Socket.io and I have 5000 users online, I assume I will have to make 5000 connections and 5000 broadcasts plus the notifications, so it will result in a large number of requests. And I need to start it for every user on login, in the master blade, is that true?
Is this a bad way of doing it? I also think the same thing could be achieved with AJAX requests. Should I avoid making so many continuous AJAX requests?
I want to ask whether Socket.io is a good approach for creating such a system, or whether it is better to use AJAX requests every 5 seconds instead. Or is there a better alternative way of doing it? Pusher could be an alternative; however, I think a free option is a better fit in my case.
A few thoughts:
Websockets and Socket.io are two different things.
Socket.io might use Websockets and it might fall back to AJAX (among different options).
Websockets are more web friendly and resource efficient, but they require work as far as coding and setup are concerned.
Also using SSL with Websockets for production is quite important for many reasons, and some browsers require that the SSL certificate be valid... So there could be a price to pay.
Websockets sometimes fail to connect even when supported by the browser (that's one reason using SSL is recommended)... So writing an AJAX fallback for legacy or connectivity issues means that the Websocket code usually doesn't replace the AJAX code.
5000 users polling every 5 seconds means 1000 new connections and requests per second. Some apps can't handle 1000 requests per second. This shouldn't always be the case, but it is a common enough issue.
The more users you have, the closer your AJAX polling gets to acting like a DoS attack.
On the other hand, Websockets are persistent, with no new connections (new connections are a big resource issue, especially considering TCP/IP's slow start feature; yes, it's a feature, not a bug).
Existing clients shouldn't experience a DoS even when new clients are refused (server design might affect this issue).
A Heroku dyno should be able to handle 5000 Websocket connections and still have room for more, while still answering regular HTTP requests.
On the other hand, I think Heroku imposes an active-requests-per-second and/or backlog limit per dyno (~50 requests each). Meaning that if more than a certain number of requests are waiting for a first response or for your application to accept the connection, new requests will be refused automatically... So you have to make sure you have no more than 100 new requests at a time. For 1000 requests per second, you need your concurrency to allow for 100 simultaneous requests at 10ms per request as a minimal performance state... This might be easy on your local machine, but when network latency kicks in it's quite hard to achieve.
This means it's quite likely that an application which runs on one Heroku dyno when using Websockets would require a number of dynos when using AJAX.
These are just thoughts of things you might consider when choosing your approach, no matter what gem or framework you use to achieve your approach.
Outsourcing parts of your application, such as push notifications, would require other considerations, such as scalability management (what resources are you saving on?) vs. price, etc.
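For reference, here is a rough TypeScript sketch of the Node relay used in the Laracasts-style setup: Laravel publishes a notification payload to a Redis channel, and this process forwards it to the right user's browser over Socket.io. The channel name, payload shape and port are assumptions, not part of any fixed API.

```typescript
// Relays notifications published by Laravel (via Redis) to connected browsers.
// Assumed contract on the Laravel side:
//   Redis::publish('notifications', json_encode(['userId' => 42, 'text' => '...']));
import { createServer } from "node:http";
import { Server } from "socket.io";
import { createClient } from "redis";

const httpServer = createServer();
const io = new Server(httpServer, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  // The browser tells us which user it belongs to; in production this should be
  // a signed token that the server verifies, not a bare id.
  socket.on("subscribe", (userId: string) => socket.join(`user.${userId}`));
});

async function main() {
  const redis = createClient();                     // assumes Redis on localhost:6379
  await redis.connect();

  await redis.subscribe("notifications", (message) => {
    const payload = JSON.parse(message);            // e.g. { userId: 42, text: "..." }
    io.to(`user.${payload.userId}`).emit("notification", payload);
  });

  httpServer.listen(3000);
}

main();
```

The point of this design is that each online user holds one persistent connection instead of issuing a request every few seconds, which is exactly the trade-off discussed above.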
I recently programmed a scraper with Ruby's Mechanize gem for the first time. It had to hit the server (some 'xyz.com/a/number'), where the number is generated by the script, like 'xyz.com/a/2' and 'xyz.com/a/3'.
It turned out that the first request took a lot of time -- around 1.5s on a 512kbps connection. But the next request was done in 0.3ms.
How could it be done so fast? Did it have some caching mechanism?
There are lots of possible sources for a speed change between requests. A few that immediately spring to mind:
DNS lookup cached on your client. The first call must convert "xyz.com" to "123.45.67.89", involving a DNS lookup which may be slow.
HTTP keep-alive. There is an initial conversation between client and server to start an HTTP data transfer. On a high-latency connection you will notice this. If server and client both respect HTTP keep-alive, then a connection can be established once to cover multiple requests.
Server-side caching. The server you are scraping uses caching to speed up multiple similar requests. It might be caching data related to your current session, for example, or it might not have fully compiled the script until your first request.
Server-side VM resource allocation. If the server is sharing space on a virtualised system, and does not serve high traffic, then it may become more responsive after the first request ensures everything is in RAM and has CPU allocated.
This is by no means exhaustive. The above examples are just to illustrate that this behaviour - initial slow response, followed by faster ones - is very common for web services, and has multiple causes.
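If you want to watch this effect yourself, here is a small timing sketch (in Node/TypeScript rather than Ruby, purely for illustration; the URL is a placeholder). With a keep-alive agent, the DNS lookup and TCP/TLS handshake are paid only on the first request, so later requests usually come back much faster:

```typescript
// Time a few requests to the same host over a keep-alive connection (Node 18+).
import { get, Agent } from "node:https";

const agent = new Agent({ keepAlive: true });       // reuse the connection across requests

function timedGet(url: string): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = performance.now();
    get(url, { agent }, (res) => {
      res.resume();                                             // drain the body
      res.on("end", () => resolve(performance.now() - start));  // total time for this request
    }).on("error", reject);
  });
}

async function main() {
  const url = "https://example.com/";               // placeholder target
  for (let i = 1; i <= 5; i++) {
    console.log(`request ${i}: ${(await timedGet(url)).toFixed(0)} ms`);
  }
}

main();
```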
We would like to check every 3 seconds whether there are any updates in our database, using jQuery $.ajax. The technology is clear, but are there any reasons not to fire so many AJAX calls (browser, cache, performance, etc.)? The web application runs for roughly 10 hours per day on every client.
We are using Firefox.
AJAX calls have implications not so much on the client side (browser, etc.) as on the server side. Every AJAX call is a hit on the server: more bandwidth consumption and a higher number of server requests, which in turn increases server load, and so on. AJAX is really meant to increase client friendliness at the cost of server-side implications.
Regards,
Ravi
You should think carefully before implementing infinitely repeating AJAX calls with an arbitrary delay between them. How did you come up with 3 seconds? If you're going to be polling your server in this way, you need to reduce the frequency of requests to as low a number as possible. Here are some things to think about (a small sketch of these ideas follows the list):
Is the data you're fetching really going to change that often?
Can your server handle a request every 3 seconds, how long does the operation take for a single request?
Could you increase the delay after inactivity or guess based on previous server responses how long the next delay should be?
Can you stop the polling completely when the window loses focus, and restart it when it's in the foreground again?
If a user opens the same page in a website 10 times, your server should recognise this and throttle its responses, either using a cookie with a unique value in it (recommended) or based on the client IP address.
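Here is a browser-side sketch of several of those ideas (the /api/updates endpoint is an assumption): poll with a chained setTimeout rather than setInterval, back off while nothing changes, and pause entirely while the tab is hidden.

```typescript
// Adaptive polling: chained setTimeout, exponential backoff, pause when hidden.
const MIN_DELAY_MS = 3_000;
const MAX_DELAY_MS = 60_000;
let delay = MIN_DELAY_MS;
let timer: ReturnType<typeof setTimeout> | undefined;
let lastBody = "";

async function poll(): Promise<void> {
  try {
    const res = await fetch("/api/updates");         // placeholder endpoint
    const body = await res.text();
    if (body !== lastBody) {
      lastBody = body;
      delay = MIN_DELAY_MS;                          // something changed: keep polling quickly
      render(JSON.parse(body));
    } else {
      delay = Math.min(delay * 2, MAX_DELAY_MS);     // nothing changed: back off
    }
  } catch {
    delay = Math.min(delay * 2, MAX_DELAY_MS);       // back off on errors too
  }
  if (!document.hidden) {
    timer = setTimeout(poll, delay);                 // only schedule the next poll while visible
  }
}

document.addEventListener("visibilitychange", () => {
  if (document.hidden) {
    if (timer) clearTimeout(timer);                  // stop polling in background tabs
  } else {
    delay = MIN_DELAY_MS;
    void poll();                                     // resume immediately when focused again
  }
});

function render(data: unknown): void {
  console.log("updates:", data);                     // placeholder for the real UI update
}

void poll();
```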
Above all, instead of polling, consider using HTML 5 web sockets to "push" data to the client - most modern browsers support this. Several frameworks are available that will fall back to polling if web sockets are not available - one excellent .NET example is SignalR.
I've seen a lot of applications making a request every 5 seconds or so, for instance a remote control (web player) or a chat, so that should not be a problem for the browser.
A good practice is to wait for an answer before making a new request, which means not firing the requests with a setInterval, for instance; a sketch of this pattern is shown below.
(In case the user loses their connection, that prevents opening too many connections.)
Also verify that all the calculations associated with an answer are finished before the next answer is received.
And if you have access to the server side, configure your server to set the HTTP header Connection: Keep-Alive, so you won't add too much TCP overhead to each of your requests. That can speed up small requests a lot.
The last point, of course, is verifying that your server is able to answer that many requests.
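A minimal sketch of that pattern (the endpoint is a placeholder): the next request is scheduled only after the previous answer has been fully received and handled, so requests can never pile up behind a slow connection.

```typescript
// The next poll is scheduled only after the previous one has completed
// (successfully or not) and its handling is done; setInterval would instead
// keep firing regardless of how long each round trip takes.
const POLL_DELAY_MS = 3_000;

async function pollOnce(): Promise<void> {
  try {
    const res = await fetch("/api/updates");          // placeholder endpoint
    handleUpdates(await res.json());                  // finish the work for this answer...
  } catch (err) {
    console.warn("poll failed, will retry", err);
  } finally {
    setTimeout(pollOnce, POLL_DELAY_MS);              // ...before the next request is scheduled
  }
}

function handleUpdates(data: unknown): void {
  console.log("updates:", data);                      // placeholder for updating the page
}

void pollOnce();
```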
You are looking for changes every 3 seconds. This way, traffic increases because you are fetching data continuously at short intervals, and it may also continuously increase memory usage on the browser side. Since you need to check for updates in the database, you can go for other alternatives like Sheepjax, Comet or SignalR. (SignalR generally broadcasts the data to all users, and Comet needs a license.) Hope this helps.