I'm working on a mobile client. The dev backend server I'm working with isn't stable at all; it can be unusable for a full working day. The prod server is a bit better, but it sometimes goes down too, and it's much more difficult to use during development. Besides, it feels completely wrong to work like that. Basically these servers were made for the web, not for mobile. There's another strange and annoying thing that distracts me from my primary work: the token lifetime is only 60 seconds. That means if the app doesn't refresh the token within that period, the token dies, and the next time you run the app you have to authorize from scratch, which takes ages. Maybe I just don't understand how it works, but as far as I can see the web site simply hits the server every minute to keep the token alive.
I was thinking about how to fix this and started mocking responses manually, but that is very annoying and time-consuming as well. The other idea is to use some kind of proxy/cache server that forwards requests to the original server and, if that fails, returns cached data. It seems like that could help in my situation. I'm not sure whether such a proxy/cache server would also be able to eliminate the token problem; basically I need to refresh the token as soon as the first one has been received. But who knows, maybe I'll get lucky.
So the question: is there a simple-to-use caching proxy server that I can run locally to achieve what I want?
The other option is to write such a proxy server myself. I have no experience writing servers at all, but as a last resort I could try. The benefit of writing the proxy myself is that I should be able to "fix" the token problem for sure. I just don't want to reinvent the wheel.
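To be concrete, here is roughly what I mean by "fixing" the token problem: a background loop that refreshes the token well before the 60-second lifetime runs out. This is only a sketch; the endpoint paths and JSON field names are invented and would have to match the real backend.

```python
import threading
import time

import requests  # third-party HTTP client

# Hypothetical endpoints and field names -- the real backend will differ.
AUTH_URL = "https://dev-backend.example.com/auth/login"
REFRESH_URL = "https://dev-backend.example.com/auth/refresh"
TOKEN_LIFETIME = 60  # seconds, per the backend's current behaviour


class TokenKeeper:
    """Obtains a token once and keeps it alive from a background thread."""

    def __init__(self, username, password):
        self._lock = threading.Lock()
        resp = requests.post(AUTH_URL, json={"user": username, "password": password})
        resp.raise_for_status()
        self._token = resp.json()["token"]
        threading.Thread(target=self._refresh_loop, daemon=True).start()

    def _refresh_loop(self):
        while True:
            # Refresh at half the lifetime so the token never gets a chance to expire.
            time.sleep(TOKEN_LIFETIME / 2)
            resp = requests.post(
                REFRESH_URL, headers={"Authorization": f"Bearer {self.token}"}
            )
            if resp.ok:
                with self._lock:
                    self._token = resp.json()["token"]

    @property
    def token(self):
        with self._lock:
            return self._token
```

Every request the client (or a local proxy) makes would then read `keeper.token` instead of holding on to a token that dies after a minute.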
So any help and thoughts are appreciated.
Not entirely sure if this will solve your problem, but let's give it a shot.
I myself have been programming against a rate-limited API. During development I often max out the allowed requests and have to wait before I can continue. I developed a small caching proxy server that sits between your client and the server. It intercepts the requests and puts both the request and the response in its cache. Whenever it intercepts a request it has already seen, it responds from the cache without forwarding the request to the target server.
I'm not sure what your requests look like. The proxy I built currently looks up cached responses by URL and HTTP method, so that may or may not be what you need.
Here's the link to the GitHub repository: https://github.com/RobinvandenHurk/cache-proxy
Disclaimer: in case it wasn't clear, I am the author of this proxy.
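If you'd rather roll your own after all, the core idea is small. Below is a rough sketch in Python; it is not the code from the repository above, and it is tilted towards your use case: it forwards GET requests to the origin, caches every successful response by method and path, and falls back to the cached copy only when the origin fails. The upstream URL is a placeholder.

```python
import http.server
import socketserver

import requests  # third-party HTTP client

UPSTREAM = "https://dev-backend.example.com"  # placeholder: point at your real server
CACHE = {}  # (method, path) -> (status, content_type, body)


class CachingProxy(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        key = ("GET", self.path)
        try:
            resp = requests.get(UPSTREAM + self.path, timeout=5)
            entry = (
                resp.status_code,
                resp.headers.get("Content-Type", "application/octet-stream"),
                resp.content,
            )
            CACHE[key] = entry  # remember the last good response
        except requests.RequestException:
            # Origin is down or slow: serve the last good response if we have one.
            entry = CACHE.get(key)
            if entry is None:
                self.send_error(502, "Upstream unavailable and nothing cached")
                return
        status, content_type, body = entry
        self.send_response(status)
        self.send_header("Content-Type", content_type)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("127.0.0.1", 8080), CachingProxy) as httpd:
        httpd.serve_forever()
```

You would then point the app at the proxy (e.g. http://<dev-machine-ip>:8080) instead of the real backend. Extending this to POST, headers and token refresh is more work, which is why trying an existing tool first is reasonable.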
I have to create a little AJAX chat in my web application and I'm dealing with the problem of real-time communication between a JavaScript client and a PHP server.
I want my JS client to pick up new messages from the server as quickly as possible. My first idea was to send an AJAX request every 5 seconds or so to check whether there are new messages.
However, I'm not sure what happens if my application is used by, say, 1000 people; that must put a huge load on Apache httpd.
I also know about the 'long polling' technique, but when I tried it locally it completely took down my Apache (I've read something about problems with Apache and long polling). The next option I know of is WebSocket.
However, is it true that I have to be able to open a port on the web server to use it? On regular web hosting I think that's not possible, and I can't change any Apache/PHP settings on my hosting.
Do you have any suggestions on how to solve this?
If you want to use WebSockets, you'd better have full control over your server, since you may need to start and stop the WebSocket daemon whenever necessary.
I wouldn't recommend using "regular web hosting" because of its restrictions.
I think what you are looking for is a "virtual server provider", which gives you full control over the server you manage. You could look at Amazon Web Services; there are many others you may find.
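To give an idea of what running such a daemon involves, here is a minimal standalone chat relay. It's Python with the third-party `websockets` package rather than PHP, purely as an illustration of the kind of long-lived process, listening on its own port, that shared hosting usually won't let you run.

```python
# pip install websockets  (single-argument handlers need websockets >= 10.1)
import asyncio

import websockets

CONNECTED = set()  # all currently open chat connections


async def chat(ws):
    """Relay every incoming message to all connected clients."""
    CONNECTED.add(ws)
    try:
        async for message in ws:
            # Naive broadcast; a real chat would add auth, rooms, persistence...
            await asyncio.gather(*(peer.send(message) for peer in CONNECTED))
    finally:
        CONNECTED.discard(ws)


async def main():
    # The daemon owns its own port (8765 here), which is exactly what
    # restricted shared hosting tends to forbid.
    async with websockets.serve(chat, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever


if __name__ == "__main__":
    asyncio.run(main())
```

On a virtual server you control, you start and stop this process yourself (e.g. under systemd or supervisord), which is the flexibility I mean by needing full control.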
Some (rogue) ISPs may implement caching on their mobile networks in order to reduce traffic on their connections. Some don't even tell their users.
Is there any standard way to defeat such caching mechanisms and be sure of getting fresh data when issuing a request to a web server?
Thanks in advance.
POST requests usually travel unaltered and are not cached, but there's a drawback: when you need to investigate server logs you cannot see query string params in the log. Another popular cache-busting technique is to append a random query string param to each request, like ?ts=${timestamp}, which forces proxy servers to fetch content from the origin servers.
In my opinion the best solution to these problems is to use SSL whenever possible. This makes it impossible for ISPs to tamper with requests, and it is safe to assume that the communication happens directly between client and server (and it is possible to detect when someone tries to hijack the encrypted connection).
Credit to Filip Wasilewski for bringing this to my attention.
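A minimal sketch of the ?ts=${timestamp} cache-busting trick described above, using Python's requests library (the parameter name and URL are arbitrary):

```python
import time

import requests  # third-party HTTP client


def fetch_fresh(url, **params):
    """GET a URL while doing what we can to bypass intermediary caches."""
    # A unique query param per request defeats caches that key on the full URL.
    params["ts"] = int(time.time() * 1000)
    headers = {
        "Cache-Control": "no-cache",  # ask well-behaved caches to revalidate
        "Pragma": "no-cache",         # HTTP/1.0 equivalent
    }
    return requests.get(url, params=params, headers=headers, timeout=10)


# Two calls produce two different URLs (?ts=...), so a URL-keyed cache
# cannot hand back a stale copy of the first response for the second request.
resp = fetch_fresh("http://api.example.com/data")  # hypothetical endpoint
print(resp.status_code)
```

The obvious trade-off is that nothing gets cached at all, including responses you might have wanted cached.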
I am managing a shop that forces HTTPS on the register/login/account/checkout pages, but that's it, and I've been trying to convince people to force HTTPS on everything.
I know that it's recommended to use HTTPS everywhere, but I'm not sure why.
Are there any good reasons to keep part of the site on HTTP?
One good reason is that page performance has a massive impact on sales (there are lots of published studies) and SSL has a big impact on performance, particularly if it's not tuned right.
But running a mixed SSL and non-SSL site is full of pitfalls for the unwary...
Exactly which pages you put behind SSL has a big impact on security too. Suppose you send a login form over HTTP with a POST target that is HTTPS: a trivial analysis would suggest this is secure, but in fact a MITM could modify the login page to send the POST elsewhere, or inject some AJAX to fork the request to a different location.
Further, with mixed HTTP and HTTPS you've got the problem of transferring sessions securely: the user fills their session-linked shopping basket outside the SSL site, then pays for it inside the SSL site. How do you prevent session fixation problems in the transition?
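The usual mitigation is to mint a brand-new session identifier at the HTTP-to-HTTPS boundary and copy the basket across, rather than continuing to trust an ID that has already travelled in cleartext. A framework-neutral sketch (the store and function names are mine, not from any particular framework):

```python
import secrets

# Toy server-side session store: session id -> session data.
SESSIONS = {}


def rotate_session(old_session_id):
    """Mint a fresh session id at the HTTP -> HTTPS boundary and move the data
    (e.g. the shopping basket) across, invalidating the old, cleartext-exposed id."""
    data = SESSIONS.pop(old_session_id, {})
    new_session_id = secrets.token_urlsafe(32)
    SESSIONS[new_session_id] = data
    # The new id must only ever be set in a cookie flagged Secure (and HttpOnly),
    # so it is never sent over the plain-HTTP half of the site.
    return new_session_id
```

Getting details like this right everywhere is exactly the kind of work a mixed site forces on you.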
Hence I'd only suggest running a mixed site if you've got real expertise in HTTP, and since you're asking this question here, that rather implies you don't.
A compromise solution is to use SPDY. SPDY requires SSL but makes most sites (especially ones that have not been heavily performance-optimized) much faster. Currently it's not supported by MSIE and (last time I checked) is not enabled by default in Firefox, but it's likely to form a large part of HTTP/2.0.
Using (good) CDNs over HTTPS also mitigates much of the performance impact of SSL.
There's really no need to use HTTPS on the whole website. Using HTTPS causes the server to consume more resources, as it has to do extra work to encrypt and decrypt the connection, not to mention the extra handshake steps for negotiating algorithms and so on.
If you have a heavy-traffic website, the performance hit can be quite big.
It will also mean slower response times than plain HTTP.
You should only really use HTTPS on the parts of the site that actually need to be secure, such as whenever the user sends important information to your site, completes forms, logs in, or uses private parts of the site.
One other issue arises if you use resources from non-secure URLs, e.g. images/scripts hosted elsewhere. If they are not available over HTTPS then your visitors will get a warning about an insecure connection.
You also need to realise that HTTPS data/pages will hardly ever be cached, which adds a further performance penalty.
I'm working on the design of a web app which will be using AJAX to communicate with a server on an embedded device. But for one feature, the client will need to get very frequent updates (>10 per second), as close to real time as possible, for an extended period of time. Meanwhile typical AJAX requests will need to be handled from time to time.
Some considerations unique to this project:
This data will be very small, probably no more than a single numeric value.
There will only be 1 client connected to the server at a time, so scaling is not an issue.
The client and server will reside on the same local network, so the connection will be fast and reliable.
The app will be designed for Android devices, so we can take advantage of any platform-specific browser features.
The backend will most likely be implemented in Python using WSGI on Apache or lighttpd, but that is still open for discussion.
I'm looking into Comet techniques, including XHR long polling and the hidden iframe, but I'm pretty new to web development and I don't know what kind of performance we can expect. The server shouldn't have any problem preparing the data; it's just a matter of pushing it out to the client as quickly as possible. Is 10 updates per second an unreasonable expectation for any of the Comet techniques, or even regular AJAX polling? Or is there another method you would suggest?
I realize this is ultimately going to take some prototyping, but if someone can give me a ball-park estimate or better yet specific technologies (client and server side) that would provide the best performance in this case, that would be a great help.
You may want to consider WebSockets. That way you wouldn't have to poll, you would receive data directly from your server. I'm not sure what server implementations are available at this point since it's still a pretty new technology, but I found a blog post about a library for WebSockets on Android:
http://anismiles.wordpress.com/2011/02/03/websocket-support-in-android%E2%80%99s-phonegap-apps/
For a Python back end, you might want to look into Twisted. I would also recommend the WebSocket approach, but failing that, and since you seem to be focused on a browser client, I would default to HTTP streaming rather than polling or long polling. This jQuery plugin implements an HTTP streaming Ajax client and claims specifically to support Twisted.
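To make that concrete, here's a minimal Twisted sketch of HTTP streaming: a single long-lived response into which the server writes a value roughly ten times per second. The port and the constant value are placeholders for whatever the embedded device actually produces.

```python
from twisted.internet import reactor, task
from twisted.web import resource, server


class StreamingValue(resource.Resource):
    isLeaf = True

    def render_GET(self, request):
        request.setHeader(b"content-type", b"text/plain")
        # Push a new value every 100 ms (~10 updates per second).
        loop = task.LoopingCall(self._push, request)
        loop.start(0.1)
        # Stop pushing when the client disconnects.
        request.notifyFinish().addBoth(lambda _: loop.stop())
        return server.NOT_DONE_YET  # keep the response open

    def _push(self, request):
        # Placeholder for the real sensor reading.
        request.write(b"42\n")


reactor.listenTCP(8000, server.Site(StreamingValue()))
reactor.run()
```

On the browser side you read the growing response incrementally (e.g. via the streaming plugin mentioned above), so each value arrives as soon as it is written rather than on the next poll.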
I am not sure if this would be helpful at all, but you may want to try Comet-style AJAX:
http://ajaxian.com/archives/comet-a-new-approach-to-ajax-applications
I've seen some clients complain about the slowness of my website lately, and I'm pretty sure the problem is related to their network. I'd like to be able to justify this to myself more thoroughly, and also to proactively reach out to clients who appear to be having network issues before they come banging on my door.
If I were running ASP.NET I would try to use the Response.AppendToLog method and append a token so that I could tie everything back to my custom application-level logging (user, client, processing time, etc.). I can't seem to find a way to do that without ASP.NET; I'm guessing it's built into ASP's ISAPI. My requests go through IIS to JRun's ISAPI to ColdFusion (.cfm/.cfc files).
I'm most interested in knowing how long it took the client to receive the content, not just the time it took to process the request.
If there are other places or pieces of information worth looking at that I'm not considering, please let me know. Perhaps I should log information from HTTP.sys somehow?
I know that I could set a cookie on every request and have it logged by IIS; I was just hoping there would be a better solution.
Thanks for your thoughts!
See Jiffy. It "is an end-to-end real-world web page instrumentation and measurement suite."
The introductory video gives a good overview.