Is it better to use CORS or nginx proxy_pass for a RESTful client-server app?

I have a client-server app where the server is a Ruby on Rails app that renders JSON and understands RESTful requests. It's served by nginx+Passenger and its address is api.whatever.com.
The client is an AngularJS application that consumes these services. It is served by a second nginx server and its address is whatever.com.
I can either use CORS for cross-subdomain AJAX calls or configure the client's nginx to proxy_pass requests to the Rails application.
Which one is better in terms of performance, and which causes less trouble for developers and server admins?

Unless you're Facebook, you are not going to notice any performance hit from having an extra reverse proxy. The overhead is tiny: it's basically parsing a bunch of bytes and then sending them over a local socket to another process. A reverse proxy in nginx is easy enough to set up, so it's unlikely to be an administrative burden.
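For reference, here is a minimal sketch of that reverse proxy on the client's nginx; the /api/ prefix and the document root path are assumptions, while the host names come from the question:

# Hypothetical nginx config for whatever.com, proxying API calls to the Rails app.
server {
    listen 80;
    server_name whatever.com;

    # Serve the AngularJS app's static files (path is hypothetical).
    root /var/www/whatever.com/public;

    # Forward API calls to the Rails backend so the browser never leaves whatever.com.
    location /api/ {
        proxy_pass http://api.whatever.com/;
        proxy_set_header Host api.whatever.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}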
You should worry more about browser support. CORS is supported by almost every browser, except of course for older versions of Internet Explorer and some mobile browsers.
Juvia uses CORS but falls back to JSONP. No reverse proxy setup.
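For comparison, a minimal sketch of the CORS route, adding the headers on the API's nginx server; the exact methods and headers you allow are assumptions and depend on what the Angular client actually sends:

# Hypothetical CORS setup on api.whatever.com (adjust methods/headers to your API).
server {
    listen 80;
    server_name api.whatever.com;

    location / {
        # Allow the client origin to make cross-subdomain requests.
        add_header Access-Control-Allow-Origin "http://whatever.com";
        add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS";
        add_header Access-Control-Allow-Headers "Content-Type, Authorization";

        # Answer CORS preflight requests without hitting the Rails app.
        if ($request_method = OPTIONS) {
            return 204;
        }

        passenger_enabled on;
    }
}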


Getting around HTTPS mixed content issues?

I have an HTTPS site that needs data from an API that is only available over HTTP.
To get around the mixed content warning, I changed it so the JS requests a path on my server, which then makes the HTTP request and returns the data.
Is this bad? If it is bad, why?
My understanding of what you're doing:
You are providing an HTTPS URL on your server which essentially acts as a proxy, making a backend connection between your server and the API provider over HTTP.
If my understanding of what you're doing is correct, then what you're doing is better than just serving everything over HTTP...
You are providing security between the client and your server. Most security threats that would take advantage of a plain HTTP connection are in the local environment of the client, such as a shared local network: dodgy Wi-Fi in a café, school LANs, etc.
The connection between your server and the API provider is unencrypted, but apparently that is all they offer anyway. This is really the best you can do unless your API provider starts providing an HTTPS interface.
It's more secure than doing nothing and should eliminate the browser errors.
If there is a real need for strong security (PCI compliance, HIPAA, etc.), however, you should stop using that API. That seems unlikely given the circumstantial evidence in your question.
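A sketch of what that server-side proxy path might look like if the HTTPS site is fronted by nginx; the /external-api/ path, host names, and certificate paths are placeholders:

# Hypothetical: the HTTPS site proxies /external-api/ to the HTTP-only provider.
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;

    # The browser talks HTTPS to us; we talk plain HTTP to the provider.
    location /external-api/ {
        proxy_pass http://api.provider.example/;
        proxy_set_header Host api.provider.example;
    }
}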

How can I detect, server side, if the client supports SPDY?

I want my website to be as fast as possible. Here's my thinking (note: my website does not need to transmit sensitive data): if a browser connects to my website with HTTPS but doesn't support SPDY, it'll be a waste, just unnecessary overhead due to HTTPS, right? On the other hand, if the browser connects via HTTP and does support SPDY, that will be a missed opportunity.
It looks like NPN is the technology that the client and server use to negotiate whether to use SPDY. That happens in the web server, before it ever hits my application code, right? I suppose what I'd really need, then, is a modified version of NPN (not even sure if that's really its own thing outside of SPDY) or mod_spdy. Ideally such a version would have an option called use_spdy_if_available_otherwise_redirect_to_http. :-)
Oh, and if all this isn't complicated enough, I'm currently using Cloudflare's CDN service. I'm pretty sure I have no recourse to modify how they operate in this regard, and thus have no chance, right?
All data is sensitive: the sites you visit, the pages you've viewed, etc. By aggregating this data across many pages, an observer can infer a lot about the user: their intent, interests, and so on. Hence, we need HTTPS everywhere. For more, see our Google I/O talk [1] on the subject.
In terms of detecting SPDY support, yes, you want to use NPN/ALPN (ALPN is its successor [2]). The client sends a ProtocolNameList in its handshake, which advertises which protocols it supports. Most servers will use this to auto-negotiate SPDY, but if you want to control this decision yourself, you'd have to modify your server implementation to invoke some sort of callback when the secure handshake is being performed.
That said, given what I said earlier about HTTPS everywhere, I would advise against this altogether. Use HTTPS everywhere and let the browser and server auto-negotiate SPDY if it's supported.
[1] https://www.youtube.com/watch?v=cBhZ6S0PFCY
[2] http://chimera.labs.oreilly.com/books/1230000000545/ch04.html#ALPN
I agree with igrigorik's advice: do not redirect users from HTTPS to HTTP. That's just not cool. Regardless, I had this detection problem today and my answer's below.
In nginx (I'm running 1.7.7), the $spdy variable will be set if the client connects over SPDY; otherwise, $spdy will not have a value. For example, here I'm passing a custom URL parameter to a PHP script:
server {
    listen 443 ssl spdy;
    ...
    ...

    # add SPDY rewrite param
    if ($spdy) {
        rewrite ^/detect-spdy.js /detect-spdy.js.php?spdy=$spdy last;
    }

    # fallback to non-SPDY rewrite
    rewrite ^/detect-spdy.js /detect-spdy.js.php last;

    # add response header if needed later
    add_header x-spdy $spdy;
}

Using Varnish with SaaS HTTPS backend servers?

I want to configure Varnish to use HTTPS(!) services as backend services. The key here is the SSL part of the connection to the backend service! I have limited control over those HTTPS backend services (think of them as SaaS services hosted in the cloud).
It's a setup like this: User-Agent -> AWS ELB as SSL terminator -> Varnish in AWS -> HTTPS SaaS services in the cloud
The reasons for that are as follows:
- I want to use Varnish ESI to decorate the SaaS service UI with my own custom page header & footer.
- By having all requests go through Varnish, I get additional analytics data about the SaaS service which I wouldn't get otherwise.
- I can use Varnish to rewrite the URLs of the SaaS service, effectively hiding the SaaS service URL from end-users.
I am able to use AWS ELB as SSL terminator towards the user-agent, but how do I get Varnish to access the HTTPS SaaS service as an origin server?
Background:
I work on a web portal where we will present a number of different services (all services have their own existing UI, i.e. they don't have headless REST APIs!) to our customers. The main thing that pulls all those services together is a common page header and footer (the page header shows top-level navigation and login/username/logout).
The types of services we have are as follows, all have their own UI layer which we don't want to replicate:
- White-labeled 3rd party SaaS service (think of e.g. Zendesk or Salesforce), hosted in the cloud
- In-house developed JavaEE/Spring services which are hosted in AWS
- Services that other teams in our company developed, but they are hosted in our own data center
Adding ESI includes is fine for each of those services, but I don't want to duplicate the work of re-implementing the page header/footer for each service.
I ran into a similar requirement recently where the desired back-end needed to be accessed over HTTPS.
There are, of course, a lot of objections that could be raised as to why this is the wrong way to do it... but in this case, I was constrained by the need for the data to be encrypted to the back-end, a significant geographical distance, and the fact that a VPN was not possible because of the ownership and control of the various systems.
Here is a workaround using stunnel4 that, from my limited testing, seems to have the potential to solve this problem.
Sample lines from the configuration:
client = yes
[mysslconnect]
accept = 8888
connect = dest.in.ation.host.or.ip:443
Now, stunnel4 is listening on port 8888 on my local (Varnish) machine, and each time an incoming connection arrives, it sets up an SSL connection to port 443 on the remote system.
A connection to 127.0.0.1 port 8888 on the local server lets me speak cleartext HTTP to the destination back-end server, carried over an SSL connection that is actually managed by stunnel4. Configuring Varnish to use 127.0.0.1:8888 as the back-end therefore does what I intend, because Varnish thinks it's speaking to an ordinary HTTP server and is unaware of what stunnel4 is doing on its behalf.
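In VCL terms, that boils down to something like this minimal sketch (8888 matches the stunnel accept port above):

# Point Varnish at the local stunnel4 listener instead of the remote HTTPS host.
backend default {
    .host = "127.0.0.1";
    .port = "8888";
}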
I can't vouch for the scalability or reliability of this, since I've only just gotten it working, but so far it does seem to be a viable workaround to this limitation in Varnish.
Accessing HTTPS backends from Varnish isn't supported; Varnish only speaks plain HTTP to its backends.
If you want to access HTTPS backend content, you'll have to proxy it through another daemon/proxy that adds/strips the encryption. There are quite a few choices for this, one of which is stunnel, which is tried and tested.
From what you are describing (rewriting content), I'd say that you are pretty close to using the wrong hammer. Varnish might not be the best tool for this; have you considered gluing things together with mod_rewrite/mod_substitute instead?
This is supported by Varnish Cache Plus, which isn't free:
backend default {
    .host = "backend.example.com";
    .port = "443";
    .ssl = 1;                 # Turn on SSL support
    .ssl_sni = 1;             # Use SNI extension (default: 1)
    .ssl_verify_peer = 1;     # Verify the peer's certificate chain (default: 1)
    .ssl_verify_host = 1;     # Verify the host name in the peer's certificate (default: 0)
}

AWS ELB Server-Side HTTPS

I have been working my way through setting up HTTPS using AWS. I have been attempting this with a self-signed certificate and am finding the process a bit problematic.
One question that has come up along the way is this business of server-side HTTPS. The client that I am working with requests that when a user hits the server, the URL change to HTTPS. I am wondering: does "server-side HTTPS" mean that the protocol is transparent to the end user?
Will they still see HTTP in the browser?
Thanks.
I don't know if this is an exact answer to your question; consider it a piece of advice instead. When using ELB, I have found it MUCH easier to install the SSL cert on the ELB and use SSL offload to forward requests from port 443 on the ELB to port 80 on the EC2 instances.
The pros of this:
There is only one place where you need to install the cert, rather than having to install it across a number of instances (or update the AMI and relaunch instances), making cert updates much easier to perform.
You get better performance on your web servers as they don't have to deal with SSL encryption.
Some cons:
The communication is not encrypted end-to-end so there is the technical (albeit unlikely) chance that the communication could be intercepted between ELB and servers. If you are dealing with something like PCI compliance this might matter to you.
If you needed to directly access one of the instances over HTTPS, that would not be possible.
You may need to make sure your application is aware of the HTTPS-related headers (e.g. X-Forwarded-Proto) that the ELB injects into the request, if your application needs to check whether the request came in over HTTPS.
There is no reason that this configuration would prevent you from redirecting incoming HTTP requests to HTTPS. You might, however, need to look at the X-Forwarded-Proto header to do any web-server- or application-level redirects to HTTPS. The end user would have no way of knowing that the HTTPS for their request was being offloaded at the ELB.
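For example, if the instances happen to run nginx, a web-server-level redirect based on that header might look like this sketch (server name and paths are placeholders):

# Hypothetical: ELB terminates SSL, instances listen on port 80.
server {
    listen 80;
    server_name example.com;

    # The ELB sets X-Forwarded-Proto to the scheme the client originally used.
    if ($http_x_forwarded_proto != "https") {
        return 301 https://$host$request_uri;
    }

    # ... normal application configuration ...
}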

IIS cache with PURGE support

On Unix, I normally deploy nginx in front of Varnish in front of my application server. Both nginx and Varnish are acting as reverse proxies here. Varnish maintains a cache and supports things like If-Modified-Since, Cache-Control response headers and PURGE requests from the application. nginx is good at receiving a lot of connections. I also use it to serve some static content, enable gzip compression etc.
On Windows, I can manage with Squid in front of IIS. I'm planning to deploy my (Python) application as an ISAPI wildcard filter (using the isapi-wsgi package), so the application will live in a thread pool managed by IIS.
However, Squid development on Windows appears to have stalled, and I'd prefer to keep IIS on port 80, so that I can serve certain things directly from disk. I also suspect IIS is more resilient in handling lots of connections than Squid on Windows.
What do people normally use here? One option would be to use another free-standing caching proxy in front of IIS. Another option might be something installed as an ISAPI filter, which would intercept requests and respond to things like If-Modified-Since, requests for images and other cached resources, and PURGE requests from the application.
Does such a thing exist? Or are the only real choices Squid and MS ISA (which is too expensive)?
Cheers,
Martin
IIS7 with Application Request Routing (see http://www.iis.net/download/ApplicationRequestRouting) supports full proxy caching, either on the same box or with the cache server in front of your middle tier.
Once ARR is installed, to enable proxy caching from the command line run the following:
%windir%\System32\inetsrv\appcmd.exe set config -section:system.webServer/diskCache /+"[path='C:\MyCacheFolder',maxUsage='0']" /commit:apphost
To vary caching based on query string, execute the following:
%windir%\System32\inetsrv\appcmd.exe set config -section:system.webServer/proxy /cache.queryStringHandling:"Accept" /commit:apphost
See the documentation link above for more details. Note that static and dynamic content can have different caching strategies, etc. If you pursue this, follow up with specific questions; it can be a bit of a trick lining everything up if you're looking for fine-grained control.

Resources