Can I use CloudFlare's Proxy with an AWS Application Load Balancer?

I have created a listener on the LB (load balancer) with rules that route requests for different subdomains. I have set a CNAME for each subdomain in Cloudflare.
The problem is with the Proxy feature: when I turn it off, my page works without a problem, but when I turn it on, requests time out.
Is there a way to use the Proxy feature with an LB?

The very simple answer is yes, this is entirely possible.
The longer answer is that a timeout indicates some change Cloudflare makes to the request is causing your AWS servers to time out. A couple of examples:
A firewall or security group rule that blocks Cloudflare's IP ranges
Servers not respecting the X-Forwarded-For header, so that all requests appear to come from a small group of IPs and interfere with application logic (see the sketch below)
It would help to know whether the requests are reaching your load balancer and, furthermore, whether the servers behind the LB are receiving them.
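If the servers are receiving traffic, one quick thing to check is whether the application restores the real client IP from the header. Below is a minimal sketch assuming a Flask app; the framework and route are illustrative, not taken from the question:

```python
# Minimal sketch, assuming a Flask app behind Cloudflare + an ALB.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def whoami():
    # Behind a proxy chain, REMOTE_ADDR holds a Cloudflare/ALB address;
    # the original client is the first entry in X-Forwarded-For.
    # Only trust this header when requests really pass through your proxies.
    forwarded = request.headers.get("X-Forwarded-For", "")
    client_ip = forwarded.split(",")[0].strip() or request.remote_addr
    return f"client ip: {client_ip}"
```

If the logged IPs are all Cloudflare addresses, the application (or any rate limiting in front of it) is keying on the wrong value.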

Related

Load testing a web app which has a load balancer

I wrote a JMeter test (that uses different user credentials) to load test a web app which has a load balancer, but the load balancer forwards all the requests to a single node. How can I solve this?
I used the DNS Cache manager but that did not work.
Are there any other tools which I could use? (I looked into AWS Load testing but that too won't work because all the containers would get the same set of user credentials and when parallel tests are run they would fail.)
It depends on the load balancing mechanism used by your load balancer; it might be the case that it's looking at the source IP address and forwarding requests from the same IP to the same backend node. You can try using multiple IP addresses (or aliases) and see whether that makes a difference. See the IP Spoofing With JMeter: How to Simulate Requests from Different IP Addresses article for more details.
Also, adding the DNS Cache Manager might not be sufficient; you can try configuring a custom DNS resolver, e.g. 1.1.1.1, as the DNS server so that each thread resolves the underlying IP address on its own (a quick way to check the resolution is sketched below).
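As a sanity check outside JMeter, a small Python sketch like the following can confirm the LB hostname actually resolves to multiple addresses and that each one answers; the hostname is a placeholder:

```python
# Sketch: resolve a load balancer hostname and spread requests across
# all of its A records, mimicking what per-thread DNS resolution does.
import itertools
import socket
import urllib.request

host = "lb.example.com"  # placeholder for your LB's DNS name
infos = socket.getaddrinfo(host, 80, family=socket.AF_INET,
                           proto=socket.IPPROTO_TCP)
ips = sorted({info[4][0] for info in infos})
print("LB resolves to:", ips)

# Round-robin plain HTTP requests directly at each IP, keeping the
# Host header so virtual hosting / listener rules still match.
for ip in itertools.islice(itertools.cycle(ips), 10):
    req = urllib.request.Request(f"http://{ip}/", headers={"Host": host})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(ip, resp.status)
```

If the hostname only ever resolves to one address, or all addresses map to the same backend, the imbalance is on the LB side rather than in JMeter.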

Load balancing web sockets - AWS Elastic Load Balancer

I have a question about how to load balance web sockets with the AWS Elastic Load Balancer.
I have 2 EC2 instances behind an AWS Elastic Load Balancer.
When a user logs in, the user session will be established with one of the servers, say EC2 instance1. From then on, all requests from that user will be routed to EC2 instance1.
Now, I have a different, stateless request coming from a different system. This request will have a userId in it and might end up going to EC2 instance2. We are supposed to send a notification to the user based on the userId in the request.
Now,
1) Assume the user session is with EC2 instance1, but the notification originates from EC2 instance2.
I am not sure how to notify the user's browser in this case.
2) Is there any limit on websocket connections, like 64K, and how do I overcome it with multiple servers, since users come through the load balancer?
Thanks
You will need something else to notify the server end of the browser's websocket about the event coming from the other system. There are a couple of publish-subscribe based solutions which might help, but without knowing more details it is a bit hard to figure out which one fits best. Redis is generally a good answer, and ElastiCache supports it.
I found this regarding AWS ELB's limits:
http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_elastic_load_balancer
But none of them seems related to your question.
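To make the pub/sub idea concrete, here is a rough Python sketch using redis-py; the endpoint, channel naming, and the websocket_send callback are assumptions for illustration, not part of the original answer:

```python
# Sketch: cross-instance notification via Redis pub/sub (redis-py).
# instance2 publishes; whichever instance holds the user's websocket
# subscribes and forwards the event over that socket.
import json
import redis

r = redis.Redis(host="my-elasticache-endpoint", port=6379)  # placeholder host

def notify_user(user_id, payload):
    # Called on the instance where the stateless request arrives.
    r.publish(f"user:{user_id}", json.dumps(payload))

def listen_for_user(user_id, websocket_send):
    # Run on the instance holding the user's websocket session;
    # websocket_send is whatever your websocket framework provides.
    pubsub = r.pubsub()
    pubsub.subscribe(f"user:{user_id}")
    for message in pubsub.listen():
        if message["type"] == "message":
            websocket_send(message["data"].decode())
```

The key point is that neither instance needs to know which one holds the socket; Redis fans the message out to whoever subscribed.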
Websocket connections start as HTTP before being upgraded to the websocket protocol. In theory, if you could include a cookie in that initial HTTP request, then the sticky session feature of the ELB would allow you to direct websockets to specific EC2 instances. However, your websocket client may not support this.
A preferred solution would be to make your EC2 instances stateless. Store the websocket session data in AWS ElastiCache (either Redis or Memcached) and then incoming connections will be able to access the session regardless of which EC2 instance serves them (see the sketch after this answer).
The advantage of this solution is that you remove the dependency on individual EC2 instances and your application will scale and handle failures better.
If the ELB has too many incoming connections, it should scale automatically, although I can't find a reference for that. ELBs are relatively slow to scale (minutes rather than seconds); if you are expecting surges in traffic, AWS can "pre-warm" additional ELB capacity for you. This is done via a support request.
Also, factor in the ELB idle connection timeout. By default this is 60 seconds; it can be increased via the AWS console or API. Your application needs to send at least 1 byte of traffic before the timeout expires or the ELB will drop the connection.
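As a minimal illustration of the stateless approach described above, here is a sketch that keeps websocket session data in Redis so any instance can serve a reconnect; the endpoint, key layout, and TTL are assumptions:

```python
# Sketch: keep websocket session state in ElastiCache (Redis) instead of
# instance memory, so any EC2 instance can pick up a connection.
import json
import redis

r = redis.Redis(host="my-elasticache-endpoint", port=6379)  # placeholder host

SESSION_TTL = 3600  # assumed: expire stale sessions after an hour

def save_session(user_id, data):
    r.setex(f"session:{user_id}", SESSION_TTL, json.dumps(data))

def load_session(user_id):
    raw = r.get(f"session:{user_id}")
    return json.loads(raw) if raw else None
```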
I recently had to hook up crossbar.io websockets with an ALB. Basically there are two things to consider. 1) You need to set stickiness to 1 day on the target group attributes (a sketch of this follows below). 2) You either need something on the same port that returns a static webpage if the connection is not upgraded, or a separate port serving a static webpage, with a custom health check specifying that port on the target group. Go for an ALB over an ELB: ALBs have support for ws:// and wss://; they only lack health checks over websockets.
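For reference, the stickiness setting from point 1 can also be applied with boto3 rather than the console; the target group ARN below is a placeholder:

```python
# Sketch: enable one-day lb_cookie stickiness on an ALB target group.
import boto3

elbv2 = boto3.client("elbv2")
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/abc123",
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},  # 1 day
    ],
)
```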

AWS ELB Server-Side HTTPS

I have been working my way through setting up HTTPS using AWS. I have been attempting this with a self-signed certificate and am finding the process a bit problematic.
One question that has come up along the way is this business of server-side HTTPS. The client that I am working with requests that when a user hits the server the URL change to HTTPS. I am wondering if "Server-Side HTTPS" means that the protocol is transparent to the end-user?
Will they still see HTTP in the browser?
Thanks.
I don't know if this is an exact answer to your question, but perhaps it's a useful piece of advice: when using an ELB, I have found it MUCH easier to install the SSL cert on the ELB and use SSL offloading to forward requests from port 443 on the ELB to port 80 on the EC2 instances.
The pros of this:
There is only one place where you need to install the cert, rather than having to install it across a number of instances (or update the AMI and relaunch instances), making cert updates much easier to perform.
You get better performance on your web servers as they don't have to deal with SSL encryption.
Some cons:
The communication is not encrypted end-to-end so there is the technical (albeit unlikely) chance that the communication could be intercepted between ELB and servers. If you are dealing with something like PCI compliance this might matter to you.
If you needed to directly access one of the instances over HTTPS that would not be possible.
You may need to make sure your application is aware of the HTTPS-related headers (e.g. X-Forwarded-Proto) that the ELB injects into the request, if your application needs to check whether the request came in over HTTPS.
There is no reason this configuration would prevent you from redirecting incoming HTTP requests to HTTPS. You might, however, need to look at the X-Forwarded-Proto header to do any web-server or application-level redirects to HTTPS (a sketch follows below). The end user would have no way of knowing that the HTTPS wrapper around their request was being offloaded at the ELB.
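A minimal sketch of such an application-level redirect, assuming a Flask app behind the ELB (the framework choice is illustrative, not from the answer):

```python
# Sketch: redirect plain-HTTP requests to HTTPS behind an ELB doing
# SSL offload, by checking the X-Forwarded-Proto header the ELB adds.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # The ELB terminates TLS, so the request always arrives here as plain
    # HTTP; X-Forwarded-Proto tells us what the client actually used.
    if request.headers.get("X-Forwarded-Proto", "http") == "http":
        return redirect(request.url.replace("http://", "https://", 1), code=301)
```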

Block IPs on Heroku

I haven't found this anywhere in Heroku's documentation or on Google. Typically this is done in the hosts file. Does anyone have any idea how to block an IP on Heroku?
Heroku doesn't have a firewall that you can use to block IPs, so you would have to either block it at the application level, or you would have to put a proxy of some sort in front of your application that you can use to block the IP. One common one people use is CloudFlare, which has support for blocking individual IPs, rate limiting IPs, etc.
If you want to stick with blocking at the application level and are using Ruby on Rails, this question and answer might give you what you're looking for: How can you block or filter IP addresses on Heroku?
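The linked answer is Rails-specific; as a generic illustration of the same idea, here is a Python WSGI middleware sketch (the blocklist entry is a placeholder):

```python
# Sketch: block requests from specific IPs at the application level.
# On Heroku the client IP arrives via X-Forwarded-For, not REMOTE_ADDR.
BLOCKED_IPS = {"203.0.113.7"}  # placeholder blocklist

class IPBlockMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Heroku's router appends the client IP it saw to X-Forwarded-For,
        # so the rightmost entry is the most trustworthy one.
        forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
        client_ip = forwarded.split(",")[-1].strip()
        if client_ip in BLOCKED_IPS:
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return self.app(environ, start_response)
```

Wrap your WSGI application with `IPBlockMiddleware(app)` and blocked addresses get a 403 before reaching any application code.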
Since this was first answered, Heroku has added a Web Application Firewall in their AddOn Marketplace: https://elements.heroku.com/addons/expeditedwaf
It can block inbound requests by IP, UserAgent, Referrer, Country, etc.

Load balancing with nginx

I want to stop serving requests to my backend servers if the load on those servers goes above a certain level. Anyone who is already surfing the site will still get routed, but new connections will be sent to a static "server busy" page until the load drops below a predetermined level.
I can use cookies to let the current customers in, but I can't find information on how to do routing based on a custom load metric.
Can anyone point me in the right direction?
Nginx has an HTTP Upstream module for load balancing. Checking the responsiveness of the backend servers is done with the max_fails and fail_timeout options. Routing to an alternate page when no backends are available is done with the backup option. I recommend translating your load metrics into the options that Nginx supplies.
Let's say, though, that Nginx still sees the backend as "up" when the load is higher than you want. You may be able to adjust that further by tuning the max connections of the backend servers. So, if the backend servers can only handle 5 connections before the load is too high, you tune them to only allow 5 connections. Then, on the front end, Nginx will time out immediately when trying to send a sixth connection and mark that server as inoperative.
Another option is to handle this outside of Nginx. Software like Nagios can not only monitor load but also proactively trigger actions based on the monitoring it does.
You can generate your Nginx configs from a template that has options to mark each upstream node as up or down. When a monitor detects that the upstream load is too high, it can re-generate the Nginx config from the template as appropriate and then reload Nginx (a rough sketch follows below).
A lightweight version of the same idea could be done with a script that runs on the same machine as your Nagios server and performs simple monitoring as well as the config file updates.
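Here is a rough Python sketch of that template-and-reload approach; the backend addresses, load endpoint, threshold, and file paths are all assumptions for illustration:

```python
# Rough sketch: regenerate an nginx upstream block based on a load check,
# then reload nginx. Backends, threshold, endpoint, and paths are placeholders.
import subprocess
import urllib.request

BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080"]   # placeholder upstream nodes
BUSY_PAGE = "127.0.0.1:8081"                    # placeholder "server busy" backend
LOAD_THRESHOLD = 5.0
CONF_PATH = "/etc/nginx/conf.d/upstream.conf"

def backend_load(host):
    # Assumes each backend exposes a /load endpoint returning a number;
    # substitute whatever monitoring you actually run (e.g. Nagios checks).
    with urllib.request.urlopen(f"http://{host}/load", timeout=2) as resp:
        return float(resp.read())

def render_conf():
    lines = ["upstream backend {"]
    for host in BACKENDS:
        flag = " down" if backend_load(host) > LOAD_THRESHOLD else ""
        lines.append(f"    server {host}{flag};")
    # The busy page only receives traffic when every primary node is down.
    lines.append(f"    server {BUSY_PAGE} backup;")
    lines.append("}")
    return "\n".join(lines) + "\n"

with open(CONF_PATH, "w") as f:
    f.write(render_conf())
subprocess.run(["nginx", "-s", "reload"], check=True)
```

Run it from cron or a monitoring hook; Nginx's `backup` directive then handles routing new traffic to the busy page whenever all primary nodes are marked down.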
