Is it possible to host mysite.com/ from EC2 and mysite.com/logo.gif from CloudFront?
No, you won't be able to make the part of the URL after the domain name influence the DNS lookup for mysite.com. However, if you're willing to settle for something like "images.mysite.com/logo.gif", you can easily resolve images.mysite.com to your CloudFront distribution using a CNAME.
You could also configure the web server on your EC2 instance to redirect or proxy to CloudFront - but then your server is still getting hit every time that resource is loaded, which eliminates most of the benefit of using a CDN in the first place.
In a way, you can. You would need to use a reverse proxy on your web server at mysite.com.
http://en.wikipedia.org/wiki/Reverse_proxy
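For example, a rough Nginx sketch of that idea, assuming Nginx is the web server on the EC2 instance (the distribution hostname d1234abcd.cloudfront.net is a placeholder for your own):

```nginx
# Rough sketch: serve the site locally but reverse-proxy one path to CloudFront.
# "d1234abcd.cloudfront.net" is a placeholder for your distribution's domain.
server {
    listen 80;
    server_name mysite.com;

    # Normal site content served from the EC2 instance.
    location / {
        root /var/www/mysite;
    }

    # Proxy a single path (or a whole /static/ prefix) to CloudFront.
    location = /logo.gif {
        proxy_set_header Host d1234abcd.cloudfront.net;  # CloudFront matches on Host
        proxy_ssl_server_name on;                        # send SNI when proxying over HTTPS
        proxy_pass https://d1234abcd.cloudfront.net;
    }
}
```

As the previous answer notes, every request for that file still passes through your EC2 instance, so this keeps the single hostname but gives up most of the offloading benefit of the CDN.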
To agree with David (above), you can set up a DNS CNAME for your CloudFront distribution, but the best you could do would be a subdomain of your site. It's a better way to do things anyway, if you follow Yahoo! or Google website performance guidelines.
developer.yahoo.com/performance/rules.html
code.google.com/speed/page-speed/docs/rules_intro.html
We are using the CloudFlare service for CDN, security, and other services, and we are using Ajaxsnapshot to create snapshots for search bots. The problem is we are getting Error 1000 - DNS points to incorrect IP. When we switch off the CloudFlare settings, the Ajaxsnapshot API works and is able to create snapshots.
How can we solve this so we can use both services?
You should contact CloudFlare support so we can look at your DNS zone file. It sounds like something isn't set properly in DNS, or you're pointing to an IP that you shouldn't be.
I am developing an app (RoR + Heroku) which allows users to create their own websites, either using my subdomain (pagename.myapp.com) or using their own domain (pagename.com).
An important point is that this option is the key to my business: subdomains are the free plans and custom domains are the paid ones. So I have a table where I store each user's custom domain and check whether the page is active (exists and has paid the quota).
For that I need to give users the ability to point their domain to my servers. As we all know, Heroku doesn't recommend the use of DNS A-records.
I would also like to abstract this feature as much as possible, so that I can switch my infrastructure (Heroku to AWS) in the future without having to ask all my users to change their DNS zones. Taking this into account, I think the best option would be to run something like an EC2 proxy (using an AWS Elastic IP), which gives me ownership of the IP. This proxy, I think, should forward requests to proxy.myapp.com, and I would resolve the request at the app level.
Since I didn't find clear information about this, I am not sure whether this hypothesis is the best solution, or how to set up the proxy (which type of proxy should I use? Nginx, maybe?).
That said, I would like to ask for recommendations/best practices for implementing this "common" feature.
Thanks
What you are wanting to do is fairly straightforward to implement. Your assumptions are correct about setting up the proxy. Nginx or haproxy will both work great for this (I personally would use haproxy). Here are some of the gotchas that you will run into though:
Changing the host header at a proxy server can cause the end web application to generate incorrect links. You can use relative paths to fix this, but it requires that the web application developer be aware of the environment that they are running in.
user connects to www.example.com (proxy server)
proxy server connects to www.realdomain.com (web app)
the web app has a link for a shopping cart. www.realdomain.com/shoppingcart
the end user clicks on the link but the link is www.realdomain.com/shoppingcart instead of www.example.com/shoppingcart
The cost of the host acting as the proxy server. This can spiral out of control really quickly. For example, do you want redundancy, and if so, how are you planning on implementing that? Do you plan on having SSL termination? If so, you will have to increase the CPU count to accommodate the additional load. Do you want to have a secure connection to Heroku from your proxy? If you do, then you will need to increase the CPU count for that as well. You may have to add additional RAM as well, depending on the number of concurrent connections.
Heroku also changes their load balancers regularly. This is important because your proxy service will need to reload the config / update the IP addresses of the Heroku instances every 60 seconds. In my experience they may change once or twice a day, but the DNS entry that they use has a 60-second TTL. That means that you should make sure that you are capable of updating your config as often as every 60 seconds.
My company has been doing something very similar to this for almost a year now. We use haproxy and simply have it reload the config regularly. We have never had an outage or an interruption to our end users. Nginx is also a very good product. It has built-in DNS caching, so if you go that route you will need to make sure that you configure it correctly so that the DNS cache TTL is 60 seconds.
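To illustrate that last point, here is a rough Nginx sketch (the resolver address and yourapp.herokuapp.com are placeholders): using a variable in proxy_pass forces Nginx to re-resolve the backend hostname via the resolver, honoring valid=60s, instead of pinning the IPs it resolved at startup.

```nginx
# Rough sketch: re-resolve Heroku's load balancer IPs every 60 seconds.
# The resolver address and "yourapp.herokuapp.com" are placeholders.
http {
    resolver 172.31.0.2 valid=60s;        # your VPC / local DNS resolver

    server {
        listen 80 default_server;
        server_name _;                    # accept any customer domain

        location / {
            set $heroku_backend "yourapp.herokuapp.com";
            proxy_set_header Host $host;  # pass the customer's domain through
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_ssl_server_name on;
            # A variable here makes Nginx re-resolve the hostname via "resolver"
            # on the 60-second TTL instead of caching the startup lookup.
            proxy_pass https://$heroku_backend;
        }
    }
}
```

Passing the original Host through, as in this sketch, assumes the custom domain has also been added to the Heroku app; rewriting it to the herokuapp.com hostname instead brings back the link-generation gotcha described above.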
Will many of your clients want to use your app on their domain apex? E.g. example.com rather than theapp.example.com? If not, I would recommend having them CNAME to proxy.myapp.com, which CNAMEs to myapp.herokuapp.com. Then, you can update proxy.myapp.com without customer interruption.
If you do need apex or A record support, you would want to set up Nginx as a reverse proxy for your Heroku app. Keep in mind that if you need HTTPS support for client domains, you will need to do some sort of certificate management on your proxy.
I like the answer dtorgo gave and that he mentioned the TLS termination, which many online tutorials on custom domains don't touch at all.
I'll go into more detail on how to implement the custom domains feature for your SaaS while also handling the TLS/HTTPS.
If your customers just CNAME to your domain or create the A record to your IP and you don't handle TLS termination for these custom domains, your app will not support HTTPS, and without it, your app won't work in modern browsers on these custom domains.
You need to set up a TLS termination reverse proxy in front of your webserver. This proxy can be run on a separate machine but you can run it on the same machine as the webserver.
CNAME vs A record
If your customers want to have your app on their subdomain, e.g. app.customer.com they can create a CNAME app.customer.com pointing to your proxy.
If they want to have your app on their root domain, e.g. customer.com then they'll have to create an A record on customer.com pointing to your proxy's IP. Make sure this IP doesn't change, ever!
How to handle TLS termination?
To make TLS termination work, you'll have to issue TLS certificates for these custom domains. You can use Let's Encrypt for that. Your proxy will see the hostname the client asked for (via SNI in the TLS handshake, and the Host header of the request), e.g. app.customer1.com or customer2.com etc., and will use it to decide which TLS certificate to serve.
The proxy can be set up to automatically issue and renew certificates for these custom domains. On the first request from a new custom domain, the proxy will see it doesn't have the appropriate certificate. It will ask Let's Encrypt for a new certificate. Let's Encrypt will first issue a challenge to see if you manage the domain, and since the customer already created a CNAME or A record pointing to your proxy, that tells Let's Encrypt you indeed manage the domain, and it will let you issue a certificate for it.
To issue and renew certificates automatically, I'd recommend using Caddyserver, greenlock.js, or OpenResty (Nginx).
tl;dr on what happens here:
Caddyserver listens on 443 and 80; it receives requests, issues and renews certificates automatically, and proxies traffic to your backend.
How to handle it on my backend
Your proxy is terminating TLS and proxying requests to your backend. However, your backend doesn't know which customer the original request was for. This is why you need to tell your proxy to include an additional header in proxied requests to identify the customer. Just add X-Serve-For: app.customer.com or X-Serve-For: customer2.com, or whatever the Host header of the original request is.
Now when you receive the proxied request on the backend, you can read this custom header and you know which customer the request belongs to. You can implement your logic based on that, show data belonging to this customer, etc.
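With Nginx as the proxy, that is just an extra proxy_set_header directive in the proxy location (the header name X-Serve-For and the backend address are illustrative); on a Rails backend you would then read it with something like request.headers["X-Serve-For"].

```nginx
# Sketch: forward the original customer hostname (and proxy info) to the backend.
location / {
    proxy_set_header X-Serve-For $host;                  # e.g. app.customer.com
    proxy_set_header X-Forwarded-Proto $scheme;          # tell the app TLS was terminated here
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:3000;                    # placeholder backend address
}
```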
More
Put a load balancer in front of your fleet of proxies for higher availability. You'll also have to use distributed storage for certificates and Let's Encrypt challenges. Use AWS ECS or EBS for automated recovery if something fails; otherwise, you may find yourself waking up in the middle of the night restarting machines, or your proxy, manually.
If you need more detail you can DM me on Twitter #dragocrnjac
I read this:
"I can't use this until I can serve my root domain without redirection to "www". Can Amazon designate an IP address (or set of IP addresses) for S3 that I can point my root A record to?"
Is it still true that I need to have a domain host just as a proxy to S3, and set up a CNAME to point a subdomain to the S3 bucket? And is there no better way?
There are no better ways, only more costly ways.
You can set up an EC2 image with a proxy, and allow the proxy to access S3 on your behalf, while accessing the remainder of your web site somewhere else. Since scalability is a concern of yours, you'll also want to use the automatic scaling tools for EC2 as access to your proxy grows.
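A minimal Nginx sketch of such a proxy, assuming the site lives in an S3 bucket configured for website hosting (bucket name and region are placeholders):

```nginx
# Sketch: answer for the root domain on EC2 and proxy requests through to
# an S3 bucket configured for website hosting.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_set_header Host mybucket.s3-website-us-east-1.amazonaws.com;
        # S3 website endpoints are HTTP-only, hence the plain proxy_pass.
        proxy_pass http://mybucket.s3-website-us-east-1.amazonaws.com;
    }
}
```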
Or, just re-architect your application to use the CNAME-based approach for all content located in S3.
So I'm looking for a server host for a website that I'm building. Generally I know from continually visiting sites which ones I like and which I don't. I think this is a much better way than simply measuring ping times to determine speed.
So I want to know if there's a way to find out which hosting companies are hosting certain domains. Is this possible? whois.domaintools.com tells you certain information about the nameservers, plus DNS information about the host IP location. This is fine, but I still don't get the website URL where I too can sign up for hosting. Often the IP location name resolves to something really formal like XYZ Mallard Group Company LTD, so this is basically useless to me. I need the hosting provider's web URL instead.
Anyone got any ideas how I can find the website URL of the company hosting a given domain?
Thanks,
Marv
http://www.whois.net/ will tell you the DNS servers (which a lot of the time point to the host you can sign up with, i.e. dns.dreamhost.com points you to dreamhost.com), and it also gives you more info about the registrar.
Looking up the hostnames behind the server's IP addresses and its DNS servers will help you identify the hosting provider.
We usually blacklist IP addresses with iptables. But in Amazon EC2, if a connection goes through the Elastic Load Balancer, the remote address will be replaced by the load balancer's address, rendering iptables useless. In the case of HTTP, apparently the only way to find out the real remote address is to look at the HTTP header HTTP_X_FORWARDED_FOR. To me, blocking IPs at the web application level is not an effective approach.
What is the best practice to defend against DoS attack in this scenario?
In this article, someone suggested that we can replace Elastic Load Balancer with HAProxy. However, there are certain disadvantages in doing this, and I'm trying to see if there is any better alternatives.
I think you have described all the current options. You may want to chime in on some of the AWS forum threads to vote for a solution - the Amazon engineers and management are open to suggestions for ELB improvements.
If you deploy your ELB and instances using VPC instead of EC2-classic, you can use Security Groups and Network ACLs to restrict access to the ELB.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/USVPC_ApplySG.html
It's common to run an application server behind a reverse proxy. Your reverse proxy is a layer you can use to add DoS protection before traffic gets to your application server. For Nginx, you can look at the rate limiting module as something that could help.
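For example, a minimal Nginx sketch that combines the realip module with the rate limiting module, so the limit is keyed on the client address from X-Forwarded-For rather than the ELB's address (the trusted subnet, rate, and backend address are placeholders):

```nginx
# Sketch: rate-limit on the real client IP when Nginx sits behind an ELB.
http {
    # Trust X-Forwarded-For only when it comes from the ELB / VPC range.
    set_real_ip_from 10.0.0.0/16;
    real_ip_header X-Forwarded-For;

    # At most 10 requests/second per client IP, tracked in a 10 MB zone.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        listen 80;

        location / {
            limit_req zone=perip burst=20 nodelay;
            proxy_pass http://127.0.0.1:8080;   # placeholder app server
        }
    }
}
```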
You could set up an EC2 host and run haproxy there yourself (that's what Amazon is using anyways!). Then you can apply your iptables filters on that system.
Here's a tool I made for those looking to use Fail2Ban on AWS with Apache, ELB, and ACL: https://github.com/anthonymartin/aws-acl-fail2ban