How do we enable HTTPS on Amazon EC2? Our site currently works over HTTP.
First, you need to open the HTTPS port (443). To do that, go to https://console.aws.amazon.com/ec2/, click the Security Groups link on the left, and create a new security group that also allows HTTPS.
Then, just update the security group of a running instance or create a new instance using that group.
After these steps, your EC2 work is finished; the rest is an application-level problem.
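If you prefer to script this instead of clicking through the console, here is a minimal sketch using boto3 (the AWS SDK for Python); the security group ID and region are placeholders:

```python
import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Open HTTPS (port 443) to the world on an existing security group.
# "sg-0123456789abcdef0" is a placeholder; use your web server's group ID.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)
```

Once the rule is in place and the group is attached to your instance, whatever listens on 443 (Apache, Nginx, or a load balancer) becomes reachable over HTTPS.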
This answer is focused on someone who bought a domain on another site (such as GoDaddy) and wants to use the free Amazon certificate from Certificate Manager.
This answer uses an Amazon Classic Load Balancer (a paid service); check the pricing before using it.
Step 1 - Request a certificate with Certificate Manager
Go to Certificate Manager > Request Certificate > Request a public certificate
Under Domain name, add myprojectdomainname.com and *.myprojectdomainname.com, then click Next.
Choose Email validation, then Confirm and Request.
Open the email you received (on the account you used to buy the domain) and approve the request.
After this, check that the validation status of myprojectdomainname.com and *.myprojectdomainname.com is Success. If it is, you can continue to Step 2.
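The same request can also be made from code if you prefer scripting. This is just a sketch with boto3; the domain name is the example from above and the region is assumed to be us-east-1:

```python
import boto3

acm = boto3.client("acm", region_name="us-east-1")  # assumed region

# Request a public certificate for the apex domain plus a wildcard,
# validated by email as in the console steps above.
response = acm.request_certificate(
    DomainName="myprojectdomainname.com",
    SubjectAlternativeNames=["*.myprojectdomainname.com"],
    ValidationMethod="EMAIL",
)
print(response["CertificateArn"])  # keep the ARN for the load balancer step
```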
Step 2 - Create a security group for the load balancer
In EC2, go to Security Groups > Create Security Group and add HTTP and HTTPS inbound rules.
It will be something like:
Step 3 - Create the Load Balancer
EC2 > Load Balancer > Create Load Balancer > Classic Load Balancer (Third option)
Create LB inside: the VPC of your project
On Load Balancer Protocol, add HTTP and HTTPS
Next > Select existing security group
Choose the security group that you created in the previous step
Next > Choose certificate from ACM
Select the certificate of the step 1
Next >
For the health check, I used the ping path / (a single slash instead of /index.html)
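For reference, steps 2 and 3 can also be done with boto3. This is a rough sketch only; the subnet, security group, and certificate ARN are placeholders you would replace with the values from the previous steps:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")  # Classic ELB API, assumed region

# Placeholders: replace the subnet, security group and certificate ARN
# with the values created in steps 1 and 2.
elb.create_load_balancer(
    LoadBalancerName="myproject-lb",
    Listeners=[
        {"Protocol": "HTTP", "LoadBalancerPort": 80,
         "InstanceProtocol": "HTTP", "InstancePort": 80},
        {"Protocol": "HTTPS", "LoadBalancerPort": 443,
         "InstanceProtocol": "HTTP", "InstancePort": 80,
         "SSLCertificateId": "arn:aws:acm:us-east-1:123456789012:certificate/example"},
    ],
    Subnets=["subnet-0123456789abcdef0"],
    SecurityGroups=["sg-0123456789abcdef0"],
)

# Health check on "/" as described above.
elb.configure_health_check(
    LoadBalancerName="myproject-lb",
    HealthCheck={"Target": "HTTP:80/", "Interval": 30, "Timeout": 5,
                 "UnhealthyThreshold": 2, "HealthyThreshold": 2},
)
```

Step 4 below can likewise be scripted with register_instances_with_load_balancer if you want the instance registered from code as well.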
Step 4 - Associate your instance with the security group of load balancer
EC2 > Instances > click on your project > Actions > Networking > Change Security Groups
Add the Security Group of your Load Balancer
Step 5
EC2 > Load Balancer > click on the load balancer that you created and copy the DNS Name (A Record); it will be something like myproject-2021611191.us-east-1.elb.amazonaws.com
Go to Route 53 > Hosted Zones > click on the domain name > go to Record Sets
(If you don't have your domain here, create a hosted zone with Domain Name: myprojectdomainname.com and Type: Public Hosted Zone.)
Check whether you have a record of type A (probably not); create/edit a record set with an empty name, type A, Alias: Yes, and the DNS name you copied as the target
Also create a new record set of type A, name *.myprojectdomainname.com, Alias: Yes, and your domain (myprojectdomainname.com) as the target. This makes it possible to access your site via www.myprojectdomainname.com and subsite.myprojectdomainname.com. Note: you will need to configure your reverse proxy (Nginx/Apache) to handle those hosts.
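If you'd rather create the alias record from code, here's a hedged boto3 sketch; the hosted zone ID, the load balancer's DNS name, and the load balancer's canonical hosted zone ID (shown on its description tab) are all placeholders:

```python
import boto3

route53 = boto3.client("route53")

# Placeholders: your hosted zone ID and the load balancer's canonical
# hosted zone ID (both visible in the console).
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "myprojectdomainname.com.",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z0000000ELBEXAMPLE",
                "DNSName": "myproject-2021611191.us-east-1.elb.amazonaws.com.",
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)
```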
Under NS, copy the 4 name server values to use in the next step; they will look something like:
ns-362.awsdns-45.com
ns-1558.awsdns-02.co.uk
ns-737.awsdns-28.net
ns-1522.awsdns-62.org
Go to EC2 > Instances and copy the IPv4 Public IP as well
Step 6
On the registrar site where you bought the domain (GoDaddy in my case):
Change the forwarding to http://<your IPv4 public IP> and select Forward with masking
Change the name servers (NS) to the 4 NS values you copied; this can take up to 48 hours to take effect
Amazon EC2 instances are just virtual machines, so you would set up SSL the same way you would set it up on any server.
You don't mention what platform you are on, so it's difficult to give any more information.
An old question, but it's worth mentioning another option among the answers.
If your domain's DNS is managed in Amazon Route 53, you can use Amazon CloudFront in front of your EC2 instance and attach a free Amazon SSL certificate to it. This way you benefit both from having a CDN for faster content delivery and from securing your domain with HTTPS.
You can also use Amazon API Gateway. Put your application behind API Gateway. Please check this FAQ
There should also be an answer for people who want hassle-free HTTPS on EC2, mainly for demo and testing purposes. One way to achieve that very quickly is:
my answer here, which describes how you can achieve HTTPS for testing purposes in minutes on EC2 without the hassle of creating certificates
One of the best resources I found was Let's Encrypt. You do not need an ELB or CloudFront for your EC2 instance to have HTTPS; just follow these simple instructions:
Let's Encrypt
Login to your server and follow the steps in the link.
It is also important, as mentioned by others, to have port 443 open by editing your security groups.
You can view your certificate, or any other website's, by changing the site name in this link.
Please do not forget that it is only valid for 90 days
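Since the certificate is only valid for 90 days, it can help to check the expiry date programmatically. Here is a small sketch using only Python's standard library (example.com is a placeholder for your own domain):

```python
import socket
import ssl
from datetime import datetime, timezone

hostname = "example.com"  # placeholder: your own domain

# Connect over TLS and fetch the certificate the server presents.
context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

# "notAfter" is the certificate's expiry date.
expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]),
                                 tz=timezone.utc)
days_left = (expires - datetime.now(timezone.utc)).days
print(f"{hostname} certificate expires {expires:%Y-%m-%d} ({days_left} days left)")
```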
Use Elastic Load Balancing; it supports SSL termination at the load balancer, including offloading SSL decryption from application instances and providing centralized management of SSL certificates.
You need to register a domain (on GoDaddy, for example) and put a load balancer in front of your EC2 instance, as DigaoParceiro said in his answer.
The issue is that the domains Amazon generates for your EC2 instances are ephemeral. Today the domain belongs to you; tomorrow it may not.
For that reason, Let's Encrypt throws an error when you try to issue a certificate for an Amazon-generated domain, stating:
The ACME server refuses to issue a certificate for this domain name, because it is forbidden by policy
More details about this here:
https://community.letsencrypt.org/t/policy-forbids-issuing-for-name-on-amazon-ec2-domain/12692/4
You need to create a security group for HTTPS and assign it to your webserver:
Open the Amazon EC2 console.
Choose Security Groups in the navigation pane.
Choose Create Security Group.
For Create Security Group, do the following:
For the Security group name, type a name for the security group that you are creating.
(Optional) Type a description of the security group that you are creating.
For VPC, choose the VPC that contains your web server Amazon EC2 instance.
Choose Add Rule. For Type, choose HTTPS.
Choose Create.
In the navigation pane, choose Instances.
Select the check box next to your web server instance. Then choose Actions, Networking, and Change Security Groups.
Select the check box next to the security group that you created for HTTPS. Then choose Assign Security Groups.
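If you'd rather do the same thing from a script, here is a rough boto3 equivalent of the console steps above; the VPC ID, instance ID, and existing group ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholders: the VPC of your web server and your instance ID.
sg = ec2.create_security_group(
    GroupName="webserver-https",
    Description="Allow HTTPS to the web server",
    VpcId="vpc-0123456789abcdef0",
)

# Add the HTTPS inbound rule to the new group.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# Note: modify_instance_attribute replaces the instance's full list of
# security groups, so include the groups it already has as well.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    Groups=["sg-existing-group-id", sg["GroupId"]],
)
```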
To verify SSL/TLS offload with a web browser
Use a web browser to connect to your web server using the public DNS name or IP address of the server.
Ensure that the URL in the address bar begins with https://.
For example, https://ec2-52-14-212-67.us-east-2.compute.amazonaws.com/.
Related
I have an Azure web app up and running, using a custom domain purchased outside of Azure... and that all runs fine. So I have https://myappname.azurewebsites.net/ loading fine with my domain name URL https://www.myappname.com
I'm trying to upgrade the web app, though using Azure Traffic Manager. I've cloned the app a few times, each on its own app service plan, and I have the traffic manager all up and running fine. I can successfully hit different versions of my cloned website based on the traffic manager configuration profile... so no issues there.
The only issue is that I can only access the "traffic managed" version of my website via the standard azure URL -> myappname.trafficmanager.net.
All examples I've seen say all I really need to do now, is go into my DNS Management screen, and add domain forwarding, however, my online DNS management tool does not offer this option.
I can't really change my A record in the DNS management screen, because I don't know the IP address of myappname.trafficmanager.net
Every place I've tried to change the name of the current/working Azure URL (like in awverify text files, www CNAMEs, etc.) does nothing. The DNS still points to the single instance, which remains the IP address in the DNS manager's A record.
Also, since my live/single instance is linked to the domain name (along with the SSL binding), I can't add those properties to the clones, which makes sense....only one version can be live. However I could unbind that when I make the switch from the single instance web app to the traffic managed set of clones, but I fear I can only bind that to one of the clones. I can't seem to bind it to the myappname.trafficmanager.net version, which might cascade down to all of its endpoints. Is there a way to bind my domain name and SSL cert to more than one version of my web app?
Thanks!
Is there a way to bind my domain name and SSL cert to more than one version of my web app?
I don't think you can do that unless you have two different domains or subdomains, each with its own SSL cert. Each web app hostname is globally unique, and each SSL binding is attached to the web app's domain name.
If you have a purchased domain and just keep the default xxx.azurewebsites.net as each hostname, you can configure the two Azure app services as the endpoints of Traffic Manager.
By default, Azure provides a wildcard cert for the domain *.azurewebsites.net, so you can access these hostnames over HTTPS without any extra cert. Then, at your DNS provider, create a CNAME record for www in your domain (domain.com) pointing to the Traffic Manager hostname myappname.trafficmanager.net. Since Traffic Manager works at the DNS level, it does not validate server or client SSL, so you can safely ignore the SSL warning when accessing via the Traffic Manager hostname.
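If you want to confirm the CNAME chain once DNS has propagated, a small check like this works; it is a sketch using the third-party dnspython package, and www.myappname.com is a placeholder for your custom domain:

```python
# Requires the third-party dnspython package: pip install dnspython
import dns.resolver

domain = "www.myappname.com"  # placeholder: your custom domain

# Confirm the CNAME points at the Traffic Manager hostname.
for record in dns.resolver.resolve(domain, "CNAME"):
    print(f"{domain} -> {record.target}")
```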
Feel free to let me know if you have any questions.
The instance is running fine. I am using Linux and Apache Tomcat 8.0.33. I can access it from the private IP using PuTTY, but when I try to access it through the public IP, it is not accessible. I have checked the security configuration; all ports are enabled.
Can anyone help me resolve this issue?
(screenshot of inbound rules)
I faced the same issue recently; I was not able to access the website I hosted on an EC2 server via its public IP.
Check 1: First, check your AWS security group and make sure all the inbound traffic rules are correct.
Check 2: The Windows firewall can also block access via the public IP. Create a new rule allowing access to the HTTP and HTTPS ports (80, 443).
Steps
a. Go to Control Panel --> Windows Firewall --> Advanced Settings.
b. Select the Inbound rules from the left Menu.
c. Select New Rule from the Right panel.
d. Allow access to ports 80 and 443.
In my case, everything worked fine once I created a new rule in Windows Firewall under Inbound Rules.
Open your Amazon web console.
Go to Amazon EC2 Security Groups.
You should have a default group for inbound rules (see below).
Click on Modify inbound rules ("modifier les règles entrantes" in French here).
Once done, add your public IP with the subnet you want.
I added my public IP address, and you should be good.
Regardless of the number of ports open in your security group, if you need to access your EC2 instance via its public IP over the internet, you must attach an internet gateway (IGW) to the subnet your EC2 instance belongs to:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html
As you mentioned, and as other answers note, you can find the problem by following these steps:
1. Try telnetting to your server's public IP address on port 80. If it connects, go to the next step; if not, you have two possible issues:
security group (check your inbound rules)
web server settings (check your web server configuration and find out why it is not listening on port 80)
2. If telnet works, you don't have a connectivity issue. Now tail your web server's access log and try to open a page in the browser. If you see your request in the access log but it does not return the value you expected, check your web application. If you can't see your request, check your web server settings.
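For step 1, if telnet isn't installed, a quick Python check does the same thing (a sketch; the IP address is a placeholder for your instance's public IP):

```python
import socket

host = "203.0.113.10"  # placeholder: your instance's public IP
port = 80

# Equivalent of "telnet <host> 80": try to open a TCP connection.
try:
    with socket.create_connection((host, port), timeout=5):
        print(f"Port {port} on {host} is reachable; check the web server next.")
except OSError as exc:
    print(f"Cannot connect to {host}:{port} ({exc}); "
          "check the security group inbound rules and the web server.")
```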
Please help with the use case below.
We have an AWS EC2 instance with a public IP or load balancer DNS (public.ip or application.lb.amazonaws.com), where a custom web app is running as the target.
We have another VM instance (e.g. private.ip) within our data center (DC), where the same web app is running as the source.
We need web-based communication between these two instances, but currently it happens over HTTP. We have already handled all connectivity issues and are now able to communicate between the two instances.
We're accessing the source and target URLs as http://public.ip:31415 or application.lb.amazonaws.com:31416.
Now we need to convert the HTTP URLs above to HTTPS, along with a custom domain name. This domain name will not be public; it will be resolved only within our office network. E.g. domain names: test.source.apps and test.target.apps.
We would make an entry in our local machine's /etc/hosts (similar to the example below) so this name resolution works for now in test; for other environments, we plan to add entries in our internal office DNS servers.
Example /etc/hosts:
Target:
test.target.app public.ip.ec2.server
(or) test.target.app application.lb.amazonaws.com
Source:
test.source.app data.center.ip
We don't want any paid SSL option (like a commercial CA or purchased public domain), because this URL will be used only by 2-3 developers and only within the office network. But as part of security compliance, we definitely need to make this an HTTPS URL.
The web apps are running on the Jetty web server. We've planned to do this using Let's Encrypt + a custom domain.
Can anyone suggest whether this is possible in AWS, and any steps on how to make this change (i.e. creating a subdomain that is internal to our host/network and using Let's Encrypt SSL)?
I just created an instance on Amazon EC2 and I can SSH to the server. I installed the Apache2 server and it is up, but when I try to access it via its public IP, the browser times out. I have little experience with EC2, so your ideas would be worthwhile for me.
Thanks
From EC2 Docs:
If you are unable to see the Apache test page, check that the security group you are using contains a rule to allow HTTP (port 80) traffic. For information about adding an HTTP rule to your security group, see Adding Rules to a Security Group.
By default, all traffic ports are blocked for security reasons. You need to add an inbound rule to allow HTTP traffic (port 80).
You can add inbound rules by following these steps:
From the EC2 Dashboard, find "Security Groups" and then "Create Security Group".
Give the security group a name, a description, and rules as shown in the picture, then create the security group.
Now you can access your EC2 public IP from anywhere.
I am developing an app (RoR + Heroku) which allows users to create their own websites, either using my subdomain (pagename.myapp.com) or using their own domain (pagename.com).
An important point is that this option is the key to my business: subdomains are the free plans and custom domains are the paid ones. So I have a table where I store each user's custom domain and check whether the page is active (exists and has paid the quota).
For that, I need to give users the ability to point their domain to my servers. As we all know, Heroku doesn't recommend using DNS A records.
I would also like to abstract this feature as much as possible, so that I can switch my infrastructure (Heroku to AWS) in the future without having to ask all my users to change their DNS zone. With that in mind, I think the best option would be to run something like an EC2 proxy (using an AWS Elastic IP), which gives me ownership of that IP. I think this proxy should redirect to proxy.myapp.com, and I would resolve the request at the app level.
Since I didn't find clear information about this, I'm not sure whether this hypothesis is the best solution, or how to set up the proxy (which type of proxy should I use? Nginx maybe?).
That said, I would like to ask for recommendations/best practices for implementing this "common" feature.
Thanks
What you want to do is fairly straightforward to implement. Your assumptions about setting up the proxy are correct. Nginx or HAProxy will both work great for this (I personally would use HAProxy). Here are some of the gotchas that you will run into, though:
Changing the host header at a proxy server can cause the end web application to generate incorrect links. You can use relative paths to fix this, but it requires the web application developer to be aware of the environment they are running in. For example:
user connects to www.example.com (proxy server)
proxy server connects to www.realdomain.com (web app)
the web app has a link to a shopping cart: www.realdomain.com/shoppingcart
the end user clicks the link, but the link is www.realdomain.com/shoppingcart instead of www.example.com/shoppingcart
The cost of the host acting as the proxy server can spiral out of control really quickly. For example: do you want redundancy, and if so, how are you planning to implement it? Do you plan on having SSL termination? If so, you will have to increase the CPU count to accommodate the additional load. Do you want a secure connection to Heroku from your proxy? If so, you will need to increase the CPU count for that as well. You may also have to add additional RAM, depending on the number of concurrent connections.
Heroku also changes its load balancers regularly. This is important because your proxy service will need to reload its config / update the IP addresses of the Heroku instances as often as every 60 seconds. In my experience they may change once or twice a day, but the DNS entry they use has a 60-second TTL. That means you should make sure you are capable of updating your config every 60 seconds.
My company has been doing something very similar to this for almost a year now. We use HAProxy and simply have it reload the config regularly. We have never had an outage or an interruption for our end users. Nginx is also a very good product; it has built-in DNS caching, so if you go that route you will need to make sure you configure it so that the DNS cache TTL is 60 seconds.
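A minimal sketch of that re-resolution loop, assuming a Heroku-style upstream hostname (myapp.herokuapp.com is a placeholder) and that the actual proxy reload happens elsewhere:

```python
import socket
import time

UPSTREAM = "myapp.herokuapp.com"  # placeholder upstream hostname
known_ips = set()

while True:
    # Re-resolve the upstream roughly once per TTL (60 s) and detect changes.
    infos = socket.getaddrinfo(UPSTREAM, 443, proto=socket.IPPROTO_TCP)
    current = {info[4][0] for info in infos}
    if current != known_ips:
        known_ips = current
        print(f"Upstream IPs changed: {sorted(current)}")
        # Regenerate the HAProxy/Nginx config and trigger a graceful reload here.
    time.sleep(60)
```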
Will many of your clients want to use your app on their domain apex? E.g. example.com rather than theapp.example.com? If not, I would recommend having them CNAME to proxy.myapp.com, which CNAMEs to myapp.herokuapp.com. Then you can update proxy.myapp.com without customer interruption.
If you do need apex or A record support, you would want to set up Nginx as a reverse proxy for your Heroku app. Keep in mind that if you need HTTPS support for client domains, you will need to do some sort of certificate management on your proxy.
I like the answer dtorgo gave, and that he mentioned TLS termination, which many online tutorials on custom domains don't touch at all.
I'll go into more detail on how to implement the custom domains feature for your SaaS while also handling the TLS/HTTPS.
If your customers just CNAME to your domain or create the A record to your IP and you don't handle TLS termination for these custom domains, your app will not support HTTPS, and without it, your app won't work in modern browsers on these custom domains.
You need to set up a TLS termination reverse proxy in front of your webserver. This proxy can run on a separate machine, but you can also run it on the same machine as the webserver.
CNAME vs A record
If your customers want to have your app on their subdomain, e.g. app.customer.com they can create a CNAME app.customer.com pointing to your proxy.
If they want to have your app on their root domain, e.g. customer.com then they'll have to create an A record on customer.com pointing to your proxy's IP. Make sure this IP doesn't change, ever!
How to handle TLS termination?
To make TLS termination work, you'll have to issue TLS certificates for these custom domains. You can use Let's Encrypt for that. Your proxy will see the Host header of the incoming request, e.g. app.customer1.com or customer2.com etc., and then it will decide which TLS certificate to use by checking the SNI.
The proxy can be set up to automatically issue and renew certificates for these custom domains. On the first request from a new custom domain, the proxy will see it doesn't have the appropriate certificate. It will ask Let's Encrypt for a new certificate. Let's Encrypt will first issue a challenge to see if you manage the domain, and since the customer already created a CNAME or A record pointing to your proxy, that tells Let's Encrypt you indeed manage the domain, and it will let you issue a certificate for it.
To issue and renew certificates automatically, I'd recommend using Caddyserver, greenlock.js, OpenResty (Nginx).
tl;dr of what happens here:
Caddyserver listens on 443 and 80; it receives requests, issues and renews certificates automatically, and proxies traffic to your backend.
How to handle it on my backend
Your proxy is terminating TLS and proxying requests to your backend. However, your backend doesn't know who the original customer behind the request is. This is why you need to tell your proxy to include additional headers in proxied requests to identify the customer. Just add X-Serve-For: app.customer.com or X-Serve-For: customer2.com, or whatever the Host header of the original request is.
Now when you receive the proxied request on the backend, you can read this custom header and you know which customer is behind the request. You can implement your logic based on that: show the data belonging to this customer, etc.
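As a small illustration of the backend side, here's a sketch using Flask (an assumption; any framework works) that reads the X-Serve-For header described above:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # The TLS-terminating proxy forwards the original Host in this custom header.
    customer_domain = request.headers.get("X-Serve-For", "unknown")
    # Look up the tenant that owns this domain and render their content.
    return f"Serving content for {customer_domain}"

if __name__ == "__main__":
    app.run(port=8080)  # the proxy forwards requests to this backend port
```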
More
Put a load balancer in front of your fleet of proxies for higher availability. You'll also have to use distributed storage for certificates and Let's Encrypt challenges. Use AWS ECS or EBS for automated recovery if something fails; otherwise, you may be waking up in the middle of the night restarting machines, or restarting your proxy manually.
If you need more detail you can DM me on Twitter #dragocrnjac