I have Jenkins running on an EC2 instance at myip:8080.
I have a subdomain, jenkins.mydomain.com, that I want to point to Jenkins on that EC2 instance.
Obviously I want it to load over HTTPS, so I was thinking of perhaps creating a CloudFront distribution that points to an S3 bucket which just redirects to the EC2 instance.
In your opinion, what is the best approach to accomplish this? Thanks
jenkins.mydomain.com —> nginx (SSL termination) —> EC2 Jenkins
If you wish, you can also encrypt the traffic from nginx to the backend servers.
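A minimal sketch of that nginx layer, assuming Jenkins runs on the same host and a certificate and key already exist at the paths shown (all paths and names below are placeholders):

    # Hypothetical paths and addresses; adjust to your setup.
    sudo tee /etc/nginx/conf.d/jenkins.conf <<'EOF'
    server {
        listen 443 ssl;
        server_name jenkins.mydomain.com;

        ssl_certificate     /etc/nginx/ssl/jenkins.crt;    # placeholder
        ssl_certificate_key /etc/nginx/ssl/jenkins.key;    # placeholder

        location / {
            proxy_pass http://127.0.0.1:8080;              # Jenkins listening on 8080
            proxy_set_header Host              $host;
            proxy_set_header X-Real-IP         $remote_addr;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
    EOF
    sudo nginx -t && sudo systemctl reload nginx

With that in place, the DNS record for jenkins.mydomain.com simply points at the instance (ideally an Elastic IP), and nginx terminates SSL before proxying to Jenkins.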
I’m trying to use HTTPS on my EC2 instance.
Currently, my URL looks like this: 192.168.0.1:8443, and it works great.
However, due to the HTTPS requirements of Stripe and other applications, I need the URL to look like this: https://dev.domain.com
I should add that I am using Cloudflare as my DNS Manager.
I’ve tried Googling how to set this up with no luck. Maybe I’m searching for the wrong thing.
Can someone help me achieve this setup?
Thank you in advance!
You need to configure Route 53 to create a hosted zone for your domain, and then add a record set that points to your EC2 server's IP for that hostname. Please follow the link for detailed instructions on setting up a website with EC2:
AWS link
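If you go the Route 53 route, a rough sketch with the AWS CLI would look like this (the zone ID, hostname, and IP are placeholders):

    # Create or update an A record pointing the hostname at the EC2 server's IP.
    aws route53 change-resource-record-sets \
      --hosted-zone-id Z1234567890ABC \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "dev.domain.com",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "203.0.113.10"}]
          }
        }]
      }'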
Which web server (httpd/IIS) are you enabling on this EC2 instance?
Try these steps if it is a Linux box:
SSL-on-an-instance
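On an Amazon Linux box running Apache, the broad strokes are roughly the following (package and file names vary by distribution, and the certificate paths are placeholders):

    # Install mod_ssl so Apache can terminate HTTPS on port 443.
    sudo yum install -y mod_ssl
    # Then edit /etc/httpd/conf.d/ssl.conf so these directives reference your files:
    #   SSLCertificateFile    /etc/pki/tls/certs/dev.domain.com.crt
    #   SSLCertificateKeyFile /etc/pki/tls/private/dev.domain.com.key
    sudo apachectl configtest && sudo systemctl restart httpd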
I ended up adding rules for ports 80 and 443 to my EC2 instance's security group, and then telling Apache to listen on port 80 instead of 8443. That allowed me to remove the appended :8443 from the URL, so I was able to copy the DNS info into Cloudflare as a CNAME and begin using my domain name. Before, I wasn't able to use my server info because the URL had to have :8443 appended, which Cloudflare doesn't like.
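For reference, the Apache side of that change looks roughly like this (file locations differ between distributions; the hostname and DocumentRoot are placeholders):

    # In the main Apache config (e.g. /etc/httpd/conf/httpd.conf), make sure it says:
    #   Listen 80
    # Then add a plain virtual host for the site.
    sudo tee /etc/httpd/conf.d/dev.domain.com.conf <<'EOF'
    <VirtualHost *:80>
        ServerName dev.domain.com
        DocumentRoot /var/www/html
    </VirtualHost>
    EOF
    sudo apachectl configtest && sudo systemctl restart httpd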
What would be the simplest way to associate a CloudFront distribution with an EC2 server?
The public IP/DNS keeps changing as I shut down and start the instance.
The simplest way would probably be to associate an Elastic IP.
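A rough sketch with the AWS CLI (the instance ID and allocation ID are placeholders):

    # Allocate an Elastic IP and attach it to the instance.
    aws ec2 allocate-address --domain vpc
    # Use the AllocationId returned by the previous command.
    aws ec2 associate-address \
      --instance-id i-0123456789abcdef0 \
      --allocation-id eipalloc-0123456789abcdef0

With a stable address (or a DNS name pointing at it) as the distribution's custom origin, stopping and starting the instance no longer breaks CloudFront.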
Though similar to "Amazon EC2 How Do I Host a PDF File on my Instance?", this is not a simple case of case sensitivity.
I currently have the file I'd like to be publicly available in /var/www/html, which is the DocumentRoot (though note we've also got Amazon EBS set up), but going to ourinstance/file.pdf nonetheless gives a 404 Not Found.
I'd like to avoid having to use S3.
You need to set up a web server (Apache/nginx) and configure it to serve the file from a domain. To do that, you need to set up a virtual host (or its equivalent in nginx).
Once done, start the server and, assuming the DNS settings are correct, your file should be served.
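If Apache is already meant to be serving /var/www/html, a few quick checks usually narrow down a 404 like this before you touch the virtual hosts (service and file names below assume a typical Amazon Linux/Apache setup and may differ on your box):

    # Is the web server running and listening on port 80?
    sudo systemctl status httpd        # apache2 on Debian/Ubuntu
    sudo ss -tlnp | grep ':80'
    # Can the web server user actually read the file?
    ls -l /var/www/html/file.pdf
    # Fetch it locally to rule out DNS or security-group problems:
    curl -I http://localhost/file.pdf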
I think this question is better suited to Server Fault than Stack Overflow.
I hired a freelancer to develop a PHP CI application hosted on Amazon EC2, and the app doesn't work. I am using Wowza with EC2 and S3. I have been seeing permission denied problems. I have Ubuntu and I'm trying to install a LAMP server and run public DNS on the instance. I have set up SSH as well.
I found the Elastic IP of the instance we are running and used the GoDaddy domain manager. I thought that simply pointing the domain to the instance would work. Do I have to change the nameservers on GoDaddy's side as well? Where would I find the right ones?
I have very little server-side understanding. I'm sure the solution is just a simple change, something like one line of code, a different user name or a different ID number. What do I need to do?
You should point your domain to the Elastic IP of your EC2 instance. This is done wherever your DNS is hosted; if you don't have your own DNS server, you can change the settings inside your GoDaddy account to point to the DNS service you want to use.
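Concretely, that means an A record for the domain resolving to the Elastic IP. Once the record exists you can check it from any machine (the domain and address below are placeholders):

    # See what the domain currently resolves to.
    dig +short mydomain.com
    # Expected output once DNS has propagated: your Elastic IP, e.g. 203.0.113.10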
We usually blacklist IP addresses with iptables. But in Amazon EC2, if a connection goes through the Elastic Load Balancer, the remote address is replaced by the load balancer's address, rendering iptables useless. In the case of HTTP, apparently the only way to find out the real remote address is to look at the HTTP_X_FORWARDED_FOR header. To me, blocking IPs at the web application level is not an effective approach.
What is the best practice to defend against DoS attack in this scenario?
In this article, someone suggested that we can replace the Elastic Load Balancer with HAProxy. However, there are certain disadvantages in doing this, and I'm trying to see if there are any better alternatives.
I think you have described all the current options. You may want to chime in on some of the AWS forum threads to vote for a solution - the Amazon engineers and management are open to suggestions for ELB improvements.
If you deploy your ELB and instances using VPC instead of EC2-classic, you can use Security Groups and Network ACLs to restrict access to the ELB.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/USVPC_ApplySG.html
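As a rough sketch of what that looks like with the AWS CLI (group ID, ACL ID, and CIDR ranges are placeholders):

    # Allow HTTPS to the ELB's security group only from a trusted range...
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 443 \
      --cidr 198.51.100.0/24
    # ...and/or add a subnet-level Network ACL rule denying a known-bad address.
    aws ec2 create-network-acl-entry \
      --network-acl-id acl-0123456789abcdef0 \
      --ingress --rule-number 100 --rule-action deny \
      --protocol tcp --port-range From=443,To=443 \
      --cidr-block 203.0.113.99/32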
It's common to run an application server behind a reverse proxy. Your reverse proxy is a layer you can use to add DoS protection before traffic gets to your application server. For Nginx, you can look at the rate limiting module as something that could help.
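A minimal sketch of that rate limiting, assuming the conf.d directory is included in the http context as on most distributions (zone name, rate, and burst are arbitrary placeholders):

    # Define a per-client-IP zone in the http context...
    sudo tee /etc/nginx/conf.d/ratelimit.conf <<'EOF'
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
    EOF
    # ...and apply it inside the relevant server or location block:
    #   limit_req zone=perip burst=20 nodelay;
    sudo nginx -t && sudo systemctl reload nginx

Note that behind an ELB you would also need nginx's realip module (set_real_ip_from plus real_ip_header X-Forwarded-For) so the limit keys on the client address rather than on the load balancer's.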
You could set up an EC2 host and run HAProxy there yourself (that's what Amazon is using anyway!). Then you can apply your iptables filters on that system.
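On that HAProxy host the usual iptables blacklisting works again, because connections arrive with their real source address (the address below is a placeholder):

    # Drop all traffic from a misbehaving address before it reaches HAProxy.
    sudo iptables -I INPUT -s 203.0.113.99 -j DROP
    # List the current rules to confirm.
    sudo iptables -L INPUT -n --line-numbers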
Here's a tool I made for those looking to use Fail2Ban on AWS with Apache, ELB, and ACLs: https://github.com/anthonymartin/aws-acl-fail2ban