How to load balance with Nginx between a Linode and an EC2 instance?
I don't want to use subdomains like
www1.abc.com
www2.abc.com
So there are different IP addresses, but I just want to see abc.com from the "outside".
As per: http://mickeyben.com/blog/2009/12/30/using-nginx-as-a-load-balancer/
You would need one instance of Nginx running somewhere.
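A minimal sketch of what that Nginx instance's configuration could look like, assuming placeholder backend IPs (203.0.113.10 for the Linode, 203.0.113.20 for the EC2 instance), plain HTTP, and an illustrative upstream name abc_backend:

upstream abc_backend {
    # Default round-robin between the two backends
    server 203.0.113.10;   # Linode (placeholder IP)
    server 203.0.113.20;   # EC2 instance (placeholder IP)
}

server {
    listen 80;
    server_name abc.com www.abc.com;

    location / {
        # Forward every request to one of the upstream servers
        proxy_pass http://abc_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Point the abc.com DNS record at the server running this Nginx instance; it will then distribute requests between the two backends, and visitors only ever see abc.com.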
Related
I have a domain name mydomain.com registered on amazon route 53.
I have an EC2 instance in which I installed a docker portainer image under 9000 port.
My docker image run perfectly under ec2 public ip address:
http://xxx.xxx.xxx.xxx:9000
What I want now is to create a subdomain, portainer.mydomain.com, and point it to my EC2 Portainer instance.
When I try to create a new record set for portainer.mydomain.com and point it to my Docker image instance, I can't specify the port value.
I know I'm missing something; I'm just getting started with DNS and domains.
Route 53 is a DNS service: its job is to resolve a domain name to an IP address. It has nothing to do with ports.
But there are some alternatives:
Add a secondary IP to the instance so you can host multiple websites and bind each to port 80. You add an additional IP by attaching an elastic network interface (ENI).
Add an Application Load Balancer with host-based routing (you get much more control; you can do path-based routing as well). See: Listeners for Your Application Load Balancers - Elastic Load Balancing.
S3 redirection (Route 53 Record Set on Different Port)
How do I run two Laravel Docker apps on the same server using one container per app and point to two domains?
Both apps are on the same AWS EC2 server, e.g.:
container one points to -> one.mydomain.com
container two points to -> two.mydomain.com
I'm new to this.
Is it even possible?
An Apache solution would be preferable.
Yes, it is possible, and there are different ways to do it. I will suggest two approaches:
An AWS load balancer with host-based routing and a different published port for each app
Nginx
With the AWS approach, you need to run your containers using ECS:
Create Load balancer
Create cluster
Create service
Attach the service to the load balancer and update the load balancer routing to host-based routing, so that app1.example.com routes to app1
Repeat the above steps for app2.
The above is the standard way to deal with containers on AWS.
You can read more in gentle-introduction-to-how-aws-ecs-works-with-example-tutorial and Run containerized applications in production.
With Nginx, you need to manage everything yourself:
Run both containers on EC2
Install Nginx
Update the Nginx configuration to route traffic based on the server name (Host header)
Update the DNS entries to point to the EC2 instance's public IP. Both names, for example app1.example.com and app2.example.com, will point to the same EC2 instance, but Nginx will decide which app serves the request:
server {
    server_name app1.example.com;

    location / {
        # Requests for app1.example.com go to the host port published by container one
        proxy_pass http://127.0.0.1:HOSTPORT;
    }
}

server {
    server_name app2.example.com;

    location / {
        # Requests for app2.example.com go to the host port published by container two
        proxy_pass http://127.0.0.1:HOSTPORT;
    }
}
I would recommend these two approaches (and Nginx over Apache), but if you are interested you can check apache-vhosts.
I have Jenkins running on EC2 myip:8080
I have a subdomain jenkins.mydomain.com that I want to point to my jenkins running on the EC2 instance.
What is the best approach to accomplish this?
Obviously I want it to load over HTTPS, so I was thinking of perhaps creating a CloudFront distribution that would point to an S3 bucket which just redirects to the EC2 instance.
In your opinion, what is the best approach to accomplish this? Thanks
jenkins.mydomain.com —> nginx (SSL termination) —> EC2 Jenkins
If you wish you can also encrypt traffic from nginx to backend servers.
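A minimal sketch of that Nginx SSL-termination layer, assuming Nginx runs on the same host as Jenkins (adjust the proxy_pass address if it runs elsewhere), Jenkins listens on 127.0.0.1:8080, and the certificate paths are placeholders for your own certificate:

server {
    listen 443 ssl;
    server_name jenkins.mydomain.com;

    # Placeholder certificate paths - replace with your own certificate and key
    ssl_certificate     /etc/ssl/certs/jenkins.mydomain.com.crt;
    ssl_certificate_key /etc/ssl/private/jenkins.mydomain.com.key;

    location / {
        # TLS is terminated here; traffic to Jenkins itself is plain HTTP
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}

server {
    # Redirect plain HTTP to HTTPS
    listen 80;
    server_name jenkins.mydomain.com;
    return 301 https://$host$request_uri;
}

With this in place, point the jenkins.mydomain.com record at this host and you no longer need to expose port 8080 publicly.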
On an AWS EC2 ELB security profile, I need a couple of IP addresses to be able to access only certain pages of my website. Is this possible? The other IP addresses will have access to the full website. Is this achievable?
This is not possible as a configuration in the Load Balancer because the Load Balancer simply distributes requests to your application servers.
Your application will need to enforce such functionality.
I have a test application running at
http://ec2-34-215-196-193.us-west-2.compute.amazonaws.com/
(This is a test application; it won't be live for long.) When I try to add a CNAME to this, as in the screenshot below, a trailing dot (.) is added by the DNS system.
However, my app seems to be accessible only via the us-west-2.compute.amazonaws.com hostname.
I can get it to resolve with either one of them.
But adding anything else does not seem to resolve with a CNAME; it gives 503 Service Unavailable.
I am using AWS EC2 to host the app with an HAProxy load balancer.
Using Google Domains for DNS Name.
Any suggestions for troubleshooting this problem?
All DNS entries have a dot at the end, like subdomain.domain.com.
It's not recommended to create a CNAME to your EC2 instance's public hostname, because that IP may change over time and is not reassignable; that's what Elastic IPs are made for. Just create an Elastic IP, assign it to your EC2 instance, and add it as an A record at your DNS provider.
Amazon AWS documentation
First create an Elastic IP and assign it to your instance. Then create an A record and point it at that IP. Your site should work normally.