Run Two Laravel applications with docker on same server pointing to subdomain - laravel

How do I run two Laravel Docker apps on the same server using one container per app and point to two domains?
Both apps are on the same AWS ec2 server
eg:
container one points to -> one.mydomain.com
container two points to -> two.mydomain.com
I'm new to this.
Is it even possible?
An Apache solution would be preferable.

Yes, it is possible, and there is more than one way to do it. I will suggest two approaches:
Using an AWS load balancer with host-based routing, publishing a different host port for each app
Nginx as a reverse proxy
With the AWS approach, you run your containers using ECS:
Create a load balancer
Create a cluster
Create a service
Attach the service to the load balancer and configure host-based routing, so that app1.example.com routes to app1
Repeat the above steps for app2
This is the standard way to deal with containers on AWS.
You can read more about this in gentle-introduction-to-how-aws-ecs-works-with-example-tutorial and Run containerized applications in production.
With Nginx, you need to manage everything yourself:
Run both containers on EC2
Install Nginx
Update the Nginx configuration to route traffic based on the Host header
Update the DNS entries to point to the EC2 instance's public IP. Both records, for example app1.example.com and app2.example.com, will point to the same EC2 instance, but Nginx will decide which app serves the request.
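For the first step, running both containers on different host ports could look like the docker-compose.yml sketched below (the service names, image names, and host ports 8081/8082 are assumptions; adjust them to your setup):

```yaml
# docker-compose.yml - sketch; image names and host ports are placeholders
version: "3"
services:
  app1:
    image: my-laravel-app1   # hypothetical image serving one.mydomain.com
    ports:
      - "8081:80"            # host port 8081 -> container port 80
  app2:
    image: my-laravel-app2   # hypothetical image serving two.mydomain.com
    ports:
      - "8082:80"            # host port 8082 -> container port 80
```

With this layout, HOSTPORT in the server blocks below becomes 8081 for app1 and 8082 for app2.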
server {
    server_name app1.example.com;
    location / {
        proxy_pass http://127.0.0.1:HOSTPORT;
    }
}

server {
    server_name app2.example.com;
    location / {
        proxy_pass http://127.0.0.1:HOSTPORT;
    }
}
I would recommend these two approaches, and Nginx over Apache, but if you are interested you can check this apache-vhosts guide.
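Since the question states a preference for Apache, the same host-based routing can be sketched with two name-based virtual hosts (a sketch, not a drop-in config: it assumes mod_proxy and mod_proxy_http are enabled, and the host ports 8081/8082 are placeholders for whatever ports you published your containers on):

```apache
# sketch of /etc/apache2/sites-available/apps.conf - ports are placeholders
<VirtualHost *:80>
    ServerName one.mydomain.com
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8081/
    ProxyPassReverse / http://127.0.0.1:8081/
</VirtualHost>

<VirtualHost *:80>
    ServerName two.mydomain.com
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8082/
    ProxyPassReverse / http://127.0.0.1:8082/
</VirtualHost>
```

Apache chooses the virtual host by the request's Host header, which is the same mechanism Nginx's server_name uses above.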

Related

Application deployed on kubernetes in aws ec2 instances is not accessible without nginx external ip port number

I have deployed microservice-based applications in a Kubernetes set-up on EC2 instances.
My web application is accessible if I add the port number of the ingress-nginx external IP to the URL, but I want it to be accessible without the port number.
The same deployment works without a port number in the on-prem setup.
All ports are open in the AWS security settings.

Amazon aws route53, redirect subdomain to ec2 app running under specific port

I have a domain name, mydomain.com, registered on Amazon Route 53.
I have an EC2 instance on which I installed a Docker Portainer image on port 9000.
My Docker image runs perfectly under the EC2 public IP address:
http://xxx.xxx.xxx.xxx:9000
What I want now is to create a subdomain, portainer.mydomain.com, and point it to my EC2 Portainer instance.
When I try to create a new record set portainer.mydomain.com and point it to my Docker image instance, I can't specify the port value.
I know I'm missing something; I'm a beginner with DNS domains.
Route 53 is a DNS service. Its job is to resolve a domain to an IP address; it has nothing to do with ports.
But there are some alternatives:
Add a secondary IP to the instance to host multiple websites and bind them to port 80. You add an additional IP by attaching an elastic network interface (ENI).
Add an Application Load Balancer with host-based routing (you will get much more control; you can even do path-based routing as well). See: Listeners for Your Application Load Balancers - Elastic Load Balancing
S3 redirection (Route 53 Record Set on Different Port)
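Another option in the same spirit, not listed above, is a reverse proxy on the instance itself: point portainer.mydomain.com at the EC2 public IP in Route 53 and let Nginx on port 80 forward to Portainer on port 9000 (a sketch; it assumes Nginx is installed on the instance and Portainer is published on host port 9000):

```nginx
server {
    listen 80;
    server_name portainer.mydomain.com;
    location / {
        proxy_pass http://127.0.0.1:9000;  # Portainer published on host port 9000
    }
}
```

The browser then talks to port 80, so no port needs to appear in the URL or the DNS record.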

SSL in EC2 without ELB

I am running a Spring Boot application in EC2. I want to make the API calls as HTTPS instead of HTTP.
This is what I did:
Bought a domain on GoDaddy and configured it in Route 53.
Created a cert from AWS certificate manager.
Created a Load Balancer and added the cert.
In route 53 directed my traffic to ELB.
The above is working fine now. I have only one instance, and the ELB is used only for SSL, but I want to get rid of the ELB as it is costing me too much.
Is there any other way I can make the API calls HTTPS for a Spring Boot application running on EC2, without an ELB?
An ELB can be quite expensive, and most of all useless if you have only one instance.
Try to put CloudFront in... well... front of your instance. You get the benefit of managing AWS certificates in the same way you do with the load balancer, and you can also take advantage of caching and edge locations.
You can also point Route 53 to CloudFront: just add a CNAME to your hosted zone that references the CloudFront DNS name.
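That CNAME can be created, for example, with the AWS CLI's route53 change-resource-record-sets command and a change batch like the one below (the record name and the CloudFront domain are placeholders; substitute your own):

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "api.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "d111111abcdef8.cloudfront.net" }]
      }
    }
  ]
}
```

You would pass this file via --change-batch, together with your hosted zone ID, when invoking the command.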

Move application from homestead to docker

My application consists of three domains:
example.com
admin.example.com
partner.example.com
All of these domains are handled by the same Laravel app. Each domain has its own controllers and view. Models and other core functionalities are shared between all three domains.
Currently my local dev-environment is built with Homestead (based on Vagrant), where each local domain (example.test, admin.example.test and partner.example.test) points to the same directory (e.g. /home/vagrant/app/public).
Because of deployment problems regarding different versions of OS, NPM, PHP, etc. I want to move to docker. I've read a lot of articles about multiple domains or apps with docker. Best practice seems to be to set up an Nginx reverse proxy which redirects all incoming requests to the desired app. Unfortunately, I haven't found examples for my case where all domains point to the same application.
If possible I would avoid having the same repository cloned three times for each docker container running one specific part of the app.
So what would be the best approach to set up a docker environment?
I created a simple gist for you to look at, showing how I would do it:
https://gist.github.com/karlisabele/f7d91594c004e227e504473ce2c60508
The nginx config file is based on the Laravel documentation (https://laravel.com/docs/5.8/deployment#nginx), and of course in production you would also want to handle SSL and map port 443 as well, but this should serve as a POC for you.
Notice that in the nginx configuration I use the php-fpm service name to pass the request to the php-fpm container. In Docker, service names can be used as hostnames for the corresponding service, so the line fastcgi_pass php-fpm:9000; means that you are passing the request to the php-fpm container's port 9000 (the default port the fpm image listens on).
Basically, what you want to do is simply define in nginx that all three of your subdomains are handled by the same server configuration. Then nginx simply passes the request to php-fpm to actually process it.
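The core of that idea can be sketched as a single server block (a sketch only; the document root, domains, and the php-fpm service name follow the gist's conventions and are assumptions about your setup):

```nginx
server {
    listen 80;
    # all three subdomains handled by one server block, one shared codebase
    server_name example.test admin.example.test partner.example.test;
    root /var/www/html/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # "php-fpm" resolves to the php-fpm container via Docker's service DNS
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```

The Laravel app itself can then decide, based on the request's host, which controllers and views to use.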
To test, you can just copy the two files from the gist into your project directory, replace YOUR_PROJECT_FOLDER in the docker-compose.yml file with the actual location of your project (it can be simply .:/var/www/html if you place the docker-compose.yml in the root of your project), then run docker-compose up -d. Add the domains to your hosts file (/etc/hosts on Linux/Mac) and you should be able to visit example.test and see your site.
Note: Depending on where your database is located, you might need to change its host if it's localhost at the moment, because the app will try to connect to a MySQL server from the php-fpm container, which of course does not have its own mysql-server running.

How to serve a Heroku app with Google cloud fixed IP

I have a Heroku app that uses nodejs to serve a static web page https://foda-app.herokuapp.com
Heroku does not provide a fixed IP and I really need one for a personal project, so I'm trying to use Google Cloud's VPC reserved static external IP addresses.
I was able to reserve the IP but I'm not sure how should I link it with my Heroku app, since the Google Cloud offers so many options and services. I just wanna redirect all traffic from this IP to the Heroku app and I can't find a simple way to do it.
I need to create a global forwarding rule but I can't find a way to achieve this without using a lot of other services. Do I need a VM instance? Do I need a load balancer? Should I use VPC routes or Cloud DNS? I'm overwhelmed with all those services.
Please can someone tell me if it's possible, and what is the simplest way to achieve this?
You can achieve this in two ways:
Use a third-party add-on on Heroku, e.g. https://devcenter.heroku.com/articles/quotaguardstatic
Set up a proxy server on the static IP and forward all traffic to the desired Heroku URL.
Details for option 2:
Assign a static external IP address to a new VM instance: https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address
Install Nginx/HAProxy on the newly procured VM.
Set up a configuration like the one below:
upstream heroku-1 {
    server foda-app.herokuapp.com fail_timeout=15s;
}

server {
    listen 80;
    server_name yourdomain.example;  # or the server's IP address
    location / {
        proxy_pass http://heroku-1;
        # Heroku routes requests by Host header, so forward the app's hostname
        proxy_set_header Host foda-app.herokuapp.com;
        proxy_read_timeout 300;
    }
}
Change the DNS mapping for your domain (if any) to point to the static IP.
