Move application from Homestead to Docker - Laravel

My application consists of three domains:
example.com
admin.example.com
partner.example.com
All of these domains are handled by the same Laravel app. Each domain has its own controllers and views. Models and other core functionality are shared between all three domains.
Currently my local dev environment is built with Homestead (based on Vagrant), where each local domain (example.test, admin.example.test and partner.example.test) points to the same directory (e.g. /home/vagrant/app/public).
Because of deployment problems caused by differing versions of the OS, NPM, PHP, etc., I want to move to Docker. I've read a lot of articles about running multiple domains or apps with Docker. Best practice seems to be to set up an nginx reverse proxy which routes all incoming requests to the desired app. Unfortunately, I haven't found examples for my case, where all domains point to the same application.
If possible, I would like to avoid cloning the same repository three times, once for each Docker container running one specific part of the app.
So what would be the best approach to set up a docker environment?

I created a simple gist for you to look at that shows how I would do it:
https://gist.github.com/karlisabele/f7d91594c004e227e504473ce2c60508
The nginx config file is based on the Laravel documentation (https://laravel.com/docs/5.8/deployment#nginx). Of course, in production you would also want to handle SSL and map port 443 as well, but this should serve as a proof of concept for you.
Notice that in the nginx configuration I use the php-fpm service name to pass the request to the php-fpm container. In Docker, service names can be used as host names for the corresponding services, so the line fastcgi_pass php-fpm:9000; means that you are passing the request to port 9000 of the php-fpm container (the default port the fpm image listens on).
Basically, what you want to do is define in nginx that all three of your domains are handled by the same server configuration; nginx then simply passes the request on to php-fpm to actually process it.
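For illustration, the relevant part of such a server block could look roughly like this (a sketch based on the Laravel docs, assuming the code is mounted at /var/www/html; adjust names and paths to match the gist):

server {
    listen 80;
    # one server block answers for all three local domains
    server_name example.test admin.example.test partner.example.test;
    root /var/www/html/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # "php-fpm" is the docker-compose service name; 9000 is the fpm default port
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }
}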
To test, you can just copy the two files from the gist into your project directory, replace YOUR_PROJECT_FOLDER in the docker-compose.yml file with the actual location of your project (it can be simply .:/var/www/html if you place the docker-compose.yml in the root of your project), then run docker-compose up -d. Add the domains to your hosts file (/etc/hosts on Linux/macOS) and you should be able to visit example.test and see your site.
Note: Depending on where your database is located, you might need to change its host if it's localhost at the moment, because the app will try to connect to a MySQL server from inside the php-fpm container, which of course does not have its own MySQL server running.
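A minimal docker-compose sketch along these lines (not the gist verbatim; the service names, image tags and credentials here are assumptions) could look like:

version: '3.6'
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - .:/var/www/html                            # project root
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php-fpm
  php-fpm:
    image: php:7.3-fpm                             # in practice, use an image with the extensions your app needs (pdo_mysql etc.)
    volumes:
      - .:/var/www/html
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: app
      MYSQL_ROOT_PASSWORD: secret                  # placeholder credentials

With a setup like this, DB_HOST in Laravel's .env would point at the service name mysql rather than localhost.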

Related

Communicating between ddev projects via http/s

I've been using DDev for the last six months or so. It has greatly improved my efficiency. Thanks!
I'm looking for a better way to integrate multiple sites running in separate containers. The recommended solution is to use the internal container references (e.g. ddev-projectname-web). This does not work for one of my projects because the destination site relies on a matching hostname for authentication.
Scenario: SiteA communicates with SiteB via REST.
SiteA
Project-name: sitea
Hostname: sitea.ddev.site
Container reference: ddev-sitea-web
SiteB
Project-name: siteb
Hostname: siteb.ddev.site
Container reference: ddev-siteb-web
In order to authenticate with SiteB (TCP or REST), the hostname must be consistent, in this case siteb.ddev.site, so ddev-siteb-web does not work.
My current workaround is to use the SiteB hostname in REST calls from SiteA AND add the internal IP to /etc/hosts on the SiteA web container (something like 172.1.0.1 siteb.ddev.site). I'm looking for a better solution, because the hosts configuration is lost when I stop/restart SiteA, and the IP changes when I stop/restart SiteB.
One theoretical option is a configuration setting that specifies another running docker instance and automatically adds that IP address and hostname to the integrated site's /etc/hosts file.
Thanks!
Different projects can talk to each other in two ways.
The first way is by using the container name directly, and I think that's what you were doing here.
But there's an alternative way (see the FAQ in the latest DDEV docs). You just need to add a docker-compose.comm.yaml to the client project's .ddev directory like this:
version: '3.6'
services:
  web:
    external_links:
      - "ddev-router:otherproject.ddev.site"
That way you can use the canonical name of the other site for communications. This only works for HTTP/S traffic, because it's going through the ddev-router, which is a reverse proxy.
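To verify the link, you can shell into the client project's web container and request the other site by its canonical name (the endpoint path below is just a placeholder):

# run from the SiteA project directory
ddev ssh
curl https://siteb.ddev.site/api/ping    # placeholder endpoint; add -k if the certificate isn't trusted inside the container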

Problem deploying nginx on Heroku in front of a Go server

I want to deploy nginx to Heroku as a reverse proxy in front of my Go application.
I made a config file for nginx, but the samples presented here (https://github.com/heroku/heroku-buildpack-nginx) do not make it clear which port to specify in the proxy_pass directive to forward requests to my application. Those examples listen on a unix socket instead of over HTTP:
upstream app_server {
    server unix:/tmp/nginx.socket fail_timeout=0;
}
But my application communicates over HTTP.
In addition, Heroku assigns random ports, which the application must read from the PORT environment variable. However, that variable is now used in the nginx config. What port should my application run on now? If you specify your own port, this will not work, since Heroku will say that the port is already in use.
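For concreteness, the pattern I think the buildpack expects (and I may well be wrong about this) is that nginx binds to Heroku's $PORT through the ERB template, while the application listens on some fixed internal address that only nginx ever talks to, e.g.:

# config/nginx.conf.erb (fragment) - my guess; port 3001 is an arbitrary internal choice
upstream app_server {
    server 127.0.0.1:3001 fail_timeout=0;    # only nginx talks to this port
}

server {
    listen <%= ENV["PORT"] %>;               # the buildpack fills in Heroku's port here
    server_name _;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://app_server;
    }
}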
I am completely discouraged by how difficult it is to deploy a simple environment for my Heroku application. None of the Heroku instructions gives a comprehensive understanding of how this can be done.
Please guide me on the right path.
P.S. After a couple of days of searching for an answer, I have found only a lot of questions similar to mine and not a single answer.
Update.
I did what is described in this post: Springboot application with nginx as proxy deploy on Heroku
But the author did not explain what value she set for the APP_PORT variable. I set it to 3001 and got the same thing as here:
Nginx and Heroku. Serving Static Files

Docker on Windows Server and multiple websites listening on ports 80 and 443

When installing ASP.NET Core apps on a Windows machine, I used to set up the websites in IIS, use the bindings there to route requests to the correct web application depending on the URL, and use Let's Encrypt to create the SSL certificates.
Now I want to start shipping my applications using Docker. The samples show how to easily create a dockerized ASP.NET Core project, but that's where most of them end. So in the end I've got an ASP.NET Core application running in Docker, listening on port 5000.
Are there any suggestions or resources showing how to set this up on a production system, with:
multiple websites listening on the standard ports 80 and 443 and forwarding to the correct Docker container
SSL certificate handling
Set up nginx as a front end. It is a world-class solution, used by high-traffic sites as a front end for incoming requests.
Among other things, it handles:
Redirecting based on plenty of rules
SSL management (you can use unencrypted connections behind it)
Load balancing
It is free and available as a Docker image.
So you expose only nginx outside your Docker network and make it route all your traffic inside.
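As a rough sketch (hypothetical host and service names; both apps listen on port 5000 inside the Docker network, as in the question), the routing part of the nginx config could look like this:

# nginx is the only container publishing 80/443; the ASP.NET Core apps stay internal
server {
    listen 80;
    server_name app1.example.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://webapp1:5000;    # docker-compose service name of the first app
    }
}

server {
    listen 80;
    server_name app2.example.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://webapp2:5000;    # second app
    }
}

SSL would then be terminated in these server blocks (listen 443 ssl with the Let's Encrypt certificates), so the apps behind nginx can keep speaking plain HTTP.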
Set up a reverse proxy such as nginx; in IIS, too, you can redirect to the corresponding Docker service exposed on a particular port and fan traffic out to the respective ports.
See: https://blogs.msdn.microsoft.com/friis/2016/08/25/setup-iis-with-url-rewrite-as-a-reverse-proxy-for-real-world-apps/

Laravel 5.4 - Change URL name

So to start explaining things, I would first like to enumerate what I'm using.
I'm using Laravel 5.4 and XAMPP.
I want to deploy my project locally only (not on an online server). So we're starting the project's server with php artisan serve --host 192.168.254.11 --port 80, where the IP stated there is the IP of the server machine.
That way, we have to type the URL 192.168.254.11 to access the project. Is there any way to change that without getting an online server?
EDIT: I managed to change it by tweaking the hosts file. But for the other users to be able to access it, they have to do the same. Is there a way to change only the server's hosts file and still have the site accessible to the other users at the same time?

link_to strips port from site hosted in container

This is a bit of a tricky situation. I'm testing deployment of a Laravel application which I've recently containerised. I've made a container based on php, which runs Apache inside it to serve the application. If I simply run this container, bound to port 5000, then link_to('/login') correctly generates a link pointing to localhost:5000/login.
However, now I'm testing an actual deployment scenario, where this container is running behind an nginx load balancer. I've set up a VM using Vagrant, which is running two containers: one for the nginx load balancer, and one for the Apache/Laravel application. I access the VM's port 80 on my host's port 7000.
In this situation, link_to('/login') now generates links pointing to localhost/login. Where did the port go? It should link to localhost:7000/login, because that's the port I'm accessing the page on.
How can I debug this? I've tried looking into the implementation of link_to, but I suspect the problem is elsewhere.
EDIT
I've just discovered that, in addition, if I serve the site over HTTPS (terminated at nginx; Apache still does everything over HTTP), the scheme and port are also stripped from links created by link_to. Instead of https://localhost:7443/login, the link looks like localhost/login.
The solution is to use something like fideloper/proxy to properly handle the proxy headers added by Nginx. I thought I had done this, but I'd forgotten to add the facade to app/config/app.php.
