link_to strips port from site hosted in container - laravel

This is a bit of a tricky situation. I'm testing deployment of a Laravel application which I've recently containerised. I've built a container based on the official PHP image, which runs Apache internally to serve the application. If I simply run this container bound to port 5000, then link_to('/login') correctly generates a link pointing to localhost:5000/login.
However, now I'm testing an actual deployment scenario, where this container is running behind an nginx load balancer. I've set up a VM using Vagrant, which is running two containers: one for the nginx load balancer, and one for the Apache/Laravel application. I access the VM's port 80 on my host's port 7000.
In this situation, link_to('/login') now generates links pointing to localhost/login. Where did the port go? It should link to localhost:7000/login, because that's the port I'm accessing the page on.
How can I debug this? I've looked into the implementation of link_to, but I suspect the problem lies elsewhere.
EDIT
I've just discovered that, in addition, if I serve the site over HTTPS (terminated at nginx; Apache still does everything over HTTP), the scheme is also stripped from links created by link_to. Instead of https://localhost:7443/login, the link looks like localhost/login.

The solution is to use something like fideloper/proxy to properly handle the proxy headers added by nginx. I thought I had done this, but I'd forgotten to register the facade in app/config/app.php.
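For anyone hitting the same issue: trusted-proxy packages like fideloper/proxy can only rebuild the scheme and port if the load balancer actually forwards them in the standard X-Forwarded-* headers. A minimal nginx sketch of that side of the setup (the upstream name laravel-app is an assumption):

    # nginx load balancer: forward what Laravel needs to rebuild absolute URLs
    location / {
        proxy_pass http://laravel-app:80;
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port  $server_port;
    }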

Related

Fetch data from getServerSideProps of NextJs app when another API server is also running on localhost

According to the NextJs documentation:
You should not use fetch() to call an API route in getServerSideProps. Instead, directly import the logic used inside your API route. You may need to slightly refactor your code for this approach.
Fetching from an external API is fine!
So we cannot use NextJs built-in API routes in getStaticProps or getServerSideProps. But when I use another API service based on the Laravel framework as the back-end server and fetch from it with Axios in the getServerSideProps function, I get Error: connect ECONNREFUSED 127.0.0.1:8080.
It should also be noted that everything is fine if the API server is hosted outside our development machine. In other words, the error only occurs in the development environment, when both the Laravel back-end server and the NextJs front-end server are on localhost.
Could you help me out finding a solution for this problem?
When you use localhost or 127.0.0.1 inside a Docker container, it points to that container only, not to the host computer.
There are two pretty easy solutions.
Create a Docker network, add both containers to it, and use the container name instead of the IP (https://www.tutorialworks.com/container-networking/); see the sketch after this list
Use host networking for this container: https://docs.docker.com/network/host/
Edit: Added a link for a tutorial on how to create and use docker networks
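A minimal sketch of the first option, with hypothetical container names laravel-api and nextjs-app:

    # create a user-defined network and attach both containers to it
    docker network create dev-net
    docker network connect dev-net laravel-api
    docker network connect dev-net nextjs-app

    # from inside nextjs-app the API is now reachable by container name,
    # e.g. http://laravel-api:8080 instead of http://127.0.0.1:8080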
So, as @tperamaki's answer already mentions: "When you use localhost or 127.0.0.1 inside a Docker container, it points to that container only, not to the host computer."
You can use the IP of your machine on your local network, for example 192.168.0.10:8080 instead of 127.0.0.1:8080.
But you can also connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host.
In your case, just add the port where the other container is listening:
http://host.docker.internal:8080
In this section of the documentation, Networking features in Docker Desktop for Mac, they explain how to connect from a container to a service on the host. Note that a Mac is mentioned there, but I tried it on a Linux distro and it also works (and this other answer mentions that it works on Windows too).
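As a minimal sketch for the Next.js case above, assuming the Laravel API listens on the host's port 8080 (the endpoint path is hypothetical):

    import axios from 'axios';

    // Runs inside the Next.js container; host.docker.internal resolves to the
    // host machine, whereas 127.0.0.1 would point back at the container itself.
    export async function getServerSideProps() {
      const res = await axios.get('http://host.docker.internal:8080/api/items');
      return { props: { items: res.data } };
    }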

Move application from Homestead to Docker

My application consists of three domains:
example.com
admin.example.com
partner.example.com
All of these domains are handled by the same Laravel app. Each domain has its own controllers and view. Models and other core functionalities are shared between all three domains.
Currently my local dev environment is built with Homestead (based on Vagrant), where each local domain (example.test, admin.example.test and partner.example.test) points to the same directory (e.g. /home/vagrant/app/public).
Because of deployment problems regarding different versions of the OS, NPM, PHP, etc., I want to move to Docker. I've read a lot of articles about multiple domains or apps with Docker. Best practice seems to be to set up an nginx reverse proxy which redirects all incoming requests to the desired app. Unfortunately, I haven't found examples for my case, where all domains point to the same application.
If possible I would avoid having the same repository cloned three times for each docker container running one specific part of the app.
So what would be the best approach to set up a docker environment?
I created a simple gist for you showing how I would do it:
https://gist.github.com/karlisabele/f7d91594c004e227e504473ce2c60508
The nginx config file is based on the Laravel documentation (https://laravel.com/docs/5.8/deployment#nginx), and of course in production you would also want to handle SSL and map port 443 as well, but this should serve as a POC for you.
Notice that in the nginx configuration I use the php-fpm service name to pass the request to the php-fpm container. In Docker, service names can be used as host names for the corresponding service, so the line fastcgi_pass php-fpm:9000; means that you are passing the request to port 9000 of the php-fpm container (the default port the fpm image listens on).
Basically, what you want to do is simply define in nginx that all three of your subdomains are handled by the same server configuration, as sketched below. Then nginx simply passes the request to php-fpm to actually process it.
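A minimal sketch of such a server block, assuming the Laravel public directory is mounted at /var/www/html/public (the domain names are taken from the question):

    server {
        listen 80;
        server_name example.test admin.example.test partner.example.test;
        root /var/www/html/public;
        index index.php;

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # one php-fpm container serves all three domains
        location ~ \.php$ {
            fastcgi_pass php-fpm:9000;
            fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }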
To test, you can just copy the two files from the gist into your project directory, replace YOUR_PROJECT_FOLDER in the docker-compose.yml file with the actual location of your project (it can be simply .:/var/www/html if you place the docker-compose.yml in the root of your project), then run docker-compose up -d. Add the domains to your hosts file (/etc/hosts on Linux/macOS) and you should be able to visit example.test and see your site.
Note: Depending on where your database is located, you might need to change its host if it's localhost at the moment, because PHP will try to connect to the MySQL server from inside the php-fpm container, which of course does not have its own MySQL server running.

Docker on Windows server and multiple websites listening port 80 and 443

When installing ASP.NET Core apps on a Windows machine, I used to install the websites within IIS, use its bindings to route requests to the correct web application depending on the URL, and use Let's Encrypt to create the SSL certificates.
Now I want to start shipping my applications using Docker. The samples show how to easily create a dockerized ASP.NET Core project, but that's where most of them end. So in the end I've got an ASP.NET application in Docker listening on port 5000.
Are there any suggestions or resources showing how to set this up on a production system?
multiple websites listening on the standard ports 80 and 443, forwarding to the correct Docker image
SSL certificate handling
Set up nginx as a front end. It is a world-class solution, used by top-traffic sites as a front end for incoming requests.
Among other features, it does:
Redirecting based on plenty of rules
SSL management (you can use unencrypted connections behind it)
Load balancing
It is free and available as a Docker image.
So, you expose only nginx outside your Docker network, and make it route all your traffic inside, as sketched below.
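A minimal sketch of host-based routing, with hypothetical container names app1 and app2; only the nginx container publishes ports to the outside:

    server {
        listen 80;
        server_name app1.example.com;
        location / {
            proxy_pass http://app1:5000;   # container name resolves on the Docker network
            proxy_set_header Host $host;
        }
    }

    server {
        listen 80;
        server_name app2.example.com;
        location / {
            proxy_pass http://app2:5000;
            proxy_set_header Host $host;
        }
    }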
Set up a reverse proxy like nginx; even with IIS you can redirect to the corresponding Docker service on a particular port, fanning traffic out to the respective ports.
Image: https://blogs.msdn.microsoft.com/friis/2016/08/25/setup-iis-with-url-rewrite-as-a-reverse-proxy-for-real-world-apps/

How to point shared load balancer http/https port to specific port of my Jelastic Docker container

My app listens on port 8082 (or some other port).
I want to configure the shared load balancer to route all requests from port 443 (HTTPS) to this port. As far as I understand, this is done by some Jelastic magic during container creation. So far so good; everything worked fine.
But after I updated the base image for my Docker app (from openjre-152 to openjre-171 or something like that), the SLB stopped re-routing traffic to my app.
Is there a way to change/set up this internal configuration manually, without re-creating the environment?
You should use JELASTIC_EXPOSE as described in the docs; however, it's not made clear there that your traffic will be redirected and SSL-terminated by the SLB.
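As a sketch of one way to set it, assuming JELASTIC_EXPOSE simply needs to carry the port your app listens on (it can also be set as an environment variable in the Jelastic dashboard instead of rebuilding the image):

    # Dockerfile excerpt: tell the Jelastic SLB which container port to route to
    ENV JELASTIC_EXPOSE=8082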

Port forward requests from 80 to respective ports

I have many Spring Boot jars running on different ports, say 9087-9090. I have a domain, say
mydomain.com.
I can access mydomain.com:9087/ and use the application, and likewise mydomain.com:9088/ for another application, but how can I use them via just mydomain.com and still map requests to the desired ports? What is the technical term for this?
I use DigitalOcean hosting and have an Ubuntu 14.04 x64 box. I'm running Java 7 on it.
You need a reverse proxy (a.k.a. front-end load balancer) with URL rewriting. I'm not sure what your hosting solution offers or permits, but you could try nginx or Apache httpd if you want something running locally. There are also service providers you might be able to use outside your host.
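A minimal nginx sketch, assuming each app gets its own subdomain (the subdomain names are hypothetical):

    # one server block per Spring Boot app; nginx listens on 80 and proxies inward
    server {
        listen 80;
        server_name app1.mydomain.com;
        location / {
            proxy_pass http://127.0.0.1:9087;
            proxy_set_header Host $host;
        }
    }

    server {
        listen 80;
        server_name app2.mydomain.com;
        location / {
            proxy_pass http://127.0.0.1:9088;
            proxy_set_header Host $host;
        }
    }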
