When Apache is installed directly on the host, I add an internal hostname in "C:\Windows\System32\drivers\etc\hosts" and use virtual hosts to easily access different projects locally, say http://foo.test and http://bar.test.
When using a Docker container for each project, I can access each project by assigning it a host port in the docker-compose file.
I hope that Docker has some internal tool to make containers reachable via hostname.
Using a reverse proxy can be a solution, as described in these relatively old but brilliant articles:
https://www.alexecollins.com/developing-with-docker-proxy-container/
http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
But because I believe this is a very common development requirement, I hope Docker has something built in to address it.
My approach to this problem is the following. Say I have containers A and B, both running a webserver. I simply add a reverse proxy on my local machine which looks at the hostname and then proxies to the respective container.
But instead of proxying through hard-coded IP addresses, I proxy through local ports. So instead of binding both your containers to port 80, bind each to a random local port (e.g., 4041) and proxy over that. That way you decouple the container IP from your host.
My nginx config then looks like this:
server {
    server_name example.com;  # Add "<host lan ip> example.com" to your /etc/hosts file

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # These two lines ensure that
        proxy_set_header Connection "Upgrade";    # a WebSocket connection is used
        proxy_pass http://localhost:4041/;
    }
<snip>
Adding another container then just means editing one nginx proxy file and binding one more port on your local machine. There is no coupling between Docker IPs and your local hosts file.
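For illustration, the Docker side of that decoupling could look like this in a docker-compose file (the service and image names here are hypothetical; only the ports mappings matter):

# docker-compose.yml (sketch): publish each container's port 80
# on its own local port for the host nginx to proxy to
services:
  foo:
    image: my-foo-app          # hypothetical image
    ports:
      - "4041:80"              # nginx proxies example.com -> localhost:4041
  bar:
    image: my-bar-app          # hypothetical image
    ports:
      - "4042:80"              # a second server block would proxy to localhost:4042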
Trying to get Devilbox and laravel-websockets to work together. Is there something I'm missing, or a way to debug why it's not working?
I edited .devilbox/nginx.yml as suggested here, although I'm trying to contain it to the path /wsapp:
---
###
### Basic vHost skeleton
###
vhost: |
  server {
      listen       __PORT____DEFAULT_VHOST__;
      server_name  __VHOST_NAME__ *.__VHOST_NAME__;

      access_log   "__ACCESS_LOG__" combined;
      error_log    "__ERROR_LOG__" warn;

      # Reverse proxy definition (ensure to adjust the port, currently '6001')
      location /wsapp/ {
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "Upgrade";
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_pass http://php:6001;
      }

  __REDIRECT__
  __SSL__
  __VHOST_DOCROOT__
  __VHOST_RPROXY__
  __PHP_FPM__
  __ALIASES__
  __DENIES__
  __SERVER_STATUS__

      # Custom directives
  __CUSTOM__
  }
I installed laravel-websockets and configured it to use '/wsapp'.
Visit the dashboard to test:
https://example.local/laravel-websockets
But the console shows errors:
Firefox can’t establish a connection to the server at
wss://example.local:6001/wsapp/app/a558686cac00228eb003?protocol=7&client=js&version=4.3.1&flash=false.
2 pusher.min.js:8:6335 The connection to
wss://example.local:6001/wsapp/app/a558686cac00228eb003?protocol=7&client=js&version=4.3.1&flash=false
was interrupted while the page was loading. pusher.min.js:8:6335
I've created a setup that works.
First, you need 2 domains in Devilbox:
- one for your Laravel app (example.local)
- one for your Laravel WebSocket server (socket.example.local)
In your socket.example.local directory, create htdocs and .devilbox; the .devilbox directory is where you'll add your nginx.yml file.
When you connect to your socket, don't use the port anymore, and don't isolate the socket under /wsapp anymore.
Use socket.example.local as the PUSHER_HOST value in .env.
Run your Laravel WebSocket server on example.local.
Visit the /laravel-websockets dashboard, remove the port value, then click connect.
I don't suggest serving your socket under /wsapp because it's hard to configure nginx to serve 2 apps (it's hard for me, at least; maybe someone more expert on nginx can suggest something for this setup).
But that's my solution. If something is unclear, please comment.
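For reference, a minimal sketch of what the .devilbox/nginx.yml for socket.example.local might look like, reusing the skeleton from the question and assuming the websocket server still listens on php:6001 (treat it as a starting point, not a verified config):

---
###
### socket.example.local vHost: the whole vhost is the websocket endpoint
###
vhost: |
  server {
      listen       __PORT____DEFAULT_VHOST__;
      server_name  __VHOST_NAME__ *.__VHOST_NAME__;

      access_log   "__ACCESS_LOG__" combined;
      error_log    "__ERROR_LOG__" warn;

      # No /wsapp prefix and no client-facing port: proxy everything
      location / {
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "Upgrade";
          proxy_set_header Host $host;
          proxy_pass http://php:6001;
      }
  }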
I've only just gotten into Laravel/Vue 3, so I'm working off the basics. However, I have an existing Docker ecosystem that I use for local dev, and an nginx reverse proxy to keep my many projects separate.
I'm having trouble getting HMR working and even more trouble finding appropriate documentation on how to configure Vite and Nginx so I can have a single HTTPS entry point in nginx and proxy back to Laravel and Vite.
The build is based on https://github.com/laravel-presets/inertia/tree/boilerplate.
For completeness, this is the package.json, just in case it changes:
{
    "private": true,
    "scripts": {
        "dev": "vite",
        "build": "vite build"
    },
    "devDependencies": {
        "@vitejs/plugin-vue": "^2.3.1",
        "@vue/compiler-sfc": "^3.2.33",
        "autoprefixer": "^10.4.5",
        "postcss": "^8.4.12",
        "tailwindcss": "^3.0.24",
        "vite": "^2.9.5",
        "vite-plugin-laravel": "^0.2.0-beta.10"
    },
    "dependencies": {
        "vue": "^3.2.31",
        "@inertiajs/inertia": "^0.11.0",
        "@inertiajs/inertia-vue3": "^0.6.0"
    }
}
To keep things simple, I'm going to try and get it working under HTTP only and deal with HTTPS later.
Because I'm running the dev server in a container, I've set server.host to 0.0.0.0 in vite.config.ts (to allow connections to the dev server from outside the container) and server.hmr.clientPort to 80 (so the client knows the public port is 80 rather than the default 3000).
I've tried setting DEV_SERVER_URL to be the same as APP_URL so that all traffic from the public site goes to the same place, but I'm not sure what the nginx side of things should look like.
I've also tried setting DEV_SERVER_URL to http://0.0.0.0:3000/ so I can see what traffic is being generated. This almost works, but is grossly wrong: it fails when it comes to ws://0.0.0.0/ communications, and it would not be appropriate once HTTPS is involved.
I have noticed calls to /__vite_plugin, which I'm going to assume is the default ping_url that would normally be set in config/vite.php.
I'm looking for guidance on which nginx locations should forward to the Laravel port and which should forward to the Vite port, and what that should look like so that web socket communication is also catered for.
I've seen discussions that Vite 3 may make this setup easier, but I'd like to deal with what is available right now.
The answer appears to be in knowing which directories to proxy to Vite and being able to isolate the web socket used for HMR.
To that end, you will want to do the following:
Ensure that your .env APP_URL and DEV_SERVER_URL match.
In your vite.config.ts, ensure that the server.host is '0.0.0.0' so that connections can be accepted from outside of the container.
In your vite.config.ts, specify a base such as '/app/' so that all HMR traffic can be isolated and redirected to the Vite server while you are running npm run dev. You may wish to use something else if that path might clash with real paths in your Laravel or Vite app, like /_dev/ or /_vite/.
In your config/vite.php, set ping_url to http://localhost:3000. This lets Laravel ping the Vite server locally, so the dev server is used instead of the manifest. This assumes ping_before_using_manifest is set to true.
Lastly, you want to configure your nginx proxy so that a number of locations are specifically proxied to the Vite server, and the rest goes to the Laravel server.
I am not an nginx expert, so there may be a way to declare the following more succinctly.
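To tie the Vite side together first, here is a minimal vite.config.ts sketch covering the host, base, and HMR client port mentioned above (the preset's vue/laravel plugins are omitted for brevity, and the exact values are assumptions to adapt):

import { defineConfig } from 'vite'

export default defineConfig({
    // isolate all dev-server/HMR traffic under one path prefix (step 3)
    base: '/app/',
    server: {
        // accept connections from outside the container (step 2)
        host: '0.0.0.0',
        port: 3000,
        hmr: {
            // the browser reaches the dev server through nginx on port 80
            clientPort: 80,
        },
    },
})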
Sample Nginx server entry
# Some standard proxy variables
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    ''      $scheme;
}
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    ''      $server_port;
}
map $http_upgrade $proxy_connection {
    default upgrade;
    ''      close;
}
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https   on;
}
server {
    listen *:80;
    server_name vite-inertia-vue-app.test;

    # abridged version that does not include gzip_types, resolver, *_log and other headers

    location ^~ /resources/ {
        proxy_pass http://198.18.0.1:3000;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }

    location ^~ /@vite {
        proxy_pass http://198.18.0.1:3000;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }

    location ^~ /app/ {
        proxy_pass http://198.18.0.1:3000;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }

    location / {
        proxy_pass http://198.18.0.1:8082;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }
}
vite-inertia-vue-app.test.include, which holds the common proxy settings:
proxy_read_timeout 190;
proxy_connect_timeout 3;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
proxy_set_header Proxy "";
My nginx instance runs in a local Docker Swarm, and I use a loopback interface (198.18.0.1) to hit open ports on my machine. Your mileage may vary. Port 3000 is for the Vite server; port 8082 is for the Laravel server.
At some point, I may investigate using the hostname as it is declared in the docker-compose stack, though I'm not too sure how well this holds up when communicating between Docker Swarm and a regular container stack. The point would be not to have to allocate unique ports for the Laravel and Vite servers if I ended up running multiple projects at the same time.
Entry points /@vite and /resources are used when the app initially launches, via the script and link tags in the header. After that, all HMR activity uses /app/.
The next challenge will be adding a self-signed cert, as I plan to integrate some Azure B2C sign-in, but I think that may just involve updating the nginx config to cater for TLS and updating APP_URL and DEV_SERVER_URL in the .env to match.
My web stack is composed of nginx (port 29090) -> Tomcat.
nginx acts as a reverse proxy, and Tomcat hosts 2 webapps:
1. Authentication (using Netflix Zuul), running on port 29091
2. SensorThings API server, running on port 29101
The request below is passed along using zuul.route.sensor.url=http://localhost:29090/sensor-internal.
Below is the nginx.conf block:
location /sensor-internal/ {
    include cors_support;
    rewrite ^(/sensor/)(.*)$ SensorThingsServer-1.0/v1.0/$2 break;
    proxy_redirect off;
    proxy_set_header Host $host;
    rewrite_log on;
}
I want to rewrite the URL
http://localhost:29090/sensor/xxxx(n)/yyyy(m)
to
http://localhost:29101/SensorThingsServer-1.0/v1.0/xxxx(n)/yyyy(m)
Note the change in port and the replacement of sensor with SensorThingsServer-1.0/v1.0/.
I believe the above block will not work for the port change. Please guide.
You should define a separate location /sensor/ block and perform the rewriting there, because the location /sensor-internal/ you have defined does not serve /sensor/* requests.
location /sensor/ {
    rewrite ^(/sensor/)(.*)$ http://localhost:29101/SensorThingsServer-1.0/v1.0/$2 break;
    rewrite_log on;
}
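Note that because the replacement URL starts with http://, nginx will answer with a redirect to the client rather than forwarding the request itself. If you want nginx to proxy instead, a sketch like this (untested, same ports as above) should do it:

location /sensor/ {
    # rewrite the URI in place, then let proxy_pass forward the rewritten URI
    rewrite ^/sensor/(.*)$ /SensorThingsServer-1.0/v1.0/$1 break;
    proxy_pass http://localhost:29101;
    proxy_set_header Host $host;
}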
Is it possible to access a Docker service from an external device?
I built the service via fig and exposed port 3000. I use fig with docker-osx, so Docker is running inside a VirtualBox VM.
Now I need to access the service from an external device (i.e., a mobile phone or tablet).
At the moment I can only access the service via localdocker:3000 from the machine hosting the VirtualBox environment.
For those using OS X (and Windows) for testing: Docker creates a virtual machine, so this works a little differently than running on a Linux-based system.
Try the following:
docker-machine ip
This will return the virtual machine's IP. In my example, it's
192.168.99.100
Running docker ps will show you the port mappings (I've cleaned up the table below a bit):
$ docker ps
CONTAINER ID   IMAGE         STATUS          PORTS                   NAMES
42f88ac00e6f   nginx-local   Up 30 seconds   0.0.0.0:32778->80/tcp
0.0.0.0:32778->80/tcp means Docker is mapping port 32778 (a randomly assigned port) on my machine (in this case the virtual machine) to the container's port 80.
You can also get this information from docker port 42f88ac00e6f 80 (42f88ac00e6f being the container ID or name)
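For example (output is hypothetical but consistent with the mapping above):

$ docker port 42f88ac00e6f 80
0.0.0.0:32778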
In order to access nginx in the container, I can now use the virtual machine's IP with port 32778:
http://192.168.99.100:32778/ will forward to my docker container's port 80 (I use this to test locally).
Obviously, the port above will not be accessible from the network but you can configure your firewall to forward to it =)
I suggest adding a port forwarding rule to the VirtualBox VM settings.
Open the VM settings => Network tab => Adapter 1. By default it is attached to NAT.
Press the Port Forwarding button, then add a new rule.
The Host IP should be your computer's IP address. It could also be 127.0.0.1, but then the service will only be reachable from your own computer.
For the Host Port value you will need to experiment a bit: it needs to be both unused and allowed by the computer's firewall.
Leave the Guest IP empty.
The Guest Port should be 3000, as in your question.
After that, it should be accessible from the local network at http://HOST_IP:HOST_PORT.
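If you prefer the command line, the same rule can be added with VBoxManage (the VM name "boot2docker-vm" is an assumption; check yours with VBoxManage list vms, and use modifyvm instead of controlvm if the VM is powered off):

# forward host port 3000 to guest port 3000 on the NAT adapter
VBoxManage controlvm "boot2docker-vm" natpf1 "docker-service,tcp,,3000,,3000"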
You'll have to tell your local machine to listen for incoming connections on that port and then forward those requests on to your docker container.
Nginx is pretty good at this, and a simple config like this:
/etc/nginx/sites-enabled/your-file.conf
server {
    listen 3000;
    server_name YOUR_IP_ADDRESS;

    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    location / {
        # proxy to wherever the Docker service is reachable from this host;
        # per the question that is localdocker:3000 (proxying back to
        # 127.0.0.1:3000 here would loop into nginx itself)
        proxy_pass http://localdocker:3000;
    }
}
Would work fine if your phone / tablet hits http://YOUR_IP_ADDRESS:3000/
For macOS users:
It seems that sudo ifconfig lo0 alias 10.254.254.254 will do the magic.
Containers can then reach the host via that IP (10.254.254.254).
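A quick way to verify (the port 3000 service is an assumption carried over from the question above):

# on the macOS host: add the loopback alias
sudo ifconfig lo0 alias 10.254.254.254
# from inside any container, a service on the host's port 3000 is then reachable at
curl http://10.254.254.254:3000/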
You should be able to access the boot2docker VM by using the IP address reported by boot2docker ip.
I installed Jenkins on my server and I want to protect it with nginx HTTP auth so that requests to:
http://my_domain.com:8080
http://ci.my_domain.com
will be protected except one location:
http://ci.my_domain.com/job/my_job/build
which is needed to trigger builds. I am kinda new to nginx, so I'm stuck on the nginx config for that.
upstream jenkins {
    server 127.0.0.1:8080;
}

server {
    listen x.x.x.x:8080;
    server_name *.*;

    location / {
        proxy_pass http://jenkins;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        auth_basic "Restricted";
        auth_basic_user_file /path/.htpasswd;
    }
}
I tried something like the above config, but when I visit http://my_domain.com:8080 there is no HTTP auth.
Finally I figured out how to solve this problem. First, we need to uncheck the "Enable security" option on the Manage Jenkins page. With security disabled, we can trigger our jobs with requests like http://ci.your_domain.com/job/job_name/build.
If you want to add a token to the trigger URL, we need to enable security, choose "Project-based Matrix Authorization Strategy", and give Admin rights to the Anonymous user. After that, the Configure page of your project will have a "Trigger builds remotely" option where you can specify a token, so your request will look like JENKINS_URL/job/onru/build?token=TOKEN_NAME.
So with security disabled, we need to protect http://ci.your_domain.com with nginx http_auth, except URLs like /job/job_name/build.
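A minimal sketch of that nginx setup (the domain and htpasswd path are from the question; the regex for the build endpoint is an assumption to adapt, e.g. for token-based triggers):

upstream jenkins {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name ci.your_domain.com;

    # build triggers stay open so remote requests can fire them
    location ~ ^/job/[^/]+/build$ {
        proxy_pass http://jenkins;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # everything else requires basic auth
    location / {
        auth_basic "Restricted";
        auth_basic_user_file /path/.htpasswd;
        proxy_pass http://jenkins;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}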
And of course we need to hide port 8080 from external requests. Since my server is on Ubuntu, I can use the iptables firewall:
iptables -A INPUT -p tcp --dport 8080 -s localhost -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP
But! On Ubuntu (I am not sure about other Linux OSes) the iptables rules will disappear after a reboot, so we need to save them with:
iptables-save
And that is not the end: this command only prints the current rules. On startup we still need to load them, and the easiest way is to use the iptables-persistent package:
sudo apt-get install iptables-persistent
iptables-save > /etc/iptables/rules
Take a closer look at iptables if needed (https://help.ubuntu.com/community/IptablesHowTo#Saving_iptables), and good luck with Jenkins!
And there is a good example of running Jenkins on a subdomain of your server: https://wiki.jenkins-ci.org/display/JENKINS/Running+Hudson+behind+Nginx