Communicating between DDEV projects via HTTP/S

I've been using DDEV for the last six months or so. It has greatly improved my efficiency. Thanks!
I'm looking for a better way to integrate multiple sites running in separate containers. The recommended solution is to use the internal container references (e.g. ddev-projectname-web). This does not work for one of my projects because the destination site relies on a matching hostname for authentication.
Scenario: SiteA communicates with SiteB via REST.
SiteA
- Project name: sitea
- Hostname: sitea.ddev.site
- Container reference: ddev-sitea-web
SiteB
- Project name: siteb
- Hostname: siteb.ddev.site
- Container reference: ddev-siteb-web
In order to authenticate with SiteB (TCP or REST), the hostname must be consistent, in this case siteb.ddev.site, so ddev-siteb-web does not work.
My current workaround is to use the SiteB hostname in REST calls from SiteA AND add the internal IP to /etc/hosts on the SiteA web container (something like 172.1.0.1 siteb.ddev.site). I'm looking for a better solution because the hosts configuration is lost when I stop/restart SiteA, and the IP changes when I stop/restart SiteB.
One theoretical option is a configuration setting that specifies another running docker instance and automatically adds that IP address and hostname to the integrated site's /etc/hosts file.
Thanks!

Different projects can talk to each other in two ways.
The first way is by using the container name directly, and I think that's what you were doing here.
But there's an alternate way (see the FAQ). You just need to add a docker-compose.comm.yaml to the client project's .ddev directory like this:
version: '3.6'
services:
  web:
    external_links:
      - "ddev-router:otherproject.ddev.site"
That way you can use the canonical name of the other site for communications. This only works for HTTP/S traffic, because it's going through the ddev-router, which is a reverse proxy.
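Applied to the scenario above, SiteA would get a .ddev/docker-compose.comm.yaml pointing at SiteB's canonical name (a direct application of the snippet above):

version: '3.6'
services:
  web:
    external_links:
      - "ddev-router:siteb.ddev.site"

After a ddev restart of sitea, REST calls from SiteA's web container to https://siteb.ddev.site go through the router, so the hostname matches what SiteB expects for authentication.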

Related

Is there a possibility to access a docker-compose container from another machine inside the local network?

I'm using WSL2 Ubuntu with Docker CE and Docker-Compose.
I want to access the containers I'm running (mostly Apache/MySQL/WordPress containers) from my local network (sometimes the same machine, sometimes other machines).
For example:
PC1: 192.168.178.20
PC2: 192.168.178.21
On PC1 is Windows + WSL2-Ubuntu with all the docker containers.
I want to access the containers from the Windows browser (Chrome) but also from the browser on PC2 (also Chrome, on a Mac).
Is this even possible? If yes, how?
I got webpack to work with hot reload from WSL2 but this seems very hard and I don't know where to start.
Is it possible to add DNS names for specific containers in my router? For example, when "example.test" is requested, my router forwards to the IP of the Docker box?
There are a couple of solutions, some better than others.
Find the port number that your container is MAPPED to on your host machine (PC1) and make sure you can browse that way (a compose example follows this list). Then take the same URL to PC2 and try it out to see if it works. Make sure you are using the fully qualified domain name or IP address so it resolves to PC1.
Find the port number that your container EXPOSES to your host machine (PC1) and make sure you can browse that way. Repeat the process as above.
Use a reverse proxy. I am biased and will say to use Traefik because of its relative simplicity (compared to nginx) to configure. It is just another container. It uses RULES (a combination of URL host header, port number, path, etc.) to route incoming connections to services/containers. In your case you would create a rule matching the host header (webapp1.corp.com) and port (80) and route it to a specific container you have running. Then, from either computer's browser, entering http://webapp1.corp.com routes the connection to that container; a sketch follows below. This is a simplified answer, and reality is more complicated, but you should get the gist.
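For the first two options, a compose snippet like the following publishes a container port on the host (the image and port numbers are placeholders):

services:
  wordpress:
    image: wordpress
    ports:
      - "8080:80"  # host port 8080 -> container port 80

From PC1 you would browse http://localhost:8080, and from PC2 http://192.168.178.20:8080.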
You mentioned you are running multiple containers, so I recommend you use docker-compose if you aren't already using it.
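A minimal sketch of the Traefik option, assuming Traefik v2 and a hypothetical service named webapp1 (the image and domain are placeholders):

version: '3'
services:
  traefik:
    image: traefik:v2.10
    command:
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      # Traefik watches the Docker socket to discover containers and their labels
      - /var/run/docker.sock:/var/run/docker.sock:ro
  webapp1:
    image: nginx:alpine  # placeholder for your actual app image
    labels:
      # route requests whose Host header is webapp1.corp.com to this container
      - "traefik.http.routers.webapp1.rule=Host(`webapp1.corp.com`)"
      - "traefik.http.services.webapp1.loadbalancer.server.port=80"

Point webapp1.corp.com at PC1's IP (via DNS or each machine's hosts file) and both browsers will reach the container through Traefik.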

Move application from homestead to docker

My application consists of three domains:
example.com
admin.example.com
partner.example.com
All of these domains are handled by the same Laravel app. Each domain has its own controllers and view. Models and other core functionalities are shared between all three domains.
Currently my local dev environment is built with Homestead (based on Vagrant), where each local domain (example.test, admin.example.test and partner.example.test) points to the same directory (e.g. /home/vagrant/app/public).
Because of deployment problems regarding different versions of OS, NPM, PHP, etc. I want to move to docker. I've read a lot of articles about multiple domains or apps with docker. Best practice seems to be to set up an Nginx reverse proxy which redirects all incoming requests to the desired app. Unfortunately, I haven't found examples for my case where all domains point to the same application.
If possible I would avoid having the same repository cloned three times for each docker container running one specific part of the app.
So what would be the best approach to set up a docker environment?
I created a simple gist for you to look at showing how I would do it:
https://gist.github.com/karlisabele/f7d91594c004e227e504473ce2c60508
The nginx config file is based on the Laravel documentation (https://laravel.com/docs/5.8/deployment#nginx), and of course in production you would also want to handle SSL and map port 443 as well, but this should serve as a POC for you.
Notice that in the nginx configuration I use the php-fpm service name to pass the request to the php-fpm container. In Docker, service names can be used as hostnames for the corresponding services, so the line fastcgi_pass php-fpm:9000; passes the request to the php-fpm container's port 9000 (the default port the fpm image listens on).
Basically what you want to do is simply define in nginx that all three of your subdomains are handled by the same server configuration. nginx then passes the request to php-fpm to actually process it.
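A minimal sketch of such a server block, assuming the app is mounted at /var/www/html and the PHP-FPM service is named php-fpm as in the gist (adapted from the Laravel nginx example linked above):

server {
    listen 80;
    # all three domains are handled by the same configuration
    server_name example.test admin.example.test partner.example.test;
    root /var/www/html/public;
    index index.php;

    location / {
        # send anything that is not a real file through Laravel's front controller
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # the service name "php-fpm" resolves to the php-fpm container
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }
}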
To test, you can just copy the two files from the gist into your project directory, replace YOUR_PROJECT_FOLDER in the docker-compose.yml file with the actual location of your project (it can be simply .:/var/www/html if you place docker-compose.yml in the root of your project), then run docker-compose up -d. Add the domains to your hosts file (/etc/hosts on Linux/macOS) and you should be able to visit example.test and see your site.
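For example, assuming the compose file publishes port 80 on localhost, the hosts entry would be:

127.0.0.1 example.test admin.example.test partner.example.test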
Note: Depending on where your database is located, you might need to change its host if it's localhost at the moment, because the app will try to connect to a MySQL server from inside the php-fpm container, which of course does not have its own MySQL server running.

How to auto configure local domain to point at docker container

My aim is to have a self-contained repository that a user/developer can pull down (on a Mac), type docker-compose up -d, and have a working development environment via a friendly URL like http://myproject.dev/
I have my Docker images set up as needed, but the local domain is where I've come unstuck. I know this is a bit outside of Docker, as this is a host-system thing. But I'm really looking for a way to achieve this without requiring the user to install local apps or make various local system config changes.
Is this something that is achievable, or am I barking up the wrong tree?
Sorry guys, I missed out some important info. I ideally want to avoid pointing to localhost, as this would cause conflicts if/when running multiple projects. So I guess we would need to point to the container's IP, meaning host entries would need to be dynamic. Murky DNS waters indeed.
I know you said:
... or make various local system config changes.
But if you can lift that requirement, the one change you need to make on local macOS systems would be an addition to /etc/resolver/dev. Then, you can add a DNS server to your docker-compose.yml that automatically creates entries for the services, such as https://github.com/ruudud/devdns (specifically, the https://github.com/ruudud/devdns#host-machine--containers bit).
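A sketch of that setup, assuming devdns behaves as its README describes (the service definition below is illustrative, and the TLD devdns serves is configurable, so verify against its docs):

services:
  devdns:
    image: ruudud/devdns
    ports:
      - "53:53/udp"  # serve DNS lookups on the host
    volumes:
      # devdns watches the Docker socket and registers a DNS record per container
      - /var/run/docker.sock:/var/run/docker.sock:ro

On the macOS host, /etc/resolver/dev would then contain a single line, nameserver 127.0.0.1, telling the system resolver to send all *.dev lookups to devdns.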
Point a real-world DNS record that all your developers can look up at 127.0.0.1, so myproject.existing.tld would resolve to 127.0.0.1.
An example is VMware's cloud application platform:
→ dig +noall +answer vcap.me
vcap.me.           3412  IN  A  127.0.0.1
→ dig +noall +answer whatever.vcap.me
whatever.vcap.me.  3412  IN  A  127.0.0.1
Otherwise you're treading into the murky waters of advertising services via mDNS (or LLMNR) and Zeroconf.

Elasticsearch-logging rc and svc are getting automatically deleted

https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
The elasticsearch-logging rc and svc are getting automatically deleted in clusters created using these configs.
From https://github.com/kubernetes/kubernetes/issues/11435 the solution is to remove
kubernetes.io/cluster-service: "true"
Though without that label, Elasticsearch is not available through the Kubernetes master.
Should I create a pull request to remove the line from the files in the repo so people don't get confused?
Firstly, I'd recommend reformatting future questions so they adhere to the Stack Overflow guidelines: https://stackoverflow.com/help/how-to-ask.
I'd recommend making Elasticsearch a normal Kubernetes Service. You can expose it in one of the following ways (a sketch of option 1 follows this list):
1. Set service.Type = NodePort and access it via any node's public IP at that nodePort.
2. Set service.Type = LoadBalancer; this will only work on cloud providers that offer load balancers.
3. Expose the RC directly through a host port (not recommended).
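For example, a minimal NodePort Service for option 1 might look like this (the selector label and port numbers are illustrative, not taken from the addon configs):

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
spec:
  type: NodePort
  selector:
    k8s-app: elasticsearch-logging  # must match the pod labels used by your RC
  ports:
    - port: 9200        # service port inside the cluster
      targetPort: 9200  # container port Elasticsearch listens on
      nodePort: 30920   # reachable on every node's public IP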
Those are just the common options for accessing a Service; please see the following thread for a more detailed discussion: https://groups.google.com/forum/#!topic/kubernetes-sig-network/B-A_RuqpFWk
It's generally not a good idea to send all external traffic meant for a Kubernetes service through the apiserver. However, if you must do so, you can do it via an endpoint such as:
/api/v1/proxy/namespaces/default/services/nginx:80/
Where default is the namespace, nginx is the name of your service and 80 is the service port (needed to disambiguate multiport services).

Port forward requests from 80 to respective ports

I have many Spring Boot jars running on different ports, say 9087-9090. I have a domain, say mydomain.com.
I can access mydomain.com:9087/ and use the application, and likewise mydomain.com:9088/ for another application. But how can I use them just like mydomain.com and still map them to the desired ports? What is the technical term for this?
I use DigitalOcean hosting and have an Ubuntu 14.04 x64 box. I'm running Java 7 on it.
You need a reverse proxy (a.k.a. a front-end load balancer) with URL rewriting. I'm not sure what your hosting solution offers or permits, but you could try nginx or Apache httpd if you want something running locally; a sketch follows. There are also service providers you might be able to use outside your host.
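With nginx, for example, you could give each app its own subdomain and proxy each to the matching port (a minimal sketch; the subdomains app1/app2 are hypothetical placeholders):

server {
    listen 80;
    server_name app1.mydomain.com;
    location / {
        # forward everything to the Spring Boot app on port 9087
        proxy_pass http://127.0.0.1:9087;
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name app2.mydomain.com;
    location / {
        proxy_pass http://127.0.0.1:9088;
        proxy_set_header Host $host;
    }
}

Each subdomain then points at the same server IP, and nginx routes by Host header to the right port.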
