Caddy: allow HTTP with API Platform

I know this question has been asked many times:
Caddy - How to disable https only for one domain
Disable caddy ssl to enable a deploy to Cloud Run through Gitlab CI
Caddy - Setting HTTPS on local domain
How can I disable TLS when running from Docker?
How to serve both http and https with Caddy?
but here is my problem.
Setup
I created a new Api Platform project following their documentation.
The easiest and most powerful way to get started is to download the API Platform distribution
I downloaded the release 2.5.6 in which we can find:
a docker-compose
a Dockerfile
a Caddyfile
and many other files.
docker-compose
I slightly changed the docker-compose file by removing the pwa service and PostgreSQL:
version: "3.4"
services:
php:
build:
context: ./api
target: api_platform_php
restart: unless-stopped
env_file:
- api/.env
volumes:
- php_socket:/var/run/php
healthcheck:
interval: 10s
timeout: 3s
retries: 3
start_period: 30s
caddy:
build:
context: api/
target: api_platform_caddy
env_file:
- api/.env
depends_on:
- php
environment:
MERCURE_PUBLISHER_JWT_KEY: ${MERCURE_PUBLISHER_JWT_KEY:-!ChangeMe!}
MERCURE_SUBSCRIBER_JWT_KEY: ${MERCURE_SUBSCRIBER_JWT_KEY:-!ChangeMe!}
restart: unless-stopped
volumes:
- php_socket:/var/run/php
- caddy_data:/data
- caddy_config:/config
ports:
# HTTP
- target: 80
published: 80
protocol: tcp
# HTTPS
- target: 443
published: 443
protocol: tcp
# HTTP/3
- target: 443
published: 443
protocol: udp
volumes:
php_socket:
caddy_data:
caddy_config:
Dockerfile
No changes
Caddyfile
Slight change: I commented out the line reverse_proxy @pwa http://{$PWA_UPSTREAM}
{
    # Debug
    {$DEBUG}
    # HTTP/3 support
    servers {
        protocol {
            experimental_http3
        }
    }
}

{$SERVER_NAME}

log

# Matches requests for HTML documents, for static files and for Next.js files,
# except for known API paths and paths with extensions handled by API Platform
@pwa expression `(
        {header.Accept}.matches("\\btext/html\\b")
        && !{path}.matches("(?i)(?:^/docs|^/graphql|^/bundles/|^/_profiler|^/_wdt|\\.(?:json|html$|csv$|ya?ml$|xml$))")
    )
    || {path} == "/favicon.ico"
    || {path} == "/manifest.json"
    || {path} == "/robots.txt"
    || {path}.startsWith("/_next")
    || {path}.startsWith("/sitemap")`

route {
    root * /srv/api/public
    mercure {
        # Transport to use (default to Bolt)
        transport_url {$MERCURE_TRANSPORT_URL:bolt:///data/mercure.db}
        # Publisher JWT key
        publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY} {env.MERCURE_PUBLISHER_JWT_ALG}
        # Subscriber JWT key
        subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY} {env.MERCURE_SUBSCRIBER_JWT_ALG}
        # Allow anonymous subscribers (double-check that it's what you want)
        anonymous
        # Enable the subscription API (double-check that it's what you want)
        subscriptions
        # Extra directives
        {$MERCURE_EXTRA_DIRECTIVES}
    }
    vulcain
    push
    # Add links to the API docs and to the Mercure Hub if not set explicitly (e.g. the PWA)
    header ?Link `</docs.jsonld>; rel="http://www.w3.org/ns/hydra/core#apiDocumentation", </.well-known/mercure>; rel="mercure"`
    # Disable Google FLoC tracking if not enabled explicitly: https://plausible.io/blog/google-floc
    header ?Permissions-Policy "interest-cohort=()"
    # Comment the following line if you don't want Next.js to catch requests for HTML documents.
    # In this case, they will be handled by the PHP app.
    # reverse_proxy @pwa http://{$PWA_UPSTREAM}
    php_fastcgi unix//var/run/php/php-fpm.sock
    encode zstd gzip
    file_server
}
Result
I can access my website at https://localhost, but I can't access it without HTTPS because Caddy automatically redirects HTTP traffic to HTTPS.
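For reference, this is easy to confirm outside the browser; Caddy's automatic HTTPS answers plain-HTTP requests with a permanent redirect (a sketch of the check; the exact status line may vary):

curl -I http://localhost
# Expected: something like HTTP/1.1 308 Permanent Redirect
#           Location: https://localhost/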
Problem 1
When I try the auto_https solution, it doesn't work.
Here is what I tried:
Adding auto_https disable_redirects or auto_https off:
{
    auto_https off
    # Debug
    {$DEBUG}
    # HTTP/3 support
    servers {
        protocol {
            experimental_http3
        }
    }
    # ...
}
When I try to access http://localhost:80, I get redirected to https://localhost and see "This site can’t provide a secure connection".
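One caveat when testing this: browsers cache permanent redirects, so even once auto_https off takes effect, the browser may keep jumping to https://localhost on its own. Retesting with curl (which doesn't cache) after recreating the container helps rule that out:

docker-compose up -d --force-recreate caddy
curl -I http://localhost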
Problem 2
When I try the solution:
Not providing any hostnames or IP addresses in the config
I removed {$SERVER_NAME} from my Caddyfile.
When I try to access http://localhost:80, I get redirected to https://localhost and see "This site can’t provide a secure connection".
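A variant of this approach that may be worth noting: instead of removing the site address, keep it but give it an explicit http:// scheme. Caddy skips automatic HTTPS (certificates and redirects) for any site address that explicitly starts with http://, so a sketch like the following, assuming SERVER_NAME is set to http://localhost in api/.env, would serve plain HTTP only:

http://localhost {
    # ... same log, @pwa matcher and route { ... } directives as above ...
}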
Problem 3
When I try the solution:
Listening exclusively on the HTTP port
services:
  # ...
  caddy:
    build:
      context: api/
      target: api_platform_caddy
    # ...
    ports:
      # HTTP
      - target: 80
        published: 80
        protocol: tcp
      # HTTPS
      #- target: 443
      #  published: 443
      #  protocol: tcp
      # HTTP/3
      #- target: 443
      #  published: 443
      #  protocol: udp
When I try to access http://localhost:80, I get redirected to https://localhost and see "This site can’t be reached".
Question
How can I allow HTTP on my Caddy server (and still keep my Mercure configuration in my Caddyfile)?

I found a solution here:
https://github.com/caddyserver/caddy/issues/3219#issuecomment-608236439
Caddyfile
{
    http_port 8080
    # Debug
    {$DEBUG}
    # HTTP/3 support
    servers {
        protocol {
            experimental_http3
        }
    }
    # ...
}
docker-compose
services:
  caddy:
    # ...
    ports:
      # HTTP
      - target: 80
        published: 80
        protocol: tcp
      - target: 8080
        published: 8080
        protocol: tcp
      # HTTPS
      - target: 443
        published: 443
        protocol: tcp
      # HTTP/3
      - target: 443
        published: 443
        protocol: udp
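Assuming this behaves as described in the linked comment, plain HTTP should now be answered on the extra port while HTTPS stays on 443. A quick hypothetical check:

# Plain HTTP on the new http_port, no redirect expected
curl -I http://localhost:8080
# HTTPS still served on 443 (-k because curl may not trust the local CA)
curl -kI https://localhost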

Related

Docker: Ports are not available: exposing port TCP 0.0.0.0:61615 -> 0.0.0.0:0: listen tcp 0.0.0.0:61615

I am trying to use ActiveMQ in Docker, but I've started to get this error when starting the container:
Error invoking remote method 'docker-start-container': Error: (HTTP code 500) server error - Ports are not available: exposing port TCP 0.0.0.0:61613 -> 0.0.0.0:0: listen tcp 0.0.0.0:61613: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
I could not find any service that is using these ports (maybe I am looking in the wrong way).
I'm seeing that people generally suggest restarting winnat, but I am not sure if that's a good idea, and I'd like to know if there are any other solutions to this problem.
Also, changing the port ranges won't work in my case, since they are already set to the suggested value:
Protocol tcp Dynamic Port Range
---------------------------------
Start Port : 49152
Number of Ports : 16384
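For reference, the winnat restart that people usually suggest boils down to the following commands, run from an elevated prompt; the show command lists the TCP ranges Windows has reserved, which is often where the conflicting port hides:

:: Inspect which TCP port ranges winnat has excluded/reserved
netsh interface ipv4 show excludedportranges protocol=tcp
:: The commonly suggested fix: restart the Windows NAT service
net stop winnat
net start winnat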
Here is a part of my docker-compose file:
version: '3.5'

networks:
  test-net:
    ipam:
      driver: default
      config:
        - subnet: 172.33.1.0/24

services:
  activemq:
    image: privat_artifactory
    container_name: test-activemq
    restart: always
    networks:
      - test-net
    ports:
      - "8161:8161"
      - "1883:1883"
      - "5672:5672"
      - "61613-61616:61613-61616"

Using a kubernetes ingress to support multiple sub domains

I have a domain, foobar. When I started my project, I knew I would have my webserver handling traffic for foobar.com. I also plan on having an Elasticsearch server running at es.foobar.com. I purchased my domain at GoDaddy and (maybe prematurely) purchased a single-site certificate for foobar.com. I can't change this certificate to a wildcard cert; I would have to purchase a new one. I have my DNS record routing traffic for that simple URL. I'm managing everything using Kubernetes.
Questions:
Is it possible to use my single-site certificate for the main site and subdomains like my Elasticsearch server, or do I need to purchase another single-site certificate specifically for the Elasticsearch server? I checked earlier, and GoDaddy wants $350 for the multi-site one.
Elasticsearch complicates this somewhat: if it's being accessed at es.foobar.com and the cert is for foobar.com, it's going to reject any requests, right? Elasticsearch needs a cert in order to have solid security.
Is it possible to use my simple single-site certificate for the main site and subdomains?
To achieve your goal, you can use a name-based virtual hosting Ingress, since most likely your webserver foobar.com and Elasticsearch es.foobar.com work on different ports and will be available under the same IP.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
    - host: foobar.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: webserver
                port:
                  number: 80
    - host: es.foobar.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: elastic
                port:
                  number: 9200 # the http.port parameter in the Elasticsearch config
It can also be implemented using a TLS private key and certificate, created as a Kubernetes TLS Secret. Note that a wildcard certificate covers just one level, like *.foobar.com.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  tls:
    - hosts:
        - foobar.com
        - es.foobar.com
      secretName: "foobar-secret-tls"
  rules:
    - host: foobar.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: webserver
                port:
                  number: 80
    - host: es.foobar.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: elastic
                port:
                  number: 9200 # the http.port parameter in the Elasticsearch config
Otherwise, you need to get a wildcard certificate, or a separate certificate for the other domain.
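For completeness, the foobar-secret-tls Secret referenced above would be created from the certificate and key files (the filenames here are hypothetical):

kubectl create secret tls foobar-secret-tls --cert=foobar.crt --key=foobar.key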

Api Gateway adding "localhost" to address on docker-compose

I'm trying to deploy Spring Boot microservices using docker-compose, but I'm having a problem with the API Gateway.
If I run the project locally, it works fine; it even works if I deploy the project using docker-compose but run the API Gateway locally. So the problem has to be in "dockerizing" the API Gateway service.
Running docker logs <container> shows:
io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: localhost/127.0.0.1:8083
Obviously there is a problem with the host localhost/127.0.0.1. Why is the Gateway trying to point to a "repeated" address?
docker-compose.yml looks like this:
version: '3.8'
services:
  # more services
  api-gateway:
    build: ./path_to_dockerfile
    depends_on:
      - eureka-server
    environment:
      - eureka.client.serviceUrl.defaultZone=http://eureka-server:8761/eureka/
    restart: always
    container_name: gateway
    ports:
      - '9000:9000'
The Dockerfile is as simple as this:
FROM openjdk:8-jdk-alpine
ADD target/apigateway-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
And application.yml:
server:
  port: 9000

spring:
  application:
    name: Api-Gateway-Service
  cloud:
    gateway:
      default-filters:
        - DedupeResponseHeader=Access-Control-Allow-Credentials Access-Control-Allow-Origin, RETAIN_UNIQUE
      globalcors:
        # cors config
      routes:
        - id: <name>-microservice
          uri: http://localhost:8083
          predicates:
            - Path=/<path>/**
            - Method=GET,POST,DELETE,PUT,OPTIONS
      # more routes on different ports

eureka:
  # eureka config
So, why is it adding "localhost" or "127.0.0.1", and why does the address appear twice?
Thanks in advance.
I don't think Connection refused: localhost/127.0.0.1:8083 means that it was trying to add or call localhost twice; that is just how Java prints the address (hostname/IP:port) when it shows the error.
In your application.yml, try changing the uri to the name you used for your microservice inside the docker-compose file:
routes:
  - id: <name>-microservice
    uri: http://<your-service-name>:8083
I guess the problem is that Docker doesn't support network communication between containers by default. You can connect to port 8083 from your host, but not from another container. If so, you should create a network and attach both containers to it.
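A minimal sketch of that idea, assuming the microservice's compose service is named name-microservice (hypothetical): with both containers on a shared user-defined network, the gateway can reach it as http://name-microservice:8083.

services:
  api-gateway:
    # ... as above ...
    networks:
      - backend
  name-microservice:  # hypothetical service name for the route's target
    # ...
    networks:
      - backend

networks:
  backend: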

Can't connect to Docker sql for Windows by hostname

I have the following docker-compose file (part of it):
version: '3.7'
services:
  # DB Server ==========================================================================================================
  mssqlsimple:
    image: microsoft/mssql-server-windows-developer:2017-latest
    volumes:
      - ".\\Prm.DbContext.Application\\FullInit:C:\\data"
    container_name: pbpmssqlsimple
    ports:
      #- "1403:1433"
      - target: 1433
        published: 1403
        protocol: tcp
        mode: host
    networks:
      - backend
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "SP_116b626d-ed7e-4f5d123#"
...
After the command docker-compose up I have an instance of SQL Server and can connect to it by IP (172.21.69.132) or by the alias id (0338726df5ba) from the Docker config,
but I can't connect by the host name mssqlsimple (or pbpmssqlsimple).
[screenshot: fragment of the config JSON]
I tried the following, but it failed:
disabling the Windows firewall
connecting with the host name and port: mssqlsimple, 1403
using the simple syntax "1403:1433" for the ports
Please tell me how to solve my problem.
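One detail worth noting: compose service names such as mssqlsimple resolve only inside the compose network, not from the Windows host itself. From the host, connecting through the published port should work (hypothetical check with sqlcmd):

sqlcmd -S localhost,1403 -U sa -P "SP_116b626d-ed7e-4f5d123#"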

Docker multiple sites on different ports

Right now, I have a single static site running in a Docker container, being served on port 80. This plays nicely, since 80 is the default for public traffic.
So in my /etc/hosts file I can add an entry like 127.0.0.1 example.dev, and navigating to example.dev automatically uses port 80.
What if I need to add an additional 2-3 dockerized dev sites to my environment? What would be the best course of action to prevent having to access these sites solely by port, i.e. 81, 82, 83, etc.? Also, it seems that under these circumstances I would be limited to rewriting only the dev site tied to port 80 to a specific hostname; is there a way to overcome this? What is the best way to manage multiple Docker sites on different ports?
Note: I was hoping to access the Docker containers via their IP addresses, i.e. 172.21.0.4, and simply add hostname entries to my hosts file, but accessing containers by IP address doesn't work on Mac.
docker-compose.yml
version: '3'
services:
  mysql:
    container_name: mysql
    build: ./mysql
    environment:
      - MYSQL_DATABASE=example_dev
      - MYSQL_USER=test
      - MYSQL_PASSWORD=test
      - MYSQL_ROOT_PASSWORD=0000
    ports:
      - 3307:3306
  phpmyadmin:
    container_name: myadmin
    image: phpmyadmin/phpmyadmin
    ports:
      - 8080:80
    links:
      - "mysql:db"
  apache_site1:
    container_name: apache_site1
    build: ./php-apache
    ports:
      - 80:80
    volumes:
      - ../:/var/www/html
    links:
      - mysql:db
./php-apache/Dockerfile
FROM php:7-apache
COPY ./config/php.ini /usr/local/etc/php/
EXPOSE 80
Thanks in advance.
Your problem is best handled using a reverse proxy such as nginx. You can run the reverse proxy on port 80 and then configure it to route requests to the specific site. For example,
http://example.dev/site1 routes to site1 at http://example.dev:8080
http://example.dev/site2 routes to site2 at http://example.dev:8081
And thus you run your sites on ports 8080, 8081, ...
Specific solution
Based on the docker-compose file in the question, edit the docker-compose.yml file, adding this service:
nginx-proxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
Then, change the apache_site1 service in this way:
apache_site1:
  container_name: apache_site1
  build: ./php-apache
  volumes:
    - ../:/var/www/html
  links:
    - mysql:db
  environment:
    - VIRTUAL_HOST=apache-1.dev
Run the docker-compose file and check that your apache-1 website is reachable:
curl -H 'Host: apache-1.dev' localhost
Or use the Chrome extension as described below.
More websites
When you need to add more websites, just add an apache_site2 entry like the one sketched below, and be sure to set a VIRTUAL_HOST environment variable in its definition.
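A hypothetical second-site entry (the build context, volume path and hostname are made up) would look like this:

apache_site2:
  container_name: apache_site2
  build: ./php-apache        # or a separate build context for the second site
  volumes:
    - ../site2:/var/www/html # hypothetical path to the second site's sources
  environment:
    - VIRTUAL_HOST=apache-2.dev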
Generic solution
Use a single nginx with multiple server entries
If you don't want to use a reverse proxy with a subpath for each website,
you can set up an nginx reverse proxy listening on your host's port 80, with one server entry for each site/container you have.
server {
    listen 80;
    server_name example-1.dev;

    location / {
        proxy_pass http://website-1-container:port;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name example-2.dev;

    location / {
        proxy_pass http://website-2-container:port;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
... and so on
Then, you can use the Host header to request different domains to your localhost without changing your /etc/hosts:
curl -H 'Host: example-2.dev' localhost
If you're doing web development, and so you need to see web pages, you can use a browser extension to customize the Host header on each page request.
Already made solution with nginx and docker services
Use a docker-compose file with all your services and use the jwilder/nginx-proxy image, which will auto-configure an nginx proxy for you using environment variables. This is an example docker-compose.yml file:
version: "3"
services:
website-1:
image: website-1:latest
environment:
- VIRTUAL_HOST=example-1.dev
website-2:
image: website-2:latest
environment:
- VIRTUAL_HOST=example-2.dev
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
Apache solution
Use Apache virtual hosts to set up multiple websites in the same way as described for nginx. Be sure to enable the Apache ProxyPreserveHost directive to forward the Host header to the proxied server.
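A minimal sketch of such a virtual host, mirroring the nginx example above (the container name and port are placeholders; mod_proxy and mod_proxy_http must be enabled):

<VirtualHost *:80>
    ServerName example-1.dev
    # Forward the original Host header to the proxied container
    ProxyPreserveHost On
    ProxyPass        / http://website-1-container:8080/
    ProxyPassReverse / http://website-1-container:8080/
</VirtualHost>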
