I am trying to get a MinIO server to run on HTTPS, but every time I try to run it I get the following error:
{"level":"FATAL","time":"2018-06-15T15:12:19.2189519Z","error":{"message":"The
parameter is incorrect.","source":["cmd\\server-main.go:225:cmd.serverMain()"]}}
I have followed the following guide:
https://docs.minio.io/docs/how-to-secure-access-to-minio-server-with-tls
I tried to generate my own certificate, but nothing seems to work. I placed the certificates inside the .minio/certs folder and named them public.crt and private.key. I have tried regenerating the certs over and over again, but I am still getting that error message. If anyone can point me in the right direction, I would greatly appreciate it.
Step 1: Generate an SSL certificate if you don't have one, for example:
sudo mkdir -p /tmp/.minio/certs
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/.minio/certs/private.key -out /tmp/.minio/certs/public.crt
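By default that command prompts interactively for the certificate subject. If you prefer a non-interactive run, you can pass the subject on the command line; this is just a sketch, and the subject fields are placeholders to adjust:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/C=US/ST=State/L=City/O=Example/CN=localhost" \
  -keyout /tmp/.minio/certs/private.key \
  -out /tmp/.minio/certs/public.crt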
Step 2: Run the MinIO server secured by HTTPS. Here I'm using Docker with docker-compose:
docker-compose.yaml:
version: '3'
services:
  minio:
    image: minio/minio
    command: server --address ":443" /data
    ports:
      - "443:443"
    environment:
      MINIO_ACCESS_KEY: "YourAccesskey"
      MINIO_SECRET_KEY: "YourSecretkey"
    volumes:
      - /tmp/minio/data:/data
      - /tmp/.minio:/root/.minio
Note: this assumes that you have a directory on your host called /tmp/minio/data. If you don't, create it: mkdir -p /tmp/minio/data
Now start the container: docker-compose up
That's it.
Check: you can now access your MinIO server via HTTPS.
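For example, from the command line (a quick check assuming the self-signed certificate above, so -k skips certificate verification; /minio/health/live is MinIO's liveness endpoint):
curl -k -i https://localhost/minio/health/live   # prints "HTTP/1.1 200 OK" when the server is up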
References
https://docs.minio.io/docs/how-to-secure-access-to-minio-server-with-tls
https://docs.minio.io/docs/generate-let-s-encypt-certificate-using-concert-for-minio.html
If you are using sudo, you must have private.key and public.crt in /root/.minio/certs/. In my case, I had to rename my minio.key and minio.crt, because MinIO wouldn't use them under those names.
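For example (a sketch assuming your files are currently named minio.crt and minio.key and you start MinIO with sudo, so it looks in /root/.minio/certs):
sudo mkdir -p /root/.minio/certs
sudo cp minio.crt /root/.minio/certs/public.crt
sudo cp minio.key /root/.minio/certs/private.key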
Related
I have a local server with the domain mydomain.com; it is just an alias for localhost:80.
I want to make requests to mydomain.com from my running Docker container.
When I try to send a request to it, I see:
cURL error 7: Failed to connect to mydomain.com port 80: Connection refused
My docker-compose.yml
version: '3.8'
services:
  nginx:
    container_name: project-nginx
    image: nginx:1.23.1-alpine
    volumes:
      - ./docker/nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf
      - ./src:/app
    ports:
      - ${NGINX_PORT:-81}:80
    depends_on:
      - project
  server:
    container_name: project
    build:
      context: ./
    environment:
      NODE_MODE: service
      APP_ENV: local
      APP_DEBUG: 1
      ALLOWED_ORIGINS: ${ALLOWED_ORIGINS:-null}
    volumes:
      - ./src:/app
I'm using Docker Desktop for Windows.
What can I do?
I've tried to add
network_mode: "host"
but it ruins my docker-compose startup
When I try to send a request to host.docker.internal, I see this:
The requested URL was not found on this server. If you entered
the URL manually please check your spelling and try again.
The host network is not supported on Windows. If you are using Linux containers on Windows, make sure you have switched Docker Desktop to Linux containers; that backend uses WSL 2, so you should be able to use host networking there.
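If host networking isn't an option, an alternative is to point the name at Docker's host gateway with extra_hosts. This is only a sketch, assuming mydomain.com should resolve to whatever is listening on port 80 of your Windows host:
services:
  server:
    # added to the existing "server" service from the Compose file above
    extra_hosts:
      # mydomain.com inside the container now resolves to the host machine
      - "mydomain.com:host-gateway"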
I have a problem opening MinIO in the browser. I just created a Spring Boot app that uses it.
Here is my application.yaml file shown below.
server:
  port: 8085

spring:
  application:
    name: springboot-minio

minio:
  endpoint: http://127.0.0.1:9000
  port: 9000
  accessKey: minioadmin      # Login account
  secretKey: minioadmin      # Login password
  secure: false
  bucket-name: commons       # Bucket name
  image-size: 10485760       # Maximum size of a picture file
  file-size: 1073741824      # Maximum file size
Here is my docker-compose.yaml file shown below.
version: '3.8'
services:
  minio:
    image: minio/minio:latest
    container_name: minio
    environment:
      MINIO_ROOT_USER: "minioadmin"
      MINIO_ROOT_PASSWORD: "minioadmin"
    volumes:
      - ./data:/data
    ports:
      - 9000:9000
      - 9001:9001
I run it with the commands shown below.
1 ) docker-compose up -d
2 ) docker ps -a
3 ) docker run minio/minio:latest
Here is the result shown below.
C:\Users\host\IdeaProjects\SpringBootMinio>docker run minio/minio:latest
NAME:
  minio - High Performance Object Storage

DESCRIPTION:
  Build high performance data infrastructure for machine learning, analytics and application data workloads with MinIO

USAGE:
  minio [FLAGS] COMMAND [ARGS...]

COMMANDS:
  server   start object storage server
  gateway  start object storage gateway

FLAGS:
  --certs-dir value, -S value  path to certs directory (default: "/root/.minio/certs")
  --quiet                      disable startup information
  --anonymous                  hide sensitive information from logging
  --json                       output server logs and startup information in json format
  --help, -h                   show help
  --version, -v                print the version

VERSION:
  RELEASE.2022-01-08T03-11-54Z
When I enter 127.0.0.1:9000 in the browser, I can't open the MinIO login page.
How can I fix my issue?
The MinIO documentation includes a MinIO Docker Quickstart Guide that has some recipes for starting the container. The important thing here is that you cannot just docker run minio/minio; it needs a command to run, probably server. This also needs to be translated into your Compose setup.
The first example on that page breaks down like so:
docker run \
    -p 9000:9000 -p 9001:9001 \             # publish ports
    -e "MINIO_ROOT_USER=..." \              # set environment variables
    -e "MINIO_ROOT_PASSWORD=..." \
    quay.io/minio/minio \                   # image name
    server /data --console-address ":9001"  # command to run
That final command is important. In your example where you just docker run the image and get a help message, it's because you omitted the command. In the Compose setup you also don't have a command: line; if you look at docker-compose ps I expect you'll see the container is exited, and docker-compose logs minio will probably show the same help message.
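For example (assuming the service is named minio as in your Compose file):
docker-compose ps            # the minio container will likely show an "Exit" status
docker-compose logs minio    # prints the same usage/help text as above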
You can include that command in your Compose setup with command::
version: '3.8'
services:
  minio:
    image: minio/minio:latest
    environment:
      MINIO_ROOT_USER: "..."
      MINIO_ROOT_PASSWORD: "..."
    volumes:
      - ./data:/data
    ports:
      - 9000:9000
      - 9001:9001
    command: server /data --console-address :9001  # <-- add this
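Once the container is running, the S3 API answers on port 9000 and the web console on port 9001. A quick check, assuming the ports above:
docker-compose up -d
curl -i http://localhost:9000/minio/health/live   # "HTTP/1.1 200 OK" when the server is up
# then open http://localhost:9001 in the browser for the MinIO console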
What I want is for this Dockerfile to clone the repository onto the host machine (mine) so I can copy it over as a volume, but instead it's cloning it directly into the container.
This is the Dockerfile:
FROM php:7.4-apache
RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get upgrade -y
RUN a2enmod ssl && a2enmod rewrite
RUN a2enmod include
# Install software
RUN apt-get install -y git
WORKDIR /
RUN git clone mygitrepo.git /test
I also have a different Dockerfile that used to write to the host, but doesn't anymore:
FROM nginx:1.19.1-alpine
RUN apk update && \
apk add --no-cache openssl && \
openssl req -x509 -nodes -days 365 \
-subj "/C=CA/ST=QC/O=Company Inc/CN=example.com" \
-newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key \
-out /etc/ssl/certs/nginx-selfsigned.crt;
I'm not sure where the root of the problem is. Here's the docker-compose file that I use to start this:
version: '3.7'
services:
  build:
    build:
      context: .
      dockerfile: build.Dockerfile
    networks:
      - web
  apache:
    container_name: Apache
    build:
      context: ./apache
      dockerfile: apache.Dockerfile
    ports:
      - '127.0.0.1:80:80'
      - '127.0.0.1:443:443'
    networks:
      - web
networks:
  web:
volumes:
  dump:
I removed a lot of extra stuff, so it may look like it doesn't start at all, but the containers and the servers run fine. I just want it to write to the host and not only inside the container, which is what it's doing now. I'm having difficulty googling this.
I'm running macOS.
Thank you in advance! :D
You need to use a volume mapping to be able to do this. Edit your docker-compose file:
volumes:
  - ${PWD}/:/[container_dir]/
Once this volume mapping is in place, changes on the host machine are automatically visible in the Docker container, so there is no need to think about how to write back from the container to the host machine; the bind mount serves that purpose.
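As a concrete sketch for the Compose file above, assuming you want the cloned repo to appear in ./test on the host and at /test in the container (note that a bind mount shows the host directory's contents, so anything cloned into /test while building the image will be hidden by the mount; you would clone on the host instead):
services:
  apache:
    build:
      context: ./apache
      dockerfile: apache.Dockerfile
    volumes:
      # hypothetical mapping: host ./test <-> container /test
      - ./test:/test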
I use nginx-proxy from jwilder and observe that the same Let's Encrypt certificates are repeatedly recreated. I am in the process of debugging the servers, and my guess is that if I start only a subset of the servers, the certificates for the ones not started are lost. When those are started later, the certificates are recreated with new requests to Let's Encrypt, and eventually I hit the rate limit. Another explanation could be that I removed and restarted the container that keeps the certificates.
ACME server returned an error: urn:ietf:params:acme:error:rateLimited
:: There were too many requests of a given type :: Error creating new
order :: too many certificates already issued for exact set of
domains: caldav.gerastree.at: see
https://letsencrypt.org/docs/rate-limits/.
The limit is 5 per week.
What can be done to "reuse" certificates and not have new ones requested? When are certificates removed?
The docker-compose.yml file is from traskit, which is a multi-architecture version of jwilder's:
version: '2'
services:
  frontproxy:
    image: traskit/nginx-proxy
    container_name: frontproxy
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.docker_gen"
    restart: always
    environment:
      DEFAULT_HOST: default.vhost
      HSTS: "off"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # - /home/frank/Data/htpasswd:/etc/nginx/htpasswd
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - "certs-volume:/etc/nginx/certs:ro"
      - "/etc/nginx/vhost.d"
      - "/usr/share/nginx/html"
  nginx-letsencrypt-companion:
    restart: always
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - "certs-volume:/etc/nginx/certs"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    volumes_from:
      - "frontproxy"
volumes:
  certs-volume:
For anyone finding this in the future: Let's Encrypt say that there's no way to clear the status of your domain set once you've hit the rate limit until the 7-day "sliding window" has elapsed, regardless of how you spell or arrange the domains in the certbot command.
However, if, like me, you have a spare domain kicking around that you haven't yet added to the cert, add it with another -d flag and re-run the command, as in the sketch below. This worked for me.
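For example (a sketch; the webroot path and the spare domain are placeholders for whatever your setup uses). The rate limit applies to the exact set of domains, so adding one more name means the new order no longer matches the limited set:
certbot certonly --webroot -w /var/www/html \
  -d caldav.gerastree.at \
  -d spare.example.com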
I have the same issue: certs are issued inside the Docker container when the container starts. It seems like there is no way to resolve it. You can use the staging server, but those certs will not be signed by a trusted CA.
So, if it's an option for you, you could run certbot on the host and pass the certs into the container.
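A minimal sketch of that approach, assuming certbot on the host writes to the standard /etc/letsencrypt directory and that port 80 is free while issuing (otherwise use the webroot or DNS method); the web server config inside the container then has to point at the mounted paths:
# on the host: issue the certificate once
sudo certbot certonly --standalone -d caldav.gerastree.at
# in docker-compose.yml: bind-mount the host certificates read-only
services:
  frontproxy:
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt:ro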
I have an application written in Java (Spring Boot). When I run it manually (with the java -jar command) it works fine without any problems.
But when I use a Docker container (a Docker image based on Alpine, run as a container in Docker Swarm) it doesn't work: my app can't send outbound requests and gets an "SSL handshake failure" error.
I checked it in a --network host container and had the same result. I also built a new Docker image and imported the cert file into the Java cacerts and /etc/ssl/certs in Alpine, but it did not work. In addition, when I run my app manually I don't import any cert file on the host.
Can anyone help in this case?
Thanks,
Hamid
Use network mode: bridge or host to fix this SSL error.
Still getting the error on a Docker overlay network:
version: '3.7'
services:
  httpstest:
    hostname: httpstest
    container_name: httpstest
    image: httpstest-service:latest
    environment:
      - TZ=Asia/Ho_Chi_Minh
    ports:
      - "8288:8080"
networks:
  default:
    name: kaio_io
    driver: bridge
    # driver: overlay
    # driver: host