I am attempting to deploy a traefik app via docker-machine to an EC2 instance using the following commands:
docker-machine scp include/traefik.toml swarm-master:/home/ubuntu/traefik.toml
docker-machine scp include/acme.json swarm-master:/home/ubuntu/acme.json
docker $(docker-machine config swarm-master) run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $PWD/traefik.toml:/traefik.toml \
-v $PWD/acme.json:/acme.json \
-p 80:80 \
-p 443:443 \
-l traefik.frontend.rule=Host:traefik.domain.com \
-l traefik.port=8080 \
--network swarm-net \
--name traefik \
traefik:1.4.3-alpine \
-l DEBUG \
--docker
However, my application hangs after the following output:
traefik.toml 100% 503 29.4KB/s 00:00
acme.json 100% 0 0.0KB/s 00:00
time="2017-11-21T21:39:44Z" level=info msg="Traefik version v1.4.3 built on 2017-11-14_11:14:24AM"
time="2017-11-21T21:39:44Z" level=debug msg="Global configuration loaded {"GraceTimeOut":10000000000,"Debug":false,"CheckNewVersion":true,"AccessLogsFile":"","AccessLog":null,"TraefikLogsFile":"","LogLevel":"DEBUG","EntryPoints":{"http":{"Network":"","Address":":80","TLS":null,"Redirect":null,"Auth":null,"WhitelistSourceRange":null,"Compress":false,"ProxyProtocol":null,"ForwardedHeaders":{"Insecure":true,"TrustedIPs":null}}},"Cluster":null,"Constraints":[],"ACME":null,"DefaultEntryPoints":["http"],"ProvidersThrottleDuration":2000000000,"MaxIdleConnsPerHost":200,"IdleTimeout":0,"InsecureSkipVerify":false,"RootCAs":null,"Retry":null,"HealthCheck":{"Interval":30000000000},"RespondingTimeouts":null,"ForwardingTimeouts":null,"Docker":{"Watch":true,"Filename":"","Constraints":null,"Trace":false,"DebugLogGeneratedTemplate":false,"Endpoint":"unix:///var/run/docker.sock","Domain":"","TLS":null,"ExposedByDefault":true,"UseBindPortIP":false,"SwarmMode":false},"File":null,"Web":null,"Marathon":null,"Consul":null,"ConsulCatalog":null,"Etcd":null,"Zookeeper":null,"Boltdb":null,"Kubernetes":null,"Mesos":null,"Eureka":null,"ECS":null,"Rancher":null,"DynamoDB":null}"
time="2017-11-21T21:39:44Z" level=info msg="Preparing server http &{Network: Address::80 TLS:<nil> Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc42008bae0} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
time="2017-11-21T21:39:44Z" level=info msg="Starting provider *docker.Provider {"Watch":true,"Filename":"","Constraints":null,"Trace":false,"DebugLogGeneratedTemplate":false,"Endpoint":"unix:///var/run/docker.sock","Domain":"","TLS":null,"ExposedByDefault":true,"UseBindPortIP":false,"SwarmMode":false}"
time="2017-11-21T21:39:44Z" level=info msg="Starting server on :80"
time="2017-11-21T21:39:44Z" level=debug msg="Provider connection established with docker 17.05.0-ce (API 1.29)"
time="2017-11-21T21:39:44Z" level=debug msg="Filtering container with empty frontend rule /swarm-agent"
time="2017-11-21T21:39:44Z" level=debug msg="Filtering container with empty frontend rule /swarm-agent-master"
time="2017-11-21T21:39:44Z" level=debug msg="Validation of load balancer method for backend backend-traefik failed: invalid load-balancing method ''. Using default method wrr."
time="2017-11-21T21:39:44Z" level=debug msg="Configuration received from provider docker: {"backends":{"backend-traefik":{"servers":{"server-traefik":{"url":"http://172.31.48.2:8080","weight":0}},"loadBalancer":{"method":"wrr"}}},"frontends":{"frontend-Host-traefik-domain-com-0":{"entryPoints":["http"],"backend":"backend-traefik","routes":{"route-frontend-Host-traefik-domain-com-0":{"rule":"Host:traefik.domain.com"}},"passHostHeader":true,"priority":0,"basicAuth":[],"headers":{}}}}"
time="2017-11-21T21:39:44Z" level=debug msg="Last docker config received more than 2s, OK"
time="2017-11-21T21:39:44Z" level=debug msg="Creating frontend frontend-Host-traefik-domain-com-0"
time="2017-11-21T21:39:44Z" level=debug msg="Wiring frontend frontend-Host-traefik-domain-com-0 to entryPoint http"
time="2017-11-21T21:39:44Z" level=debug msg="Creating route route-frontend-Host-traefik-domain-com-0 Host:traefik.domain.com"
time="2017-11-21T21:39:44Z" level=debug msg="Creating backend backend-traefik"
time="2017-11-21T21:39:44Z" level=debug msg="Creating load-balancer wrr"
time="2017-11-21T21:39:44Z" level=debug msg="Creating server server-traefik at http://172.31.48.2:8080 with weight 0"
time="2017-11-21T21:39:44Z" level=info msg="Server configuration reloaded on :80"
^Ctime="2017-11-21T21:39:59Z" level=info msg="I have to go... interrupt"
time="2017-11-21T21:39:59Z" level=info msg="Stopping server"
time="2017-11-21T21:39:59Z" level=debug msg="Waiting 10s seconds before killing connections on entrypoint http..."
time="2017-11-21T21:39:59Z" level=debug msg="Entrypoint http closed"
time="2017-11-21T21:39:59Z" level=info msg="Server stopped"
time="2017-11-21T21:39:59Z" level=info msg="Shutting down"
time="2017-11-21T21:39:59Z" level=error msg="Error creating server: http: Server closed"
traefik.toml:
debug = false
checkNewVersion = true
logLevel = "DEBUG"
defaultEntryPoints = ["http", "https"]
[web]
address = ":8080"
[web.auth.basic]
users = ["admin:alkdsjfalkdjflakdsjfalkdjfalkdjfaldkjf"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[acme]
email = "name#domain.com"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
onDemand = false
and acme.json has permissions 600.
However, when I run the following (seemingly identical) command after logging into my EC2 instance, Traefik is deployed successfully:
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $PWD/traefik.toml:/traefik.toml \
-v $PWD/acme.json:/acme.json \
-p 80:80 \
-p 443:443 \
-l traefik.frontend.rule=Host:traefik.domain.com \
-l traefik.port=8080 \
--network swarm-net \
--name traefik \
traefik:1.4.3-alpine \
-l DEBUG \
--docker
Any insight into why the deployment fails when done via docker-machine would be greatly appreciated. Thanks!
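Worth noting, as an observation rather than a diagnosis: $PWD is expanded by the local shell before docker is invoked, so in the docker-machine variant the -v flags point at local paths that most likely do not exist on the EC2 host, and the logged global configuration (ACME:null, Web:null, only the :80 entrypoint) looks like Traefik running with its defaults instead of the mounted traefik.toml. Also, neither command passes -d, so the container stays attached to the terminal, which can look like a hang. A sketch of the remote command using the absolute paths the files were scp'd to, run detached:
docker $(docker-machine config swarm-master) run -d \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /home/ubuntu/traefik.toml:/traefik.toml \
-v /home/ubuntu/acme.json:/acme.json \
-p 80:80 \
-p 443:443 \
-l traefik.frontend.rule=Host:traefik.domain.com \
-l traefik.port=8080 \
--network swarm-net \
--name traefik \
traefik:1.4.3-alpine \
-l DEBUG \
--docker
# then follow the output remotely
docker $(docker-machine config swarm-master) logs -f traefik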
I've spent days looking into this; most of the suggestions amount to 'do a minikube delete', which doesn't help at all. I've tried reinstalling both podman and minikube, wiping their config dirs as well. I can get this running just fine on a Linux box, but unfortunately I also have to get it working on macOS.
My guess is that the SSH calls aren't being routed properly to the minikube container, and that it has something to do with the 'no such network' message in the logs.
I'm hoping someone can point me in the right direction, debug-wise.
podman machine init command (the vars are derived from system info calculated by a bash script):
podman machine init --cpus $CPU --memory $MEMORY
podman machine list shows it running:
podman-machine-default qemu 3 hours ago Currently running 14 17.18GB 10.74GB
minikube start command:
minikube start --cpus=4 --memory="8GB" --disk-size 100GB --wait=all --wait-timeout=30m0s --driver=podman --alsologtostderr
logs:
I0215 17:12:46.688961 7783 out.go:297] Setting OutFile to fd 1 ...
I0215 17:12:46.689155 7783 out.go:349] isatty.IsTerminal(1) = true
I0215 17:12:46.689163 7783 out.go:310] Setting ErrFile to fd 2...
I0215 17:12:46.689168 7783 out.go:349] isatty.IsTerminal(2) = true
I0215 17:12:46.689256 7783 root.go:313] Updating PATH: /Users/gdittri1/.minikube/bin
I0215 17:12:46.689626 7783 out.go:304] Setting JSON to false
I0215 17:12:46.735063 7783 start.go:112] hostinfo: {"hostname":"MGC12D93X6MD6R.fbpld77.ford.com","uptime":12801,"bootTime":1644950365,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.5.2","kernelVersion":"20.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"c10f3eab-7543-504c-9b8e-f4022d860bbe"}
W0215 17:12:46.735220 7783 start.go:120] gopshost.Virtualization returned error: not implemented yet
I0215 17:12:46.755846 7783 out.go:176] minikube v1.24.0 on Darwin 11.5.2
minikube v1.24.0 on Darwin 11.5.2
I0215 17:12:46.756025 7783 notify.go:174] Checking for updates...
I0215 17:12:46.756376 7783 driver.go:343] Setting default libvirt URI to qemu:///system
I0215 17:12:47.005804 7783 podman.go:121] podman version: 3.4.4
I0215 17:12:47.025516 7783 out.go:176] Using the podman (experimental) driver based on user configuration
Using the podman (experimental) driver based on user configuration
I0215 17:12:47.025541 7783 start.go:280] selected driver: podman
I0215 17:12:47.025550 7783 start.go:762] validating driver "podman" against <nil>
I0215 17:12:47.025570 7783 start.go:773] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0215 17:12:47.025739 7783 cli_runner.go:115] Run: podman system info --format json
I0215 17:12:47.230347 7783 info.go:285] podman info: {Host:{BuildahVersion:1.23.1 CgroupVersion:v2 Conmon:{Package:conmon-2.1.0-2.fc35.x86_64 Path:/usr/bin/conmon Version:conmon version 2.1.0, commit: } Distribution:{Distribution:fedora Version:35} MemFree:13495173120 MemTotal:16765710336 OCIRuntime:{Name:crun Package:crun-1.4.2-1.fc35.x86_64 Path:/usr/bin/crun Version:crun version 1.4.2
commit: f6fbc8f840df1a414f31a60953ae514fa497c748
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:amd64 Cpus:14 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.15.18-200.fc35.x86_64 Os:linux Rootless:false Uptime:3h 6m 45.7s (Approximately 0.12 days)} Registries:{Search:[docker.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:xfs NativeOverlayDiff:false SupportsDType:true UsingMetacopy:true} ImageStore:{Number:1} RunRoot:/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
I0215 17:12:47.233207 7783 start_flags.go:268] no existing cluster config was found, will generate one from the flags
I0215 17:12:47.233404 7783 start_flags.go:754] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0215 17:12:47.233453 7783 cni.go:93] Creating CNI manager for ""
I0215 17:12:47.233460 7783 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0215 17:12:47.233473 7783 start_flags.go:282] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28#sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:8192 CPUs:4 DiskSize:102400 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:30m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I0215 17:12:47.270914 7783 out.go:176] Starting control plane node minikube in cluster minikube
Starting control plane node minikube in cluster minikube
I0215 17:12:47.270987 7783 cache.go:118] Beginning downloading kic base image for podman with docker
I0215 17:12:47.289875 7783 out.go:176] Pulling base image ...
Pulling base image ...
I0215 17:12:47.290010 7783 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I0215 17:12:47.290077 7783 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.28#sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
I0215 17:12:47.290131 7783 preload.go:148] Found local preload: /Users/gdittri1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4
I0215 17:12:47.290165 7783 cache.go:57] Caching tarball of preloaded images
I0215 17:12:47.290352 7783 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28#sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory
I0215 17:12:47.290400 7783 preload.go:174] Found /Users/gdittri1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0215 17:12:47.290402 7783 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.28#sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory, skipping pull
I0215 17:12:47.290439 7783 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.28#sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in cache, skipping pull
I0215 17:12:47.290454 7783 cache.go:60] Finished verifying existence of preloaded tar for v1.22.3 on docker
I0215 17:12:47.290477 7783 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.28#sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c as a tarball
I0215 17:12:47.290917 7783 profile.go:147] Saving config to /Users/gdittri1/.minikube/profiles/minikube/config.json ...
I0215 17:12:47.290947 7783 lock.go:35] WriteFile acquiring /Users/gdittri1/.minikube/profiles/minikube/config.json: {Name:mk1b4df0865662d5873b3763b81d8aab8be0eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
E0215 17:12:47.291553 7783 cache.go:201] Error downloading kic artifacts: not yet implemented, see issue #8426
I0215 17:12:47.291562 7783 cache.go:206] Successfully downloaded all kic artifacts
I0215 17:12:47.291589 7783 start.go:313] acquiring machines lock for minikube: {Name:mk96344c390830e0b2d6941cbae4706a1f8bae17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0215 17:12:47.291644 7783 start.go:317] acquired machines lock for "minikube" in 47.398µs
I0215 17:12:47.291697 7783 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28#sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:8192 CPUs:4 DiskSize:102400 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:30m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
I0215 17:12:47.291832 7783 start.go:126] createHost starting for "" (driver="podman")
I0215 17:12:47.329349 7783 out.go:203] Creating podman container (CPUs=4, Memory=8192MB) ...
Creating podman container (CPUs=4, Memory=8192MB) ...| I0215 17:12:47.329683 7783 start.go:160] libmachine.API.Create for "minikube" (driver="podman")
I0215 17:12:47.329733 7783 client.go:168] LocalClient.Create starting
I0215 17:12:47.329901 7783 main.go:130] libmachine: Reading certificate data from /Users/gdittri1/.minikube/certs/ca.pem
I0215 17:12:47.329971 7783 main.go:130] libmachine: Decoding PEM data...
I0215 17:12:47.329990 7783 main.go:130] libmachine: Parsing certificate...
I0215 17:12:47.330085 7783 main.go:130] libmachine: Reading certificate data from /Users/gdittri1/.minikube/certs/cert.pem
I0215 17:12:47.330140 7783 main.go:130] libmachine: Decoding PEM data...
I0215 17:12:47.330151 7783 main.go:130] libmachine: Parsing certificate...
I0215 17:12:47.331042 7783 cli_runner.go:115] Run: podman network inspect minikube --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}"
/ W0215 17:12:47.500272 7783 cli_runner.go:162] podman network inspect minikube --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}" returned with exit code 125
I0215 17:12:47.500405 7783 network_create.go:254] running [podman network inspect minikube] to gather additional debugging logs...
I0215 17:12:47.500430 7783 cli_runner.go:115] Run: podman network inspect minikube
\ W0215 17:12:47.669206 7783 cli_runner.go:162] podman network inspect minikube returned with exit code 125
I0215 17:12:47.669240 7783 network_create.go:257] error running [podman network inspect minikube]: podman network inspect minikube: exit status 125
stdout:
[]
stderr:
Error: error inspecting object: no such network "minikube"
I0215 17:12:47.669266 7783 network_create.go:259] output of [podman network inspect minikube]: -- stdout --
[]
-- /stdout --
** stderr **
Error: error inspecting object: no such network "minikube"
** /stderr **
I0215 17:12:47.669341 7783 cli_runner.go:115] Run: podman network inspect podman --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}"
/ I0215 17:12:47.841466 7783 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000e0a8] misses:0}
I0215 17:12:47.841512 7783 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0215 17:12:47.841528 7783 network_create.go:106] attempt to create podman network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0 ...
I0215 17:12:47.841610 7783 cli_runner.go:115] Run: podman network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 --label=created_by.minikube.sigs.k8s.io=true minikube
- I0215 17:12:48.012067 7783 network_create.go:90] podman network minikube 192.168.49.0/24 created
I0215 17:12:48.012106 7783 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0215 17:12:48.012272 7783 cli_runner.go:115] Run: podman ps -a --format {{.Names}}
| I0215 17:12:48.182356 7783 cli_runner.go:115] Run: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
- I0215 17:12:48.353886 7783 oci.go:102] Successfully created a podman volume minikube
I0215 17:12:48.354001 7783 cli_runner.go:115] Run: podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.28 -d /var/lib
- I0215 17:12:49.614378 7783 cli_runner.go:168] Completed: podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.28 -d /var/lib: (1.260310912s)
I0215 17:12:49.614408 7783 oci.go:106] Successfully prepared a podman volume minikube
I0215 17:12:49.614458 7783 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I0215 17:12:49.614474 7783 kic.go:179] Starting extracting preloaded images to volume ...
I0215 17:12:49.614505 7783 cli_runner.go:115] Run: podman info --format "'{{json .SecurityOptions}}'"
I0215 17:12:49.614603 7783 cli_runner.go:115] Run: podman run --rm --entrypoint /usr/bin/tar -v /Users/gdittri1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28 -I lz4 -xf /preloaded.tar -C /extractDir
/ W0215 17:12:49.895058 7783 cli_runner.go:162] podman info --format "'{{json .SecurityOptions}}'" returned with exit code 125
I0215 17:12:49.895243 7783 cli_runner.go:115] Run: podman run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var:exec --memory-swap=8192mb --memory=8192mb --cpus=4 -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.28
W0215 17:12:49.937799 7783 cli_runner.go:162] podman run --rm --entrypoint /usr/bin/tar -v /Users/gdittri1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28 -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
I0215 17:12:49.937856 7783 kic.go:186] Unable to extract preloaded tarball to volume: podman run --rm --entrypoint /usr/bin/tar -v /Users/gdittri1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28 -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
stdout:
stderr:
Error: statfs /Users/gdittri1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4: no such file or directory
| I0215 17:12:50.626660 7783 cli_runner.go:115] Run: podman container inspect minikube --format={{.State.Running}}
- I0215 17:12:50.908780 7783 cli_runner.go:115] Run: podman container inspect minikube --format={{.State.Status}}
/ I0215 17:12:51.156225 7783 cli_runner.go:115] Run: podman exec minikube stat /var/lib/dpkg/alternatives/iptables
| I0215 17:12:51.543705 7783 oci.go:281] the created container "minikube" has a running status.
I0215 17:12:51.543741 7783 kic.go:210] Creating ssh key for kic: /Users/gdittri1/.minikube/machines/minikube/id_rsa...
- I0215 17:12:51.736702 7783 kic_runner.go:187] podman (temp): /Users/gdittri1/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0215 17:12:51.740282 7783 kic_runner.go:272] Run: /usr/local/bin/podman exec -i minikube tee /home/docker/.ssh/authorized_keys
- I0215 17:12:52.133720 7783 cli_runner.go:115] Run: podman container inspect minikube --format={{.State.Status}}
| I0215 17:12:52.356034 7783 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0215 17:12:52.356101 7783 kic_runner.go:114] Args: [podman exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
\ I0215 17:12:52.638946 7783 cli_runner.go:115] Run: podman container inspect minikube --format={{.State.Status}}
/ I0215 17:12:52.833654 7783 machine.go:88] provisioning docker machine ...
I0215 17:12:52.833734 7783 ubuntu.go:169] provisioning hostname "minikube"
I0215 17:12:52.833863 7783 cli_runner.go:115] Run: podman version --format {{.Version}}
| I0215 17:12:53.093665 7783 cli_runner.go:115] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
/ I0215 17:12:53.290195 7783 main.go:130] libmachine: Using SSH client type: native
I0215 17:12:53.290458 7783 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1396ec0] 0x1399fa0 <nil> [] 0s} 127.0.0.1 36327 <nil> <nil>}
I0215 17:12:53.290476 7783 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0215 17:12:53.291434 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58149->127.0.0.1:36327: read: connection reset by peer
\ I0215 17:12:56.293911 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58150->127.0.0.1:36327: read: connection reset by peer
| I0215 17:12:59.296215 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58151->127.0.0.1:36327: read: connection reset by peer
/ I0215 17:13:02.300200 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58152->127.0.0.1:36327: read: connection reset by peer
- I0215 17:13:05.304566 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58153->127.0.0.1:36327: read: connection reset by peer
| I0215 17:13:08.307108 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58154->127.0.0.1:36327: read: connection reset by peer
/ I0215 17:13:11.309256 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58155->127.0.0.1:36327: read: connection reset by peer
- I0215 17:13:14.312617 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58156->127.0.0.1:36327: read: connection reset by peer
\ I0215 17:13:17.317423 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58157->127.0.0.1:36327: read: connection reset by peer
/ I0215 17:13:20.321361 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58158->127.0.0.1:36327: read: connection reset by peer
- I0215 17:13:23.327387 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58159->127.0.0.1:36327: read: connection reset by peer
\ I0215 17:13:26.331340 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58160->127.0.0.1:36327: read: connection reset by peer
/ I0215 17:13:29.333104 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58161->127.0.0.1:36327: read: connection reset by peer
- I0215 17:13:32.337739 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58162->127.0.0.1:36327: read: connection reset by peer
\ I0215 17:13:35.340744 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58163->127.0.0.1:36327: read: connection reset by peer
| I0215 17:13:38.343365 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58164->127.0.0.1:36327: read: connection reset by peer
- I0215 17:13:41.347142 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58165->127.0.0.1:36327: read: connection reset by peer
\ I0215 17:13:44.350661 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58166->127.0.0.1:36327: read: connection reset by peer
| I0215 17:13:47.353094 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58167->127.0.0.1:36327: read: connection reset by peer
- I0215 17:13:50.354277 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58168->127.0.0.1:36327: read: connection reset by peer
\ I0215 17:13:53.357716 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58169->127.0.0.1:36327: read: connection reset by peer
| I0215 17:13:56.361216 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58170->127.0.0.1:36327: read: connection reset by peer
/ I0215 17:13:59.365807 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58171->127.0.0.1:36327: read: connection reset by peer
\ I0215 17:14:02.367380 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58172->127.0.0.1:36327: read: connection reset by peer
| I0215 17:14:05.373500 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58173->127.0.0.1:36327: read: connection reset by peer
/ I0215 17:14:08.376993 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58174->127.0.0.1:36327: read: connection reset by peer
\ I0215 17:14:11.381381 7783 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58175->127.0.0.1:36327: read: connection reset by peer
| I
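Given the repeated 'connection reset by peer' against 127.0.0.1:36327, one thing worth checking by hand is whether the SSH port podman forwarded into the minikube container is reachable at all. A debugging sketch reusing the inspect template, key path and user that already appear in the log above (36327 is just the port from this particular run):
# which host port podman mapped to the container's sshd
podman container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' minikube
# attempt the same SSH connection minikube keeps retrying
ssh -i ~/.minikube/machines/minikube/id_rsa -p 36327 -o StrictHostKeyChecking=no docker@127.0.0.1 true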
Edit #1: It looks like manually creating the minikube network fails as well. Running
podman network create minikube
gives:
Resolving "network" using unqualified-search registries (/etc/containers/registries.conf.d/999-podman-machine.conf)
Trying to pull docker.io/library/network:latest...
Error: initializing source docker://network:latest: reading manifest latest in docker.io/library/network: errors:
denied: requested access to the resource is denied
unauthorized: authentication required
I'm not sure why this is; I've verified that I'm logged into Docker Hub and I can pull my private images from there just fine.
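For what it's worth, that output reads as though the arguments were interpreted as an image name (docker.io/library/network) rather than as the network subcommand, which is the kind of thing a shell alias or wrapper around podman could cause. Purely as a sanity check (a guess, not a diagnosis), it may be worth confirming which binary and which remote connection the command actually hits:
# is podman a plain binary, or an alias/function/wrapper?
type -a podman
podman --version
# which machine connection is the default one the CLI talks to?
podman system connection list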
I was able to start minikube using podman with the following steps:
podman machine init --cpus 2 --memory 2048 --disk-size 20 --image-path next
podman machine start
podman system connection default podman-machine-default-root
minikube start --driver=podman --listen-address=192.168.127.2
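To confirm the connection switch actually took effect before running minikube start, something like this can help (a verification sketch; the connection names are the podman machine defaults):
# the row marked as default should be podman-machine-default-root
podman system connection list
# podman should now be talking to the VM's rootful socket
podman info --format '{{.Host.Security.Rootless}}'
# expected output: false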
I'm trying to set up a wildcard certificate mechanism with Traefik v2.2 and GoDaddy. What I want to do is generate a valid certificate for the URL pattern *.example.org. Here is my docker-compose:
version: '3.7'
services:
traefik:
image: traefik:v2.2
container_name: traefik
restart: always
env_file:
- .provider.env
# .provider.env contains `GODADDY_API_KEY` and `GODADDY_API_SECRET`
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./tls-certificates:/tls-certificates
ports:
# http
- 8080:80
# https
- 443:443
command:
- --api.dashboard=true
- --providers.docker=true
- --providers.docker.exposedbydefault=false
- --providers.docker.network=proxy
- --entrypoints.webinsecure.address=:80
- --entrypoints.websecure.address=:443
# --certificatesresolvers.<name> Certificates resolvers configuration
# ACME V2 supports wildcard certificates.
# Wildcard certificates can only be generated through a DNS-01 challenge.
- --certificatesresolvers.wildcard-godaddy.acme.tlschallenge=true
- --certificatesResolvers.wildcard-godaddy.acme.dnsChallenge.provider=godaddy
- --certificatesResolvers.wildcard-godaddy.acme.dnsChallenge.delayBeforeCheck=0
# Email address used for registration.
- --certificatesresolvers.wildcard-godaddy.acme.email=foo@example.org
# Certificates storage
- --certificatesresolvers.wildcard-godaddy.acme.storage=/tls-certificates/acme.json
networks:
- proxy
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik.entrypoints=webinsecure"
- "traefik.http.routers.traefik.rule=Host(`traefik.example.org`)"
- "traefik.http.middlewares.traefik-auth.basicauth.users=${DASHBOARD_USERNAME}:${DASHBOARD_PASSWORD}"
- "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
- "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
- "traefik.http.routers.traefik-secure.entrypoints=websecure"
- "traefik.http.routers.traefik-secure.rule=Host(`traefik.example.org`)"
- "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
- "traefik.http.routers.traefik-secure.tls=true"
- "traefik.http.routers.traefik-secure.tls.certresolver=wildcard-godaddy"
- "traefik.http.routers.traefik-secure.tls.domains[0].main=example.org"
- "traefik.http.routers.traefik-secure.tls.domains[0].sans=*.example.org"
- "traefik.http.routers.traefik-secure.service=api#internal"
networks:
proxy:
external: true
In my DNS I have an A record for * pointing to my IP address.
However, when I start the compose I get the following error:
traefik | time="2020-04-15T16:40:50Z" level=debug msg="No default certificate, generating one"
traefik | time="2020-04-15T16:40:51Z" level=debug msg="Looking for provided certificate(s) to validate [\"example.org\" \"*.example.org\"]..." providerName=wildcard-godaddy.acme
traefik | time="2020-04-15T16:40:51Z" level=debug msg="Domains [\"example.org\" \"*.example.org\"] need ACME certificates generation for domains \"example.org,*.example.org\"." providerName=wildcard-godaddy.acme
traefik | time="2020-04-15T16:40:51Z" level=debug msg="Loading ACME certificates [example.org *.example.org]..." providerName=wildcard-godaddy.acme
traefik | time="2020-04-15T16:40:51Z" level=debug msg="Building ACME client..." providerName=wildcard-godaddy.acme
traefik | time="2020-04-15T16:40:51Z" level=debug msg="https://acme-v02.api.letsencrypt.org/directory" providerName=wildcard-godaddy.acme
traefik | time="2020-04-15T16:40:51Z" level=debug msg="Using DNS Challenge provider: godaddy" providerName=wildcard-godaddy.acme
traefik | time="2020-04-15T16:40:51Z" level=debug msg="Using TLS Challenge provider." providerName=wildcard-godaddy.acme
traefik | time="2020-04-15T16:40:51Z" level=debug msg="legolog: [INFO] [example.org, *.example.org] acme: Obtaining bundled SAN certificate"
traefik | time="2020-04-15T16:40:52Z" level=debug msg="legolog: [INFO] [*.example.org] AuthURL: https://acme-v02.api.letsencrypt.org/acme/authz-v3/id1"
traefik | time="2020-04-15T16:40:52Z" level=debug msg="legolog: [INFO] [example.org] AuthURL: https://acme-v02.api.letsencrypt.org/acme/authz-v3/id2"
traefik | time="2020-04-15T16:40:52Z" level=debug msg="legolog: [INFO] [*.example.org] acme: use dns-01 solver"
traefik | time="2020-04-15T16:40:52Z" level=debug msg="legolog: [INFO] [example.org] acme: use tls-alpn-01 solver"
traefik | time="2020-04-15T16:40:52Z" level=debug msg="TLS Challenge Present temp certificate for example.org" providerName=acme
traefik | time="2020-04-15T16:40:52Z" level=debug msg="legolog: [INFO] [example.org] acme: Trying to solve TLS-ALPN-01"
traefik | time="2020-04-15T16:40:58Z" level=debug msg="TLS Challenge CleanUp temp certificate for example.org" providerName=acme
traefik | time="2020-04-15T16:40:58Z" level=debug msg="legolog: [INFO] [*.example.org] acme: Preparing to solve DNS-01"
traefik | time="2020-04-15T16:40:58Z" level=debug msg="legolog: [INFO] [*.example.org] acme: Trying to solve DNS-01"
traefik | time="2020-04-15T16:40:58Z" level=debug msg="legolog: [INFO] [*.example.org] acme: Checking DNS record propagation using [127.0.0.11:53]"
traefik | time="2020-04-15T16:40:58Z" level=debug msg="legolog: [INFO] Wait for propagation [timeout: 2m0s, interval: 2s]"
traefik | time="2020-04-15T16:40:58Z" level=debug msg="legolog: [INFO] [*.example.org] acme: Waiting for DNS record propagation."
traefik | time="2020-04-15T16:41:00Z" level=debug msg="legolog: [INFO] [*.example.org] acme: Waiting for DNS record propagation."
traefik | time="2020-04-15T16:41:02Z" level=debug msg="legolog: [INFO] [*.example.org] acme: Waiting for DNS record propagation."
traefik | time="2020-04-15T16:41:04Z" level=debug msg="legolog: [INFO] [*.example.org] acme: Waiting for DNS record propagation."
traefik | time="2020-04-15T16:41:06Z" level=debug msg="legolog: [INFO] [*.example.org] acme: Waiting for DNS record propagation."
traefik | time="2020-04-15T16:41:08Z" level=debug msg="legolog: [INFO] [*.example.org] acme: Waiting for DNS record propagation."
traefik | time="2020-04-15T16:41:10Z" level=debug msg="legolog: [INFO] [*.example.org] acme: Waiting for DNS record propagation."
traefik | time="2020-04-15T16:41:12Z" level=debug msg="legolog: [INFO] [*.example.org] acme: Waiting for DNS record propagation."
traefik | time="2020-04-15T16:41:14Z" level=debug msg="legolog: [INFO] [*.example.org] acme: Waiting for DNS record propagation."
traefik | time="2020-04-15T16:41:21Z" level=debug msg="legolog: [INFO] [*.example.org] acme: Cleaning DNS-01 challenge"
traefik | time="2020-04-15T16:41:22Z" level=debug msg="legolog: [INFO] Deactivating auth: https://acme-v02.api.letsencrypt.org/acme/authz-v3/id1"
traefik | time="2020-04-15T16:41:22Z" level=debug msg="legolog: [INFO] Unable to deactivate the authorization: https://acme-v02.api.letsencrypt.org/acme/authz-v3/id1"
traefik | time="2020-04-15T16:41:22Z" level=debug msg="legolog: [INFO] Deactivating auth: https://acme-v02.api.letsencrypt.org/acme/authz-v3/id2"
traefik | time="2020-04-15T16:41:22Z" level=debug msg="legolog: [INFO] Unable to deactivate the authorization: https://acme-v02.api.letsencrypt.org/acme/authz-v3/id2"
traefik | time="2020-04-15T16:41:22Z" level=error msg="Unable to obtain ACME certificate for domains \"example.org,*.example.org\" : unable to generate a certificate for the domains [example.org *.example.org]: acme: Error -> One or more domains had a problem:\n[*.example.org] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: During secondary validation: Incorrect TXT record \"null\" found at _acme-challenge.example.org, url: \n[example.org] acme: error: 400 :: urn:ietf:params:acme:error:connection :: Connection refused, url: \n" providerName=wildcard-godaddy.acme
I do not understand whether I'm misconfiguring something or there's a problem on the Let's Encrypt / GoDaddy side.
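Since the failure is an 'Incorrect TXT record "null" found at _acme-challenge.example.org' during secondary validation, it can help to watch that TXT record directly while the challenge runs, both at GoDaddy's authoritative servers and at a public resolver. A debugging sketch (the GoDaddy nameserver shown is an assumption; use the NS records of the actual zone):
# find the zone's authoritative nameservers
dig +short NS example.org
# what the authoritative server returns during the challenge (ns01.domaincontrol.com is a placeholder)
dig +short TXT _acme-challenge.example.org @ns01.domaincontrol.com
# what the rest of the world sees
dig +short TXT _acme-challenge.example.org @1.1.1.1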
Edit:
On port 80 I have another nginx instance up & running
It turned out to be a bug (fixed in v2.2.1). See: https://github.com/go-acme/lego/issues/1113
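Since the fix ships with Traefik v2.2.1, one way to make sure the patched release is picked up is to pin the image to that tag instead of the floating v2.2 tag and recreate the proxy (a sketch assuming the service is named traefik as in the compose file above):
# after changing image: traefik:v2.2 to image: traefik:v2.2.1 in docker-compose.yml
docker-compose pull traefik
docker-compose up -d traefik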
I am facing an issue with a Docker container. When I execute docker-compose up to start the application, the Postgres container is not starting.
The error I get after docker-compose up:
/usr/local/bundle/gems/activerecord-4.2.0/lib/active_record/connection_adapters/postgresql_adapter.rb:651:in `initialize': could not translate host name "db" to address: Name or service not known (PG::ConnectionBad)
This now happens frequently. I have tried a few things, such as adding ports for the db container (i.e. 5432:5432). I also stop and start the db container so that the connection gets re-established, but it is not working.
Application details:
Rails version: 4.2.0, Ruby version: 2.2.0
docker-compose.yml
version: '3.7'
services:
selenium:
image: selenium/standalone-chrome-debug:3.141.59-krypton
ports: ['4444:4444', '5900:5900']
logging:
driver: none
redis:
image: redis:3.0.0
elastic:
image: elasticsearch:1.5.2
db:
image: postgres:9.3.10
volumes:
- ./tmp/db:/var/lib/postgresql/data
- .:/home
XYZ:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
stdin_open: true
tty: true
volumes:
- XYZ-sync:/home:nocopy
ports:
- "3000:3000"
depends_on:
- db
- redis
- elastic
- selenium
environment:
- REDIS_URL=redis://redis:6379/0
- ELASTICSEARCH_URL=elastic://elastic:9200/0
- SELENIUM_HOST=selenium
- SELENIUM_PORT=4444
- TEST_APP_HOST=XYZ
- TEST_PORT=3000
db log
db_1 | LOG: database system was shut down at 2019-09-10 07:37:08 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
db_1 | LOG: received smart shutdown request
db_1 | LOG: autovacuum launcher shutting down
db_1 | LOG: shutting down
db_1 | LOG: database system is shut down
db_1 | LOG: database system was shut down at 2019-09-10 07:37:50 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
db_1 | LOG: database system was interrupted; last known up at 2019-09-10 07:38:31 UTC
db_1 | LOG: received smart shutdown request
db_1 | LOG: database system was interrupted; last known up at 2019-09-10 07:38:31 UTC
db_1 | LOG: database system was not properly shut down; automatic recovery in progress
db_1 | LOG: record with zero length at 0/1D8F0120
db_1 | LOG: redo is not required
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: autovacuum launcher started
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: stats_timestamp 2019-09-10 08:02:39.288642+00 is later than collector's time 2019-09-10 08:02:39.189551+00 for database 0
db_1 | LOG: database system was interrupted; last known up at 2019-09-10 08:18:02 UTC
db_1 | FATAL: the database system is starting up
docker-compose ps output
xyz_db_1 /docker-entrypoint.sh postgres Up 5432/tcp
xyz_elastic_1 /docker-entrypoint.sh elas ... Up 9200/tcp, 9300/tcp
xyz_xyz_1 bash -c rm -f tmp/pids/ser ... Exit 1
xyz_redis_1 /entrypoint.sh redis-server Up 6379/tcp
xyz_selenium_1 /opt/bin/entry_point.sh Up 0.0.0.0:4444->4444/tcp, 0.0.0.0:5900->5900/tcp
database.yml
default: &default
adapter: postgresql
encoding: unicode
pool: 5
username: postgres
password:
host: db
development:
<<: *default
database: XYZ_development
test:
<<: *default
database: XYZ_test
development_migrate:
adapter: mysql2
encoding: utf8
database: xyz_ee
username: root
password:
host: localhost
pool: 5
Any help will be appreciated.
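When the error is 'could not translate host name "db"', it generally means the web container could not resolve the service name on the compose network at that moment, for example because it raced ahead of the db container or ended up detached from the network. One way to check resolution and reachability from inside the web service (a debugging sketch; it assumes the image is glibc-based so getent is available, and reuses the service names from the compose file above):
# resolve the service name from a throwaway web container
docker-compose run --rm XYZ getent hosts db
# confirm something is accepting connections on db:5432 from that network
docker-compose run --rm XYZ bash -c '(</dev/tcp/db/5432) 2>/dev/null && echo db reachable'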
I resolved my issue with the help of @jayDosrsey's suggestion.
The DB container starts before the main web container, but it is not yet ready to accept connections, so the web container always fails to start and I have to restart it.
I resolved this by adding a wait condition before starting the Rails server:
XYZ:
build: .
command: bash -c "while !</dev/tcp/db/5432; do sleep 1; done; rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
...
Now I am able to start the containers in sequence.
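For anyone puzzled by the while !</dev/tcp/db/5432 part: it relies on bash's built-in /dev/tcp pseudo-device, where the redirection only succeeds once something accepts a TCP connection on db:5432, so the loop effectively sleeps until Postgres is listening. The same check can be run by hand against the stack (a sketch reusing the service names above):
docker-compose run --rm XYZ bash -c 'while ! (</dev/tcp/db/5432) 2>/dev/null; do echo waiting for db; sleep 1; done; echo db is up'
Note that this only waits for the TCP port, not for Postgres to finish its own startup/recovery, but in this case it was enough to fix the ordering problem.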
I have a set of Meteor apps running as a Docker stack along with a Traefik proxy, Mongo, and an HTTP server. I had to do some redirection to route Traefik to each individual app so that client requests are handled properly with respect to each Meteor ROOT_URL. I do not understand the Traefik log output telling me 'Unable to obtain ACME certificate for domains ...' because of '... detected thanks to rule \"Host:myhost.mydomain.com;PathPrefix:/app2{id:[0-9]?}\"'. Can someone please help me understand this log output? I am including the sanitized debug log and the sanitized traefik.toml and docker-compose.yml files. I don't think this is a bug; it is probably a misconfiguration.
I cannot use the DNS challenge because I do not have control over the DNS server. I have tried several configuration options. I suspect it has to do with the PathPrefix in the Host rule, but I don't think I understand enough about ACME to know how to change it properly.
Traefik.toml
logLevel = "DEBUG"
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.dashboard]
address = ":8090"
[entryPoints.dashboard.auth]
[entryPoints.dashboard.auth.basic]
users = ["admin:$2y$05$rd9MRJG/w0ugxIzmYy3L8.WpRheZfzPTTm17y.zq3cHKtZvMQ4OdW"]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[entryPoints.https.redirect]
regex = "^(https://ip-205-156-8-94.ec2.internal)/?$"
replacement = "$1/"
permanent = true
[api]
entrypoint="dashboard"
[acme]
caServer = "https://acme-v02.api.letsencrypt.org/directory"
email = "myemail#mydomain.com"
storage = "acme.json"
OnHostRule = true
entryPoint = "https"
[acme.tlsChallenge]
[docker]
domain = "myhost.mydomain.com"
watch = true
network = "web"
exposedbydefualt = false
[traefikLog]
filePath = "/logs/traefik.log"
[accessLog]
filePath = "/logs/access.log"
***** docker-compose.yml *****
version: "3.2"
networks:
web:
external: true
backend:
external: false
services:
traefik:
image: traefik
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
networks:
- web
ports:
- "443:443"
- "80:80"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /home/myhome/container_deployment/traefik.toml:/traefik.toml
- /home/myhome/container_deployment/logs:/logs
- /home/myhome/container_deployment/acme.json:/acme.json
labels:
- traefik.frontend.rule=Host:myhost.mydomain.com;PathPrefixStrip:/proxy
- traefik.port=8090
mats-http:
image: myapps/production:mats-http
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 20s
volumes:
- /home/myhome/container_deployment/web:/web
labels:
- traefik.backend=mats-http/index.html
- traefik.frontend.rule=Host:myhost.mydomain.com;PathPrefixStrip:/
- traefik.docker.network=web
- traefik.port=8080
networks:
- web
mongo:
image: mongo
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 30s
command: -nojournal
ports:
- "27017:27017"
volumes:
- /home/myhome/mongodata:/data/db
networks:
- backend
- web
app1:
image: myapps/production:app1-2.2.0
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 60s
environment:
- DELAY=6
- ROOT_URL=https://myhost.mydomain.com/app1
volumes:
- /home/myhome/container_deployment/settings:/usr/app/settings
depends_on:
- mongo
labels:
- traefik.backend=app1
- traefik.frontend.rule=Host:myhost.mydomain.com;PathPrefix:/app1{id:[0-9]?}
- traefik.docker.network=web
- traefik.port=80
networks:
- web
- backend
app2:
image: myapps/production:app2-2.2.0
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 60s
environment:
- DELAY=6
- ROOT_URL=https://myhome.mydomain.com/app2
volumes:
- /home/myhome/container_deployment/settings:/usr/app/settings
depends_on:
- mongo
labels:
- traefik.backend=app2
- traefik.frontend.rule=Host:myhome.mydomain.com;PathPrefix:/app2{id:[0-9]?}
- traefik.docker.network=web
- traefik.port=80
networks:
- web
- backend
app3:
image: myapps/production:app3-2.2.0
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 60s
environment:
- DELAY=6
- ROOT_URL=https://myhome.mydomain.com/app3
volumes:
- /home/myhome/container_deployment/settings:/usr/app/settings
depends_on:
- mongo
labels:
- traefik.backend=app3
- traefik.frontend.rule=Host:myhome.mydomain.com;PathPrefix:/app3{id:[0-9]?}
- traefik.docker.network=web
- traefik.port=80
networks:
- web
- backend
***** truncated traefik debug log file *****
time="2019-07-11T16:03:38Z" level=info msg="Traefik version v1.7.12 built on 2019-05-29_07:35:02PM"
...
...
time="2019-07-11T16:03:38Z" level=debug msg="Configuration received from provider docker: {\"backends\":{\"backend-mats-http-index-html\":{\"servers\":{\"server-matsStack-mats-http-1-vpaeunxj6xif75dt61peb62an-695b347dcd588d1d0b320f01e5644738\":{\"url\":\"http://10.0.45.14:8080\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"backend-matsStack-mongo-1-kjoekf19fyw5ru0fr1azazzu5\":{\"servers\":{\"server-matsStack-mongo-1-kjoekf19fyw5ru0fr1azazzu5-e29723ab5c75dde0eaf988caf77e50b2\":{\"url\":\"http://10.0.45.3:27017\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"backend-matsStack-traefik-1-o1t2x6w1a0i3qu9nqwx6x67x1\":{\"servers\":{\"server-matsStack-traefik-1-o1t2x6w1a0i3qu9nqwx6x67x1-546c661a91789b6ce7fef697cc38e588\":{\"url\":\"http://10.0.45.12:8090\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"backend-app3\":{\"servers\":{\"server-matsStack-app3-1-71g3c7hr2qz1frc5paqn1y52i-382f1bea7ec466d09871b7dff5c5a47c\":{\"url\":\"http://10.0.45.8:80\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"backend-app2\":{\"servers\":{\"server-matsStack-app2-1-dvj9reft0nql50mp4jqxb9mx6-318c26e13ba26230fc29459a7f72c3aa\":{\"url\":\"http://10.0.45.10:80\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"backend-app1\":{\"servers\":{\"server-matsStack-app1-1-nk6ax8rfo9d3tly953huzrvb0-cb78098740da4a0710dfc1b9067e7842\":{\"url\":\"http://10.0.45.6:80\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}}},\"frontends\":{\"frontend-Host-matsStack-mongo-1-kjoekf19fyw5ru0fr1azazzu5-myhost.mydomain.com-4\":{\"entryPoints\":[\"http\",\"https\"],\"backend\":\"backend-matsStack-mongo-1-kjoekf19fyw5ru0fr1azazzu5\",\"routes\":{\"route-frontend-Host-matsStack-mongo-1-kjoekf19fyw5ru0fr1azazzu5-myhost.mydomain.com-4\":{\"rule\":\"Host:matsStack-mongo.1.kjoekf19fyw5ru0fr1azazzu5.myhost.mydomain.com\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"frontend-Host-myhost.mydomain.com-PathPrefix-app3-id-0-9-3\":{\"entryPoints\":[\"http\",\"https\"],\"backend\":\"backend-app3\",\"routes\":{\"route-frontend-Host-myhost.mydomain.com-PathPrefix-app3-id-0-9-3\":{\"rule\":\"Host:myhost.mydomain.com;PathPrefix:/app3{id:[0-9]?}\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"frontend-Host-myhost.mydomain.com-PathPrefix-app2-id-0-9-2\":{\"entryPoints\":[\"http\",\"https\"],\"backend\":\"backend-app2\",\"routes\":{\"route-frontend-Host-myhost.mydomain.com-PathPrefix-app2-id-0-9-2\":{\"rule\":\"Host:myhost.mydomain.com;PathPrefix:/app2{id:[0-9]?}\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"frontend-Host-myhost.mydomain.com-PathPrefix-app1-id-0-9-5\":{\"entryPoints\":[\"http\",\"https\"],\"backend\":\"backend-app1\",\"routes\":{\"route-frontend-Host-myhost.mydomain.com-PathPrefix-app1-id-0-9-5\":{\"rule\":\"Host:myhost.mydomain.com;PathPrefix:/app1{id:[0-9]?}\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"frontend-Host-myhost.mydomain.com-PathPrefixStrip-0\":{\"entryPoints\":[\"http\",\"https\"],\"backend\":\"backend-mats-http-index-html\",\"routes\":{\"route-frontend-Host-myhost.mydomain.com-PathPrefixStrip-0\":{\"rule\":\"Host:myhost.mydomain.com;PathPrefixStrip:/\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"frontend-Host-myhost.mydomain.com-PathPrefixStrip-proxy-1\":{\"entryPoints\":[\"http\",\"https\"],\"backend\":\"backend-matsStack-traefik-1-o1t2x6w1a0i3qu9nqwx6x67x1\",\"routes\":{\"route-frontend-Host-myhost.mydomain.com-PathPrefixStrip-proxy-1\":{\"rule\":\"Host:myhost.mydomain.com;PathPrefixStrip:/p
roxy\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null}}}"
time="2019-07-11T16:03:39Z" level=info msg="Server configuration reloaded on :80"
time="2019-07-11T16:03:39Z" level=info msg="Server configuration reloaded on :80"
...
...
time="2019-07-11T16:03:39Z" level=info msg="Server configuration reloaded on :443"
time="2019-07-11T16:03:39Z" level=info msg="Server configuration reloaded on :8090"
time="2019-07-11T16:03:39Z" level=debug msg="Try to challenge certificate for domain [myhost.mydomain.com] founded in Host rule"
time="2019-07-11T16:03:39Z" level=debug msg="Try to challenge certificate for domain [myhost.mydomain.com] founded in Host rule"
time="2019-07-11T16:03:39Z" level=debug msg="Try to challenge certificate for domain [myhost.mydomain.com] founded in Host rule"
time="2019-07-11T16:03:39Z" level=debug msg="Try to challenge certificate for domain [matsstack-mongo.1.kjoekf19fyw5ru0fr1azazzu5.myhost.mydomain.com] founded in Host rule"
time="2019-07-11T16:03:39Z" level=debug msg="Try to challenge certificate for domain [myhost.mydomain.com] founded in Host rule"
time="2019-07-11T16:03:39Z" level=debug msg="Try to challenge certificate for domain [myhost.mydomain.com] founded in Host rule"
time="2019-07-11T16:03:39Z" level=debug msg="Looking for provided certificate(s) to validate [\"myhost.mydomain.com\"]..."
time="2019-07-11T16:03:39Z" level=debug msg="Looking for provided certificate(s) to validate [\"myhost.mydomain.com\"]..."
time="2019-07-11T16:03:39Z" level=debug msg="Domains [\"myhost.mydomain.com\"] need ACME certificates generation for domains \"myhost.mydomain.com\"."
time="2019-07-11T16:03:39Z" level=debug msg="Domains [\"myhost.mydomain.com\"] need ACME certificates generation for domains \"myhost.mydomain.com\"."
time="2019-07-11T16:03:39Z" level=debug msg="Looking for provided certificate(s) to validate [\"myhost.mydomain.com\"]..."
time="2019-07-11T16:03:39Z" level=debug msg="Domains [\"myhost.mydomain.com\"] need ACME certificates generation for domains \"myhost.mydomain.com\"."
time="2019-07-11T16:03:39Z" level=debug msg="Looking for provided certificate(s) to validate [\"myhost.mydomain.com\"]..."
time="2019-07-11T16:03:39Z" level=debug msg="Domains [\"myhost.mydomain.com\"] need ACME certificates generation for domains \"myhost.mydomain.com\"."
time="2019-07-11T16:03:39Z" level=debug msg="Looking for provided certificate(s) to validate [\"matsstack-mongo.1.kjoekf19fyw5ru0fr1azazzu5.myhost.mydomain.com\"]..."
time="2019-07-11T16:03:39Z" level=debug msg="Domains [\"matsstack-mongo.1.kjoekf19fyw5ru0fr1azazzu5.myhost.mydomain.com\"] need ACME certificates generation for domains \"matsstack-mongo.1.kjoekf19fyw5ru0fr1azazzu5.myhost.mydomain.com\"."
time="2019-07-11T16:03:39Z" level=debug msg="Loading ACME certificates [myhost.mydomain.com]..."
time="2019-07-11T16:03:39Z" level=info msg="The key type is empty. Use default key type 4096."
time="2019-07-11T16:03:39Z" level=debug msg="Looking for provided certificate(s) to validate [\"myhost.mydomain.com\"]..."
time="2019-07-11T16:03:39Z" level=debug msg="No ACME certificate generation required for domains [\"myhost.mydomain.com\"]."
time="2019-07-11T16:03:39Z" level=debug msg="Loading ACME certificates [myhost.mydomain.com]..."
time="2019-07-11T16:03:39Z" level=debug msg="Loading ACME certificates [myhost.mydomain.com]..."
time="2019-07-11T16:03:39Z" level=debug msg="Loading ACME certificates [myhost.mydomain.com]..."
time="2019-07-11T16:03:39Z" level=debug msg="Loading ACME certificates [matsstack-mongo.1.kjoekf19fyw5ru0fr1azazzu5.myhost.mydomain.com]..."
time="2019-07-11T16:03:40Z" level=debug msg="Building ACME client..."
time="2019-07-11T16:03:40Z" level=debug msg="https://acme-v02.api.letsencrypt.org/directory"
time="2019-07-11T16:03:40Z" level=error msg="Unable to obtain ACME certificate for domains \"myhost.mydomain.com\" detected thanks to rule \"Host:myhost.mydomain.com;PathPrefixStrip:/\" : cannot get ACME client get directory at 'https://acme-v02.api.letsencrypt.org/directory': Get https://acme-v02.api.letsencrypt.org/directory: dial tcp 23.55.128.36:443: connect: connection refused"
time="2019-07-11T16:03:40Z" level=debug msg="Building ACME client..."
time="2019-07-11T16:03:40Z" level=debug msg="https://acme-v02.api.letsencrypt.org/directory"
time="2019-07-11T16:03:40Z" level=error msg="Unable to obtain ACME certificate for domains \"myhost.mydomain.com\" detected thanks to rule \"Host:myhost.mydomain.com;PathPrefix:/app1{id:[0-9]?}\" : cannot get ACME client get directory at 'https://acme-v02.api.letsencrypt.org/directory': Get https://acme-v02.api.letsencrypt.org/directory: dial tcp 23.55.128.36:443: connect: connection refused"
ime="2019-07-11T16:03:40Z" level=debug msg="Building ACME client..."
time="2019-07-11T16:03:40Z" level=debug msg="https://acme-v02.api.letsencrypt.org/directory"
time="2019-07-11T16:03:40Z" level=error msg="Unable to obtain ACME certificate for domains \"myhost.mydomain.com\" detected thanks to rule \"Host:myhost.mydomain.com;PathPrefix:/app3{id:[0-9]?}\" : cannot get ACME client get directory at 'https://acme-v02.api.letsencrypt.org/directory': Get https://acme-v02.api.letsencrypt.org/directory: dial tcp 23.55.128.36:443: connect: connection refused"
time="2019-07-11T16:03:40Z" level=debug msg="Building ACME client..."
time="2019-07-11T16:03:40Z" level=debug msg="https://acme-v02.api.letsencrypt.org/directory"
time="2019-07-11T16:03:40Z" level=error msg="Unable to obtain ACME certificate for domains \"myhost.mydomain.com\" detected thanks to rule \"Host:myhost.mydomain.com;PathPrefix:/app2{id:[0-9]?}\" : cannot get ACME client get directory at 'https://acme-v02.api.letsencrypt.org/directory': Get https://acme-v02.api.letsencrypt.org/directory: dial tcp 23.55.128.36:443: connect: connection refused"
time="2019-07-11T16:03:40Z" level=debug msg="Building ACME client..."
time="2019-07-11T16:03:40Z" level=debug msg="https://acme-v02.api.letsencrypt.org/directory"
time="2019-07-11T16:03:40Z" level=error msg="Unable to obtain ACME certificate for domains \"matsstack-mongo.1.kjoekf19fyw5ru0fr1azazzu5.myhost.mydomain.com\" detected thanks to rule \"Host:matsStack-mongo.1.kjoekf19fyw5ru0fr1azazzu5.myhost.mydomain.com\" : cannot get ACME client get directory at 'https://acme-v02.api.letsencrypt.org/directory': Get https://acme-v02.api.letsencrypt.org/directory: dial tcp 23.55.128.36:443: connect: connection refused"
I expected the certificates to be obtained and the ACME challenge to succeed; instead, SSL does not work properly.
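Since every failure ends in "dial tcp 23.55.128.36:443: connect: connection refused", the first thing worth confirming is that outbound HTTPS to the ACME endpoint works at all, independently of Traefik. A minimal check, assuming curl is installed on the Docker host and using the public curlimages/curl image (the network name below is a placeholder for whatever network the proxy is attached to):
# From the Docker host itself:
curl -v https://acme-v02.api.letsencrypt.org/directory
# From a throwaway container on the same Docker network:
docker run --rm --network my-proxy-net curlimages/curl -v https://acme-v02.api.letsencrypt.org/directory
If both of these are refused as well, the problem is egress filtering or an intercepting proxy on the host/network side rather than the Traefik or ACME configuration.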
I am trying Docker on CoreOS on EC2.
What I want to do is:
Run a private Docker registry container
Run other containers after pulling their images from the private registry
Initial Configuration
My cloud-config.yml is like this:
#cloud-config
coreos:
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
    - name: docker.service
      command: start
      drop-ins:
        - name: 50-insecure-registry.conf
          content: |
            [Service]
            Environment=DOCKER_OPTS='--insecure-registry="localhost:5000"'
    - name: private-docker-registry.service
      command: start
      runtime: true
      content: |
        [Unit]
        Description=Docker Private Registry
        After=docker.service
        Requires=docker.service
        Requires=network-online.target
        After=network-online.target
        [Service]
        ExecStartPre=/usr/bin/docker pull registry:latest
        ExecStart=/usr/bin/docker run --name private-docker-registry --privileged -e SETTINGS_FLAVOR=s3 -e AWS_BUCKET=bucket -e AWS_KEY=awskey -e AWS_SECRET=awssecret -e SEARCH_BACKEND=sqlalchemy -p 5000:5000 registry:latest
    - name: myservice.service
      command: start
      runtime: true
      content: |
        [Unit]
        Description=My Service
        After=private-docker-registry.service
        Requires=private-docker-registry.service
        Requires=network-online.target
        After=network-online.target
        [Service]
        ExecStartPre=/usr/bin/docker pull localhost:5000/myservice:latest
        ExecStart=/usr/bin/docker run --name myservice localhost:5000/myservice:latest
myservice.service fails
The problem here is:
myservice.service fails even though the private registry container is running successfully.
When I log in to the machine, it shows the following message:
Failed Units: 1
myservice.service
Command journalctl -u private-docker-registry.service shows this:
Jul 24 07:30:25 docker[830]: [2015-07-24 07:30:25 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
Command journalctl -u myservice.service shows the following log:
Jul 24 07:30:25 systemd[1]: Starting My Service...
Jul 24 07:30:25 docker[836]: time="2015-07-24T07:30:25Z" level=fatal msg="Error response from daemon: v1 ping attempt failed with error: Get http://localhost:5000/v1/_ping: dial tcp 127.0.0.1:5000: connection refused"
Jul 24 07:30:25 systemd[1]: myservice.service: Control process exited, code=exited status=1
Jul 24 07:30:25 systemd[1]: Failed to start My Service.
Jul 24 07:30:25 systemd[1]: myservice.service: Unit entered failed state.
Jul 24 07:30:25 systemd[1]: myservice.service: Failed with result 'exit-code'.
However, I can run the myservice container manually (after a few minutes):
docker run --name myservice localhost:5000/myservice:latest
My assumption is:
Pulling the myservice image fails because myservice.service tries to pull it immediately after the private registry starts listening.
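One way to test this assumption (a sketch; it assumes systemctl and curl are available on the host, which they normally are on CoreOS):
# When did each unit become active?
systemctl show -p ActiveEnterTimestamp private-docker-registry.service myservice.service
# Is the registry actually answering on port 5000 yet?
curl -sf http://localhost:5000/ && echo "registry answering" || echo "registry not ready"
If the two timestamps fall in the same second but the curl check only starts succeeding a little later, the race is confirmed.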
Trial & Error
Based on my assumption above, I added a wait-for-registry.service that simply waits 2 minutes after the private registry starts.
#cloud-config
coreos:
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
    - name: docker.service
      command: start
      drop-ins:
        - name: 50-insecure-registry.conf
          content: |
            [Service]
            Environment=DOCKER_OPTS='--insecure-registry="localhost:5000"'
    - name: private-docker-registry.service
      command: start
      runtime: true
      content: |
        [Unit]
        Description=Docker Private Registry
        After=docker.service
        Requires=docker.service
        Requires=network-online.target
        After=network-online.target
        [Service]
        ExecStartPre=/usr/bin/docker pull registry:latest
        ExecStart=/usr/bin/docker run --name private-docker-registry --privileged -e SETTINGS_FLAVOR=s3 -e AWS_BUCKET=bucket -e AWS_KEY=awskey -e AWS_SECRET=awssecret -e SEARCH_BACKEND=sqlalchemy -p 5000:5000 registry:latest
    - name: wait-for-registry.service
      command: start
      runtime: true
      content: |
        [Unit]
        Description=Wait Until Private Registry is Ready
        After=private-docker-registry.service
        Requires=private-docker-registry.service
        [Service]
        ExecStart=/usr/bin/sleep 120
    - name: myservice.service
      command: start
      runtime: true
      content: |
        [Unit]
        Description=My Service
        After=wait-for-registry.service
        After=private-docker-registry.service
        Requires=private-docker-registry.service
        Requires=network-online.target
        After=network-online.target
        [Service]
        ExecStartPre=/usr/bin/docker pull localhost:5000/myservice:latest
        ExecStart=/usr/bin/docker run --name myservice localhost:5000/myservice:latest
But this causes the same problem.
Command journalctl -u private-docker-registry.service shows this:
Jul 24 08:23:38 docker[838]: [2015-07-24 08:23:38 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
Command journalctl -u wait-for-registry.service shows this:
Jul 24 08:23:37 systemd[1]: Started Wait Until Private Registry is Ready.
Jul 24 08:23:37 systemd[1]: Starting Wait Until Private Registry is Ready...
Command journalctl -u myservice.service shows this:
Jul 24 08:23:37 systemd[1]: Starting My Service...
Jul 24 08:23:37 docker[847]: time="2015-07-24T08:23:37Z" level=fatal msg="Error response from daemon: v1 ping attempt failed with error: Get http://localhost:5000/v1/_ping: dial tcp 127.0.0.1
Jul 24 08:23:37 systemd[1]: myservice.service: Control process exited, code=exited status=1
Jul 24 08:23:37 systemd[1]: Failed to start My Service.
Jul 24 08:23:37 systemd[1]: myservice.service: Unit entered failed state.
Jul 24 08:23:37 systemd[1]: myservice.service: Failed with result 'exit-code'.
It seems that the sleep has no effect.
Question
How can I make it wait until the private registry is available?
Any hints or suggestions welcome!
Thank you:)
systemd unit files are tricky :-)
I think you just about have it. I am no expert, but I will try to explain what I think is happening.
First, I think you might want to add Type=oneshot (and RemainAfterExit=true) to your wait unit:
- name: wait-for-registry.service
  command: start
  runtime: true
  content: |
    [Unit]
    Description=Wait Until Private Registry is Ready
    After=private-docker-registry.service
    Requires=private-docker-registry.service
    [Service]
    ExecStart=/usr/bin/sleep 120
    RemainAfterExit=true
    Type=oneshot
The explanation would be that /usr/bin/sleep 120 is started, and since it has started, the next unit in the chain (your myservice.service) is started as well. By changing it to a oneshot, systemd waits until it has finished before starting dependents. I am guessing here, though, because much of the unit stuff is trial and error for me.
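If you want to verify the ordering after a reboot, the standard systemd tools should show whether myservice.service now actually waits for the full two minutes (a sketch, nothing CoreOS-specific):
systemd-analyze critical-chain myservice.service
systemctl show -p ActiveEnterTimestamp wait-for-registry.service myservice.service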
I do have a similar construct in my unit files. I don't think you really want 'sleep', though; that is a hack. I think you really want to wait until port 5000 is answering, right? If that is the case, you can replace the sleep with:
ExecStart=/usr/bin/bash /opt/bin/waiter.sh
Then, towards the top of the cloud-config:
write_files:
  - path: /opt/bin/waiter.sh
    permissions: 0755
    owner: root
    content: |
      #! /usr/bin/bash
      until curl -s http://127.0.0.1:5000/; do echo waiting waiter.sh; sleep 2; done
Or something similar. Wait until there is something at that port before continuing.
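If curl ever turns out not to be available on the host, a variant of the same idea using bash's built-in /dev/tcp works too (a sketch; it only checks that the port accepts a TCP connection, not that the registry is fully ready to serve pulls):
#! /usr/bin/bash
until (exec 3<>/dev/tcp/127.0.0.1/5000) 2>/dev/null; do echo "waiting for registry port 5000"; sleep 2; done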
-g