How do I deploy a Docker app without publishing it? - windows

How do I deploy a Docker app without publishing it to Docker Hub? I don't want to create a username and password on their service (they just want to trap flies in their ecosystem), and I don't think I will use the swarm part of Docker. Besides, it sounds very insecure to publish your closed-source code on a public repository! However, I do want to see how it works and to learn the stack part, which depends on the swarm part. I followed their tutorial, but the app was only deployed on the local default manager node.
https://docs.docker.com/get-started/part4/#deploy-the-app-on-the-swarm-manager
docker-compose.yml
...
# replace username/repo:tag with your name and image details
image: friendlyhello
3 machines/nodes, with 1 manager node:
C:\Temp\docker-tutorial>docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
default   *        virtualbox   Running   tcp://192.168.99.100:2376           v18.03.1-ce
myvm1     -        virtualbox   Running   tcp://192.168.99.101:2376           v18.03.1-ce
myvm2     -        virtualbox   Running   tcp://192.168.99.102:2376           v18.03.1-ce
The app is deployed with 6 instances.
C:\Temp\docker-tutorial>docker service ls
ID             NAME                MODE         REPLICAS   IMAGE                  PORTS
uvsxf1q7brhb   getstartedlab_web   replicated   6/6        friendlyhello:latest   *:80->80/tcp
However, all the replicas landed on the default manager node and none on the other swarm nodes.
C:\Temp\docker-tutorial>docker service ps getstartedlab_web
ID             NAME                      IMAGE                  NODE      DESIRED STATE   CURRENT STATE                ERROR                             PORTS
6jh1ua0wjyzi   getstartedlab_web.1       friendlyhello:latest   default   Running         Running about an hour ago
to14hu7g3rhz    \_ getstartedlab_web.1   friendlyhello:latest   myvm1     Shutdown        Rejected about an hour ago   "No such image: friendlyhello:"
ek91tcdj61nv    \_ getstartedlab_web.1   friendlyhello:latest   myvm1     Shutdown        Rejected about an hour ago   "No such image: friendlyhello:"
jwdvuf89a640    \_ getstartedlab_web.1   friendlyhello:latest   myvm2     Shutdown        Rejected about an hour ago   "No such image: friendlyhello:"
xrp0rim67ipi   getstartedlab_web.2       friendlyhello:latest   default   Running         Running about an hour ago
tp008eoj2mpk   getstartedlab_web.3       friendlyhello:latest   default   Running         Running about an hour ago
w6wyk3nj53zv    \_ getstartedlab_web.3   friendlyhello:latest   myvm2     Shutdown        Rejected about an hour ago   "No such image: friendlyhello:"
7ts6aqianz7l    \_ getstartedlab_web.3   friendlyhello:latest   myvm1     Shutdown        Rejected about an hour ago   "No such image: friendlyhello:"
gjt1qks57rud    \_ getstartedlab_web.3   friendlyhello:latest   myvm1     Shutdown        Rejected about an hour ago   "No such image: friendlyhello:"
o05u4qwt12vq   getstartedlab_web.4       friendlyhello:latest   default   Running         Running about an hour ago
ifzmmy8ru443    \_ getstartedlab_web.4   friendlyhello:latest   myvm1     Shutdown        Rejected about an hour ago   "No such image: friendlyhello:"
jnxn8gs3bte3    \_ getstartedlab_web.4   friendlyhello:latest   myvm2     Shutdown        Rejected about an hour ago   "No such image: friendlyhello:"
xsooht9gpf01    \_ getstartedlab_web.4   friendlyhello:latest   myvm2     Shutdown        Rejected about an hour ago   "No such image: friendlyhello:"
v23mjl8n3yyd   getstartedlab_web.5       friendlyhello:latest   default   Running         Running about an hour ago
meocennltdph   getstartedlab_web.6       friendlyhello:latest   default   Running         Running about an hour ago
3t78bpswwuyw    \_ getstartedlab_web.6   friendlyhello:latest   myvm2     Shutdown        Rejected about an hour ago   "No such image: friendlyhello:"
y3ih3md932qo    \_ getstartedlab_web.6   friendlyhello:latest   myvm2     Shutdown        Rejected about an hour ago   "No such image: friendlyhello:"
sqsngkq1440a    \_ getstartedlab_web.6   friendlyhello:latest   myvm1     Shutdown        Rejected about an hour ago   "No such image: friendlyhello:"
Docker version 18.03.0-ce, build 0520e24302, Windows 8.1
I tried to follow
https://github.com/docker/docker-registry/blob/master/README.md#quick-start
https://docs.docker.com/registry/#basic-commands
https://blog.docker.com/2013/07/how-to-use-your-own-registry/
I set this line in docker-compose.yml
image: 192.168.99.100:5000/get-started:part2
But after I ran docker stack deploy, it still failed!
C:\Temp\docker-tutorial>docker stack deploy -c docker-compose.yml getstartedlab
Creating network getstartedlab_webnet
Creating service getstartedlab_web
C:\Temp\docker-tutorial>docker service ls
ID             NAME                MODE         REPLICAS   IMAGE                                    PORTS
jjr7cuqy2i54   getstartedlab_web   replicated   0/6        192.168.99.100:5000/get-started:part2   *:80->80/tcp
C:\Temp\docker-tutorial>docker service ps getstartedlab_web
ID             NAME                      IMAGE                                    NODE      DESIRED STATE   CURRENT STATE            ERROR                             PORTS
bsx3slkj8pbr   getstartedlab_web.1       192.168.99.100:5000/get-started:part2   myvm1     Ready           Rejected 3 seconds ago   "No such image: 192.168.99.100"
cusqg0p35cwp    \_ getstartedlab_web.1   192.168.99.100:5000/get-started:part2   default   Shutdown        Rejected 8 seconds ago   "No such image: 192.168.99.100"
...
The image is reachable as localhost:5000 but not as 192.168.99.100:5000:
C:\Temp\docker-tutorial>docker pull localhost:5000/get-started:part2
part2: Pulling from get-started
Digest: sha256:fedc2e7c01a45dab371cf4e01b7f8854482b33564c52d2c725f52f787f91dbcb
Status: Image is up to date for localhost:5000/get-started:part2
C:\Temp\docker-tutorial>docker pull 192.168.99.100:5000/get-started:part2
Error response from daemon: Get https://192.168.99.100:5000/v2/: http: server gave HTTP response to HTTPS client
localhost:5000 refuses to connect in the browser. I also tried localhost:5000/get-started:part2 as the image name, but that also failed.

You can host your own Docker container registry, or use a private container registry from one of the many cloud providers, with your own auth. A few options:
AWS ECR / Amazon Elastic Container Registry: https://aws.amazon.com/ecr/
Azure Container Registry: https://azure.microsoft.com/en-us/services/container-registry/
Codefresh private docker registries: https://codefresh.io/
Artifactory: https://www.jfrog.com/confluence/display/RTF/Docker+Registry
If you want complete control, you can alternatively host your own Docker Registry:
https://github.com/docker/docker-registry/blob/master/README.md
https://blog.docker.com/2013/07/how-to-use-your-own-registry/
Once you set up your registry, you can simply authenticate with docker login and then manage your images with docker push/pull as usual.
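For the swarm scenario in the question, a minimal self-hosted setup looks like the sketch below. It assumes the addresses and VM names from the question (192.168.99.100 is the machine running the registry, myvm1 is a worker created with docker-machine); adjust them for your environment.

# Start a plain-HTTP registry (registry:2 is the official image):
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Tag and push under an address every node can reach; localhost won't
# work in a swarm because each daemon resolves localhost to itself:
docker tag friendlyhello 192.168.99.100:5000/get-started:part2
docker push 192.168.99.100:5000/get-started:part2

# Each engine must be told to trust a registry served over plain HTTP.
# For docker-machine VMs that can be done at creation time:
docker-machine create -d virtualbox --engine-insecure-registry 192.168.99.100:5000 myvm1

# On other engines, add {"insecure-registries": ["192.168.99.100:5000"]}
# to daemon.json and restart the Docker daemon.

This is what the "http: server gave HTTP response to HTTPS client" and "No such image" errors in the question point to: the daemons refuse the untrusted HTTP registry, so the workers can never pull the image.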

Related

Kubernetes windows worker node addition: "failed to create containerd task: hcsshim::CreateComputeSystem kube-proxy: The directory name is invalid"

I am using Kubernetes (v1.23.13) with containerd and the Flannel CNI. The Kubernetes cluster was created on an Ubuntu 18 VM (VMware ESXi), and the Windows server runs on another VM. I followed the link below to add the Windows (Windows Server 2019) node to the cluster. The Windows node joined the cluster, but the Windows kube-proxy and DaemonSet pod deployments have failed.
Link https://web.archive.org/web/20220530090758/https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/
Error: Normal Created (x5 over ) kubelet Created container kube-proxy
Normal Pulled (x5 over ) kubelet Container image "sigwindowstools/kube-proxy:v1.23.13-nanoserver" already present on machine
Warning Failed kubelet Error: failed to create containerd task: hcsshim::CreateComputeSystem kube-proxy: The directory name is invalid.
(extra info: {"Owner":"containerd-shim-runhcs-v1.exe","SchemaVersion":{"Major":2,"Minor":1},"Container":{"GuestOs":{"HostName":"kube-proxy-windows-hq7bb"},"Storage":{"Layers":[{"Id":"e30f10e1-6696-5df6-af3f-156a372bce4e","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\19"},{"Id":"8aa59a8b-78d3-5efe-a3d9-660bd52fd6ce","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\18"},{"Id":"f222f973-9869-5b65-a546-cb8ae78a32b9","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\17"},{"Id":"133385ae-6df6-509b-b342-bc46338b3df4","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\16"},{"Id":"f6f9524c-e3f0-5be2-978d-7e09e0b21299","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\15"},{"Id":"0d9d58e6-47b6-5091-a552-7cc2027ca06f","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\14"},{"Id":"6715ca06-295b-5fba-9224-795ca5af71b9","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\13"},{"Id":"75e64a3b-69a5-52cf-b39f-ee05718eb1e2","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\12"},{"Id":"8698c4b4-b092-57c6-b1eb-0a7ca14fcf4e","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\11"},{"Id":"7c9a6fb7-2ca8-5ef7-bbfe-cabbff23cfa4","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\10"},{"Id":"a10d4ad8-f2b1-5fd6-993f-7aa642762865","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\9"}],"Path":"\\?\Volume{64336318-a64f-436e-869c-55f9f8e4ea62}\"},"MappedDirectories":[{"HostPath":"c:\","ContainerPath":"c:\host"},{"HostPath":"c:\var\lib\kubelet\pods\1cd0c333-3cd0-4c90-9d22-884ea73e8b69\containers\kube-proxy\0e58a001","ContainerPath":"c:\dev\termination-log"},{"HostPath":"c:\var\lib\kubelet\pods\1cd0c333-3cd0-4c90-9d22-884ea73e8b69\volumes\kubernetes.io~configmap\kube-proxy","ContainerPath":"c:\var\lib\kube-proxy","ReadOnly":true},{"HostPath":"c:\var\lib\kubelet\pods\1cd0c333-3cd0-4c90-9d22-884ea73e8b69\volumes\kubernetes.io~configmap\kube-proxy-windows","ContainerPath":"c:\var\lib\kube-proxy-windows","ReadOnly":true},{"HostPath":"c:\var\lib\kubelet\pods\1cd0c333-3cd0-4c90-9d22-884ea73e8b69\volumes\kubernetes.io~projected\kube-api-access-4zs46","ContainerPath":"c:\var\run\secrets\kubernetes.io\serviceaccount","ReadOnly":true},{"HostPath":"c:\var\lib\kubelet\pods\1cd0c333-3cd0-4c90-9d22-884ea73e8b69\etc-hosts","ContainerPath":"C:\Windows\System32\drivers\etc\hosts"}],"MappedPipes":[{"ContainerPipeName":"rancher_wins","HostPath":"\\.\pipe\rancher_wins"}],"Networking":{"Namespace":"4a4d0354-251a-4750-8251-51ae42707db2"}},"ShouldTerminateOnLastHandleClosed":true}): unknown
Warning BackOff (x23 over ) kubelet Back-off restarting failed container
kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS             RESTARTS             AGE
kube-system   coredns-64897985d-2mkd5                   1/1     Running            0                    19h
kube-system   coredns-64897985d-qhhbz                   1/1     Running            0                    19h
kube-system   etcd-scspa2658542001                      1/1     Running            2                    19h
kube-system   kube-apiserver-scspa2658542001            1/1     Running            8 (3h4m ago)         19h
kube-system   kube-controller-manager-scspa2658542001   1/1     Running            54 (126m ago)        19h
kube-system   kube-flannel-ds-hjw8s                     1/1     Running            14 (18h ago)         19h
kube-system   kube-flannel-ds-windows-amd64-xfhjl       0/1     ImagePullBackOff   0                    29m
kube-system   kube-proxy-windows-hq7bb                  0/1     CrashLoopBackOff   10 (<invalid> ago)   29m
kube-system   kube-proxy-wx2x9                          1/1     Running            0                    19h
kube-system   kube-scheduler-scspa2658542001            1/1     Running            92 (153m ago)        19h
From this issue, it seems Windows nodes with Flannel have known problems that have been solved with different workarounds.
As mentioned in the issue, they have made a guide for getting Windows nodes to work properly. Follow this doc for the installation guide and requirements.
Attaching a troubleshooting blog and issue for the CrashLoopBackOff.
I had a similar error, failed to create containerd task: hcsshim::CreateComputeSystem, with Flannel on k8s v1.24. The cause was that Windows OS patches had not been applied. You must have the patch related to KB4489899 applied.
https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/guides/guide-for-adding-windows-node.md#before-you-begin
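If you want to verify the patch level before joining the node, a quick check on the Windows worker itself (standard PowerShell):

# Lists the update if it is installed; throws an error if it is not.
Get-HotFix -Id KB4489899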

Docker service container, error: Service 'w3svc' has been stopped

I'm trying to create multiple containers using Docker Swarm on Windows Server 2016. My service has been created and replicated, but it's not stable: it keeps running into an error.
PS C:\Users\tmman\Desktop\stackdeploy> docker service ls
ID             NAME       MODE         REPLICAS   IMAGE                                           PORTS
xw6kqqu7o4ad   demo_db    replicated   1/1        microsoft/mssql-server-windows-express:latest
kkrpxwiytax9   demo_web   replicated   1/1        microsoft/iis:latest                            *:80->80/tcp
PS C:\Users\tmman\Desktop\stackdeploy> docker service ps demo_web demo_db
ID             NAME            IMAGE                                           NODE        DESIRED STATE   CURRENT STATE            ERROR                              PORTS
1s4ybqny71sd   demo_web.1      microsoft/iis:latest                            DELEI4127   Running         Starting 2 seconds ago
uohf736ux1ne    \_ demo_web.1   microsoft/iis:latest                           DELEI4127   Shutdown        Failed 7 seconds ago     "task: non-zero exit (21479438…"
owpguwtbpdxc    \_ demo_web.1   microsoft/iis:latest                           DELEI4127   Shutdown        Failed 16 seconds ago    "starting container failed: co…"
dg54mihkflbx    \_ demo_web.1   microsoft/iis:latest                           DELEI4127   Shutdown        Failed 25 seconds ago    "task: non-zero exit (21475000…"
7enznbiqjfp5    \_ demo_web.1   microsoft/iis:latest                           DELEI4127   Shutdown        Failed 37 seconds ago    "starting container failed: co…"
o541pdex9s3p   demo_db.1       microsoft/mssql-server-windows-express:latest   DELEI4127   Running         Running 11 minutes ago
PS C:\Users\tmman\Desktop\stackdeploy> docker ps -a
CONTAINER ID   IMAGE                                           COMMAND                   CREATED          STATUS                               PORTS    NAMES
017afe0a6211   microsoft/iis:latest                            "C:\\ServiceMonitor.e…"   18 seconds ago   Up 10 seconds                        80/tcp   demo_web.1.1s4ybqny71sds3i9h47i59g4s
5889ac4ef8d2   microsoft/iis:latest                            "C:\\ServiceMonitor.e…"   27 seconds ago   Exited (2147943855) 19 seconds ago            demo_web.1.uohf736ux1neaz5u0p1d73jx4
dce80549e789   microsoft/iis:latest                            "C:\\ServiceMonitor.e…"   37 seconds ago   Created                              80/tcp   demo_web.1.owpguwtbpdxc50em7a3rc8m05
92721722311c   microsoft/iis:latest                            "C:\\ServiceMonitor.e…"   48 seconds ago   Exited (2147500037) 38 seconds ago            demo_web.1.dg54mihkflbxt2j0wd6tv8qlt
166d29256771   microsoft/iis:latest                            "C:\\ServiceMonitor.e…"   11 minutes ago   Created                              80/tcp   demo_web.1.7enznbiqjfp5hdvjbgvnj1q1v
fbc3deb1930e   microsoft/mssql-server-windows-express:latest   "powershell -Command…"    11 minutes ago   Up 11 minutes                                 demo_db.1.o541pdex9s3pmaaty8xzkezpx
Error:
PS C:\Users\tmman\Desktop\stackdeploy> docker service logs demo_web
demo_web.1.1vwqd9xgvfd3#DELEI4127 |
demo_web.1.1vwqd9xgvfd3#DELEI4127 | Service 'w3svc' has been stopped
demo_web.1.1vwqd9xgvfd3#DELEI4127 |
demo_web.1.8vs509jzpwb9#DELEI4127 |
demo_web.1.8vs509jzpwb9#DELEI4127 | Service 'w3svc' has been stopped
demo_web.1.8vs509jzpwb9#DELEI4127 |
demo_web.1.8vs509jzpwb9#DELEI4127 | Failed to update IIS configuration
demo_web.1.1vwqd9xgvfd3#DELEI4127 | Failed to update IIS configuration
demo_web.1.z7yhqf1wqqiu#DELEI4127 |
demo_web.1.z7yhqf1wqqiu#DELEI4127 | Service 'w3svc' has been stopped
demo_web.1.z7yhqf1wqqiu#DELEI4127 |
demo_web.1.pt4du3jr20nj#DELEI4127 |
demo_web.1.pt4du3jr20nj#DELEI4127 | Service 'w3svc' has been stopped
demo_web.1.pt4du3jr20nj#DELEI4127 |
demo_web.1.z7yhqf1wqqiu#DELEI4127 | Failed to update IIS configuration
demo_web.1.pt4du3jr20nj#DELEI4127 | Failed to update IIS configuration
PS C:\Users\tmman\Desktop\stackdeploy> docker info
Containers: 6
Running: 2
Paused: 0
Stopped: 4
Images: 3
Server Version: 18.09.2
Storage Driver: windowsfilter
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: ics l2bridge l2tunnel nat null overlay transparent
Log: awslogs etwlogs fluentd gelf json-file local logentries splunk syslog
Swarm: active
NodeID: 1oef3sisa3el3q46tz7aj78eu
Is Manager: true
ClusterID: k4l6x9b42bg7bfnf630g3g2h1
Managers: 1
Nodes: 1
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 10.49.2.50
Manager Addresses:
10.49.2.50:2377
Default Isolation: process
Kernel Version: 10.0 14393 (14393.2848.amd64fre.rs1_release.190305-1856)
Operating System: Windows Server 2016 Standard Version 1607 (OS Build 14393.2848)
OSType: windows
Architecture: x86_64
CPUs: 8
Total Memory: 7.999GiB
Name: DELEI4127
ID: 5BCA:25XO:U3TH:JYXD:BULV:MWXO:UWYF:APIZ:AVCF:R2QO:6KFL:A5SE
Docker Root Dir: C:\ProgramData\docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
My Dockerfile:
FROM microsoft/iis:nanoserver
WORKDIR /inetpub/wwwroot
RUN powershell -Command `
    Add-WindowsFeature Web-Server; `
    Invoke-WebRequest -UseBasicParsing -Uri "https://dotnetbinaries.blob.core.windows.net/servicemonitor/2.0.1.6/ServiceMonitor.exe" -OutFile "C:\ServiceMonitor.exe"
EXPOSE 80
ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]
COPY index.html /index.html
My YAML file (docker-compose.yml):
version: "3"
services:
  db:
    image: microsoft/mssql-server-windows-express
    networks:
      - cpxnet
    deploy:
    environment:
      - SA_PASSWORD=Abcd1234
      - ACCEPT_EULA=Y
  web:
    image: microsoft/iis:latest
    networks:
      - cpxnet
    deploy:
      resources:
        limits:
          memory: 50M
    ports:
      - "80:80"
    depends_on:
      - db
networks:
  cpxnet:
I got some suggestions from this page: https://github.com/Microsoft/aspnet-docker/issues/64 but they didn't help with my error.
Thank you in advance for your help!
Note: I'm a beginner.
I faced a similar issue and had an almost identical Dockerfile.
To troubleshoot, I changed the entrypoint to a continuous ping to localhost, so that I could log in to the container and investigate the issue with w3svc.
Strangely, the event viewer (which needs to be read via PowerShell) showed multiple events saying w3svc had started successfully, and no stop events. Still, I concluded that something was wrong with my service.
Checking further, I found that the app pool was in the "OnDemand" state instead of "always running". I set the app pool property to always running in my Dockerfile, then switched the entrypoint back to the original "ServiceMonitor.exe w3svc", and that made my container stable.
Note: I am not using the default IIS image, where ServiceMonitor is already set as the entrypoint.
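A minimal sketch of the change described above, assuming the default app pool name DefaultAppPool and a Windows IIS base image (both are illustrative, not taken from the original answer):

FROM microsoft/iis:latest
# Start the app pool eagerly instead of on the first request, so w3svc
# keeps a running worker process for ServiceMonitor to watch.
RUN powershell -Command `
    Import-Module WebAdministration; `
    Set-ItemProperty 'IIS:\AppPools\DefaultAppPool' -Name autoStart -Value $true; `
    Set-ItemProperty 'IIS:\AppPools\DefaultAppPool' -Name startMode -Value 'AlwaysRunning'
ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]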

minikube error trying to reach 172.17.0.4:8080 on osx

I'm doing the Kubernetes tutorial locally with minikube on OS X. In step 3 of https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/ I get the error
% curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/
Error: 'dial tcp 172.17.0.4:8080: getsockopt: connection refused'
Trying to reach: 'http://172.17.0.4:8080/'%
Any idea why this doesn't work locally? The simpler request does work:
% curl http://localhost:8001/version
{
"major": "1",
"minor": "10",
"gitVersion": "v1.10.0",
"gitCommit": "fc32d2f3698e36b93322a3465f63a14e9f0eaead",
"gitTreeState": "clean",
"buildDate": "2018-03-26T16:44:10Z",
"goVersion": "go1.9.3",
"compiler": "gc",
"platform": "linux/amd64"
info
$ kubectl get pods
NAME                                   READY   STATUS             RESTARTS   AGE
kubernetes-bootcamp-74f58d6b87-ntn5r   0/1     ImagePullBackOff   0          21h
logs
$ kubectl logs $POD_NAME
Error from server (BadRequest): container "kubernetes-bootcamp" in pod "kubernetes-bootcamp-74f58d6b87-w4zh8" is waiting to start: trying and failing to pull image
So the run command starts the deployment, but the pod crashes? Why?
$ kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080
I can pull the image without a problem
$ docker pull gcr.io/google-samples/kubernetes-bootcamp:v1
v1: Pulling from google-samples/kubernetes-bootcamp
5c90d4a2d1a8: Pull complete
ab30c63719b1: Pull complete
29d0bc1e8c52: Pull complete
d4fe0dc68927: Pull complete
dfa9e924f957: Pull complete
Digest: sha256:0d6b8ee63bb57c5f5b6156f446b3bc3b3c143d233037f3a2f00e279c8fcc64af
Status: Downloaded newer image for gcr.io/google-samples/kubernetes-bootcamp:v1
describe
$ kubectl describe pods
Name: kubernetes-bootcamp-74f58d6b87-w4zh8
Namespace: default
Node: minikube/10.0.2.15
Start Time: Tue, 24 Jul 2018 15:05:00 -0400
Labels: pod-template-hash=3091482643
run=kubernetes-bootcamp
Annotations: <none>
Status: Pending
IP: 172.17.0.3
Controlled By: ReplicaSet/kubernetes-bootcamp-74f58d6b87
Containers:
kubernetes-bootcamp:
Container ID:
Image: gci.io/google-samples/kubernetes-bootcamp:v1
Image ID:
Port: 8080/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wp28q (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-wp28q:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wp28q
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type      Reason    Age                  From                Message
----      ------    ----                 ----                -------
Normal    BackOff   23m (x281 over 1h)   kubelet, minikube   Back-off pulling image "gci.io/google-samples/kubernetes-bootcamp:v1"
Warning   Failed    4m (x366 over 1h)    kubelet, minikube   Error: ImagePullBackOff
Minikube is a tool that makes it easy to run Kubernetes locally.
Minikube runs a single-node Kubernetes
cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
Back to your issue: have you checked whether you provided enough resources to run the Minikube environment?
You may try to start minikube and force it to allocate more memory:
minikube start --memory 4096
For further analysis, please provide information about the resources dedicated to this installation and the type of hypervisor you use.
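If more memory fixes it, one way to persist the setting instead of passing the flag every time (a sketch; minikube config is a standard subcommand, though exact behavior varies by minikube version):

minikube delete                  # recreate the VM so the new size applies
minikube config set memory 4096  # remembered by future minikube start runs
minikube start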
Sounds like a networking issue. Your VM is unable to pull the images from gcr.io:443.
Here's what your kubectl describe pods kubernetes-bootcamp-xxx should look like:
Type     Reason                  Age   From                Message
----     ------                  ----  ----                -------
Normal   Scheduled               5m    default-scheduler   Successfully assigned kubernetes-bootcamp-5c69669756-xbbmn to minikube
Normal   SuccessfulMountVolume   5m    kubelet, minikube   MountVolume.SetUp succeeded for volume "default-token-cfq65"
Normal   Pulling                 5m    kubelet, minikube   pulling image "gcr.io/google-samples/kubernetes-bootcamp:v1"
Normal   Pulled                  5m    kubelet, minikube   Successfully pulled image "gcr.io/google-samples/kubernetes-bootcamp:v1"
Normal   Created                 5m    kubelet, minikube   Created container
Normal   Started                 5m    kubelet, minikube   Started container
Normal   SuccessfulMountVolume   1m    kubelet, minikube   MountVolume.SetUp succeeded for volume "default-token-cfq65"
Normal   SandboxChanged          1m    kubelet, minikube   Pod sandbox changed, it will be killed and re-created.
Normal   Pulled                  1m    kubelet, minikube   Container image "gcr.io/google-samples/kubernetes-bootcamp:v1" already present on machine
Normal   Created                 1m    kubelet, minikube   Created container
Normal   Started                 1m    kubelet, minikube   Started container
Try this from your host, to narrow down if it's a networking issue with your VM or your host machine:
$ docker pull gcr.io/google-samples/kubernetes-bootcamp:v1
v1: Pulling from google-samples/kubernetes-bootcamp
5c90d4a2d1a8: Pull complete
ab30c63719b1: Pull complete
29d0bc1e8c52: Pull complete
d4fe0dc68927: Pull complete
dfa9e924f957: Pull complete
Digest: sha256:0d6b8ee63bb57c5f5b6156f446b3bc3b3c143d233037f3a2f00e279c8fcc64af
Status: Downloaded newer image for gcr.io/google-samples/kubernetes-bootcamp:v1
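If the pull works on the host, try the same pull from inside the minikube VM, since that is where kubelet actually pulls images. A sketch, assuming a VM-based minikube driver:

minikube ssh
# now inside the VM, use its Docker daemon:
docker pull gcr.io/google-samples/kubernetes-bootcamp:v1

If it fails there but works on the host, the VM's networking (DNS, proxy, NAT) is the thing to fix.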

How do you use a private registry with Docker?

I followed the tutorial
https://docs.docker.com/get-started/part4/#deploy-the-app-on-the-swarm-manager
And created my own registry using
https://github.com/docker/docker-registry/blob/master/README.md#quick-start
https://docs.docker.com/registry/#basic-commands
https://blog.docker.com/2013/07/how-to-use-your-own-registry/
However it fails to deploy on the worker nodes with the error "No such image: 192.168.99.100". What is wrong?
docker run -d -p 5000:5000 --name registry registry:2
docker tag friendlyhello 192.168.99.100:5000/get-started:part2
docker push 192.168.99.100:5000/get-started # Get https://192.168.99.100:5000/v2/: http: server gave HTTP response to HTTPS client
docker tag friendlyhello localhost:5000/get-started:part2
docker push localhost:5000/get-started:part2
docker stack deploy -c docker-compose.yml getstartedlab
docker service ps getstartedlab_web
ID             NAME                      IMAGE                                    NODE      DESIRED STATE   CURRENT STATE             ERROR                              PORTS
o4nbsqccqlm4   getstartedlab_web.1       192.168.99.100:5000/get-started:part2   default   Running         Running 17 minutes ago
qcjtq3gqag9j    \_ getstartedlab_web.1   192.168.99.100:5000/get-started:part2   myvm1     Shutdown        Rejected 17 minutes ago   "No such image: 192.168.99.100…"
This is my docker-compose.yml file:
...
image: 192.168.99.100:5000/get-started:part2
...
I tried to use image: localhost:5000/get-started:part2 in the docker-compose.yml file also, but it gave the error No such image: localhost:5000.
docker stack rm getstartedlab
docker stack deploy -c docker-compose.yml getstartedlab
docker service ps getstartedlab_web
ID             NAME                      IMAGE                              NODE      DESIRED STATE   CURRENT STATE             ERROR                             PORTS
k2cck1p7wpg1   getstartedlab_web.1       localhost:5000/get-started:part2   default   Running         Running 10 seconds ago
69km7zabgw6l    \_ getstartedlab_web.1   localhost:5000/get-started:part2   myvm1     Shutdown        Rejected 21 seconds ago   "No such image: localhost:5000…"
Windows 8.1, Docker version 18.03.0-ce, build 0520e24302
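For what it's worth, the two errors are consistent with how the engine resolves registry addresses: localhost points each daemon at itself, so the workers look in their own (empty) registries, and 192.168.99.100:5000 is rejected unless every engine trusts it as an insecure (plain-HTTP) registry. A hedged check from a worker, using the VM names from the question:

# Should reproduce the same error until the worker's engine is started
# with --engine-insecure-registry 192.168.99.100:5000 (or daemon.json):
docker-machine ssh myvm1 "docker pull 192.168.99.100:5000/get-started:part2"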

creation of container using docker-compose deletes another container that is already running

I am trying to start 2 separate containers using the docker-compose command based on 2 different images.
One image (work) is built from code being worked on in "development". A second image (cons) is built from code that is currently at the "consolidation" level.
When starting the first container, all seems to go OK.
Details of the above image are here:
WORK DIRECTORY: ~/apps/django.work/extraction/docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    container_name: postgres-work
  web:
    build: .
    image: apostx-cc-backoffice-work
    container_name: cc-backoffice-work
    command: python3 backendworkproj/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "7350:8000"
    depends_on:
      - db
EXECUTION:~/apps/django.work/extraction$ docker-compose up --no-deps -d web
Creating network "extraction_default" with the default driver
Creating cc-backoffice-work ...
Creating cc-backoffice-work ... done
EXECUTION:~/apps/django.work/extraction$ docker container ls
CONTAINER ID   IMAGE                       COMMAND                  CREATED          STATUS         PORTS                    NAMES
39185f36941a   apostx-cc-backoffice-work   "python3 backendwo..."   8 seconds ago    Up 7 seconds   0.0.0.0:7350->8000/tcp   cc-backoffice-work
dede5cb1966a   jarkt/docker-remote-api     "/bin/sh -c 'socat..."   2 days ago       Up 2 days      0.0.0.0:3080->2375/tcp   dock_user_display_remote
But, when I work with the second directory to compile and start a different image, some strange things start to happen:
Again, more details are below:
CONS DIRECTORY: ~/apps/django.cons/extraction/docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    container_name: postgres-cons
  web:
    build: .
    image: apostx-cc-backoffice-cons
    container_name: cc-backoffice-cons
    command: python3 backendworkproj/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "7450:8000"
    depends_on:
      - db
EXECUTION:~/apps/django.cons/extraction$ docker-compose up --no-deps -d web
Recreating cc-backoffice-work ...
Recreating cc-backoffice-work
Recreating cc-backoffice-work ... done
EXECUTION:~/apps/django.cons/extraction$ docker container ls
CONTAINER ID   IMAGE                       COMMAND                  CREATED         STATUS         PORTS                    NAMES
f942f84e567a   apostx-cc-backoffice-cons   "python3 backendwo..."   7 seconds ago   Up 6 seconds   0.0.0.0:7450->8000/tcp   cc-backoffice-cons
dede5cb1966a   jarkt/docker-remote-api     "/bin/sh -c 'socat..."   2 days ago      Up 2 days      0.0.0.0:3080->2375/tcp   dock_user_display_remote
Question
Why is the first container being supplanted when I start the second one? If it is due to some kind of caching issue, how can one re-initialize/clean/clear out the cache before running docker-compose for a second time? Am I missing something here?
TIA
Update - I did the following:
got rid of old containers by using "docker container rm -f "
started the "work" (i.e. development) container
execute:~/apps/django.work.ccbo.thecontractors.club/extraction$ docker-compose --verbose up --no-deps -d web >& the_results_are_here
execute:~/apps/django.work.ccbo.thecontractors.club/extraction$ docker container ls
CONTAINER ID   IMAGE                       COMMAND                  CREATED         STATUS        PORTS                    NAMES
61d2e9ccbc28   apostx-cc-backoffice-work   "python3 backendwo..."   4 seconds ago   Up 4 seconds  0.0.0.0:7350->8000/tcp   work-cc-backoffice
dede5cb1966a   jarkt/docker-remote-api     "/bin/sh -c 'socat..."   3 days ago      Up 3 days     0.0.0.0:3080->2375/tcp   dock_user_display_remote
9b4b8b462fcb   wmaker-test-officework      "catalina.sh run"        11 days ago     Up 11 days    0.0.0.0:7700->8080/tcp   testBackOfficeWork.2017.10.30.04.20.01
ad5fd0592a07   wmaker-locl-officework      "catalina.sh run"        11 days ago     Up 11 days    0.0.0.0:7500->8080/tcp   loclBackOfficeWork.2017.10.30.04.20.01
7bc9d7f94828   wmaker-cons-officework      "catalina.sh run"        11 days ago     Up 11 days    0.0.0.0:7600->8080/tcp   consBackOfficeWork.2017.10.30.04.20.01
Seeing that it looked OK, I started the container for "cons" (consolidation):
execute:~/apps/django.cons.ccbo.thecontractors.club/extraction$ docker-compose --verbose up --no-deps -d web >& the_results_are_here
execute:~/apps/django.cons.ccbo.thecontractors.club/extraction$ docker container ls
CONTAINER ID   IMAGE                       COMMAND                  CREATED         STATUS        PORTS                    NAMES
0fb24fc45877   apostx-cc-backoffice-cons   "python backendwor..."   5 seconds ago   Up 4 seconds  0.0.0.0:7450->8010/tcp   cons-cc-backoffices
dede5cb1966a   jarkt/docker-remote-api     "/bin/sh -c 'socat..."   3 days ago      Up 3 days     0.0.0.0:3080->2375/tcp   dock_user_display_remote
9b4b8b462fcb   wmaker-test-officework      "catalina.sh run"        11 days ago     Up 11 days    0.0.0.0:7700->8080/tcp   testBackOfficeWork.2017.10.30.04.20.01
ad5fd0592a07   wmaker-locl-officework      "catalina.sh run"        11 days ago     Up 11 days    0.0.0.0:7500->8080/tcp   loclBackOfficeWork.2017.10.30.04.20.01
7bc9d7f94828   wmaker-cons-officework      "catalina.sh run"        11 days ago     Up 11 days    0.0.0.0:7600->8080/tcp   consBackOfficeWork.2017.10.30.04.20.01
Again, the name work-cc-backoffice has been supplanted by cons-cc-backoffices; work-cc-backoffice is totally gone now.
Looked at the file the_results_are_here (from the second run) to see if anything could be found:
[... snip ...]
compose.cli.command.get_client: docker-compose version 1.17.1, build 6d101fb
docker-py version: 2.5.1
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.1t 3 May 2016
compose.cli.command.get_client: Docker base_url: http+docker://localunixsocket
compose.cli.command.get_client: Docker version: KernelVersion=4.4.0-72-generic, Arch=amd64, BuildTime=2017-09-26T22:40:56.000000000+00:00, ApiVersion=1.32, Version=17.09.0-ce, MinAPIVersion=1.12, GitCommit=afdb6d4, Os=linux, GoVersion=go1.8.3
compose.cli.verbose_proxy.proxy_callable: docker info <- ()
compose.cli.verbose_proxy.proxy_callable: docker info -> {u'Architecture': u'x86_64',
[... snip ...]
compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- (u'extraction_default')
compose.cli.verbose_proxy.proxy_callable: docker inspect_network -> {u'Attachable': True,
u'ConfigFrom': {u'Network': u''},
u'ConfigOnly': False,
u'Containers': {u'61d2e9ccbc28bb2aba918dc24b5f19a3f68a06b9502ec1b98e83dd947d75d1be': {u'EndpointID': u'e19696ccf258a6cdcfcce41d91d5b3ebcb5fffbce4257e3480ced48a3d7dcc5c',
u'IPv4Address': u'172.20.0.2/16',
u'IPv6Address': u'',
u'MacAddress': u'02:42:ac:14:00:02',
u'Name': u'work-cc-backoffice'}},
u'Created': u'2017-11-10T09:56:22.709914332Z',
u'Driver': u'bridge',
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={u'label': [u'com.docker.compose.project=extraction', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 1 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'61d2e9ccbc28bb2aba918dc24b5f19a3f68a06b9502ec1b98e83dd947d75d1be')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {u'AppArmorProfile': u'docker-default',
u'Args': [u'backendworkproj/manage.py', u'runserver', u'0.0.0.0:8000'],
u'Config': {u'AttachStderr': False,
u'AttachStdin': False,
u'AttachStdout': False,
u'Cmd': [u'python3',
u'backendworkproj/manage.py',
u'runserver',
u'0.0.0.0:8000'],
u'Domainname': u'',
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=extraction', u'com.docker.compose.service=web', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 1 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'61d2e9ccbc28bb2aba918dc24b5f19a3f68a06b9502ec1b98e83dd947d75d1be')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {u'AppArmorProfile': u'docker-default',
To me, it looks like the program is trying to do some initialization by looking for a container that is already up and running(?). How can one change this behavior?
The answer from @mikeyjk resolved the issue:
No worries. I wonder whether, if you give each service a unique name and re-run docker-compose build, the issue still occurs. I'll try to replicate it today if no one can work it out.
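For the record, the verbose log above hints at the cause: both directories are named extraction, and Compose derives its default project name from the directory basename (note the label com.docker.compose.project=extraction), so the second up sees the first container as a stale copy of the same service and recreates it. A sketch of one way to keep the two stacks apart with the standard -p/--project-name flag (project names here are illustrative):

# In ~/apps/django.work/extraction:
docker-compose -p work up --no-deps -d web

# In ~/apps/django.cons/extraction:
docker-compose -p cons up --no-deps -d web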
