I used docker commit to save my work on a Jupyter notebook running in Docker, but then my computer crashed. When I try to run the Docker container, I can't open the notebook as it was at the time of the latest commit.
Here is what the relevant bash commands yield:
docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS                      PORTS      NAMES
7f1b4d6a811f   iess:latest    "/bin/su -l pycbc -c…"   5 minutes ago    Exited (1) 5 minutes ago               iess
0dcd955ad0b6   4028090df24a   "/bin/su -l pycbc -c…"   14 minutes ago   Exited (1) 11 minutes ago              vibrant_minsky
d7d76573d511   4028090df24a   "/bin/su -l pycbc -c…"   2 days ago       Up 32 minutes               8888/tcp   relaxed_cartwright
docker images
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
iess         latest   6e07932643eb   9 hours ago    4.6GB
<none>       <none>   dd24d1257a5c   10 hours ago   4.6GB
<none>       <none>   0bdaeb277ab9   10 hours ago   4.6GB
<none>       <none>   911e848f8167   12 hours ago   4.6GB
<none>       <none>   de16c7fce855   20 hours ago   4.6GB
<none>       <none>   147ed70ecf70   21 hours ago   4.6GB
<none>       <none>   792f3f87b8ee   21 hours ago   4.6GB
<none>       <none>   79cbcc4abc27   21 hours ago   4.6GB
<none>       <none>   9abe343a42b1   21 hours ago   4.6GB
<none>       <none>   aea2324b9902   44 hours ago   4.6GB
<none>       <none>   760e78217518   2 days ago     4.6GB
I am not a Docker expert (a very, very, very new user), but I wanted to start the last container on the list (d7d76573d511) with the image at the top of the list (iess:latest, created 9 hours ago).
If you want to use the new image (iess:latest, 6e07932643eb), you need to stop the container d7d76573d511 first: they are both trying to use the same port, which is causing the new one to crash. To stop the container, use docker stop <CONTAINER ID>.
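For example, assuming the notebook server inside the image listens on 8888 as the PORTS column above suggests (the container name here is arbitrary):
docker stop d7d76573d511
docker run -d --name iess-latest -p 8888:8888 iess:latest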
I am using Kubernetes (v1.23.13) with the containerd runtime and the Flannel CNI. The Kubernetes cluster was created on an Ubuntu 18 VM (VMware ESXi), with a Windows server running on another VM. I followed the link below to add the Windows node (Windows Server 2019) to the cluster. The Windows node was added to the cluster, but the Windows kube-proxy daemonset pod deployment has failed.
Link: https://web.archive.org/web/20220530090758/https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/
Error: Normal Created (x5 over ) kubelet Created container kube-proxy
Normal Pulled (x5 over ) kubelet Container image "sigwindowstools/kube-proxy:v1.23.13-nanoserver" already present on machine
Warning Failed kubelet Error: failed to create containerd task: hcsshim::CreateComputeSystem kube-proxy: The directory name is invalid.
(extra info: {"Owner":"containerd-shim-runhcs-v1.exe","SchemaVersion":{"Major":2,"Minor":1},"Container":{"GuestOs":{"HostName":"kube-proxy-windows-hq7bb"},"Storage":{"Layers":[{"Id":"e30f10e1-6696-5df6-af3f-156a372bce4e","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\19"},{"Id":"8aa59a8b-78d3-5efe-a3d9-660bd52fd6ce","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\18"},{"Id":"f222f973-9869-5b65-a546-cb8ae78a32b9","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\17"},{"Id":"133385ae-6df6-509b-b342-bc46338b3df4","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\16"},{"Id":"f6f9524c-e3f0-5be2-978d-7e09e0b21299","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\15"},{"Id":"0d9d58e6-47b6-5091-a552-7cc2027ca06f","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\14"},{"Id":"6715ca06-295b-5fba-9224-795ca5af71b9","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\13"},{"Id":"75e64a3b-69a5-52cf-b39f-ee05718eb1e2","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\12"},{"Id":"8698c4b4-b092-57c6-b1eb-0a7ca14fcf4e","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\11"},{"Id":"7c9a6fb7-2ca8-5ef7-bbfe-cabbff23cfa4","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\10"},{"Id":"a10d4ad8-f2b1-5fd6-993f-7aa642762865","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\9"}],"Path":"\\?\Volume{64336318-a64f-436e-869c-55f9f8e4ea62}\"},"MappedDirectories":[{"HostPath":"c:\","ContainerPath":"c:\host"},{"HostPath":"c:\var\lib\kubelet\pods\1cd0c333-3cd0-4c90-9d22-884ea73e8b69\containers\kube-proxy\0e58a001","ContainerPath":"c:\dev\termination-log"},{"HostPath":"c:\var\lib\kubelet\pods\1cd0c333-3cd0-4c90-9d22-884ea73e8b69\volumes\kubernetes.io~configmap\kube-proxy","ContainerPath":"c:\var\lib\kube-proxy","ReadOnly":true},{"HostPath":"c:\var\lib\kubelet\pods\1cd0c333-3cd0-4c90-9d22-884ea73e8b69\volumes\kubernetes.io~configmap\kube-proxy-windows","ContainerPath":"c:\var\lib\kube-proxy-windows","ReadOnly":true},{"HostPath":"c:\var\lib\kubelet\pods\1cd0c333-3cd0-4c90-9d22-884ea73e8b69\volumes\kubernetes.io~projected\kube-api-access-4zs46","ContainerPath":"c:\var\run\secrets\kubernetes.io\serviceaccount","ReadOnly":true},{"HostPath":"c:\var\lib\kubelet\pods\1cd0c333-3cd0-4c90-9d22-884ea73e8b69\etc-hosts","ContainerPath":"C:\Windows\System32\drivers\etc\hosts"}],"MappedPipes":[{"ContainerPipeName":"rancher_wins","HostPath":"\\.\pipe\rancher_wins"}],"Networking":{"Namespace":"4a4d0354-251a-4750-8251-51ae42707db2"}},"ShouldTerminateOnLastHandleClosed":true}): unknown
Warning BackOff (x23 over ) kubelet Back-off restarting failed container
kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS             RESTARTS             AGE
kube-system   coredns-64897985d-2mkd5                   1/1     Running            0                    19h
kube-system   coredns-64897985d-qhhbz                   1/1     Running            0                    19h
kube-system   etcd-scspa2658542001                      1/1     Running            2                    19h
kube-system   kube-apiserver-scspa2658542001            1/1     Running            8 (3h4m ago)         19h
kube-system   kube-controller-manager-scspa2658542001   1/1     Running            54 (126m ago)        19h
kube-system   kube-flannel-ds-hjw8s                     1/1     Running            14 (18h ago)         19h
kube-system   kube-flannel-ds-windows-amd64-xfhjl       0/1     ImagePullBackOff   0                    29m
kube-system   kube-proxy-windows-hq7bb                  0/1     CrashLoopBackOff   10 (<invalid> ago)   29m
kube-system   kube-proxy-wx2x9                          1/1     Running            0                    19h
kube-system   kube-scheduler-scspa2658542001            1/1     Running            92 (153m ago)        19h
From this issue, it seems Windows nodes with Flannel have known problems that people have solved with various workarounds.
As mentioned in the issue, they have written a guide for getting Windows nodes working properly. Follow this doc for the installation guide and requirements.
Attaching a troubleshooting blog and an issue on CrashLoopBackOff.
I had a similar error, failed to create containerd task: hcsshim::CreateComputeSystem, with Flannel on k8s v1.24. The cause was that Windows OS patches had not been applied. You must have the patch related to KB4489899 applied.
https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/guides/guide-for-adding-windows-node.md#before-you-begin
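As a quick check, you can confirm from PowerShell on the Windows node whether that update is present. Get-HotFix is a stock cmdlet; note that on newer builds the fix may be rolled into a later cumulative update, so an empty result is not conclusive:
Get-HotFix -Id KB4489899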
I am taking a Google Cloud course on launching a Kubernetes Engine cluster. I received this output on both of my runs through the lab.
What is the fix for CrashLoopBackOff? I have not been able to locate one.
(venv) student_04_9b8cb56b5006@cloudshell:~/cloud-vision/python/awwvision/cloud-vision/python/awwvision (qwiklabs-gcp-00-128898864713)$ kubectl get pods
W0619 19:58:53.278025 3544 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
NAME                                READY   STATUS             RESTARTS        AGE
awwvision-webapp-55f5dbb8c7-mdtnq   0/1     CrashLoopBackOff   9 (2m45s ago)   24m
awwvision-worker-79c846b86d-f9mvp   0/1     CrashLoopBackOff   9 (2m4s ago)    23m
awwvision-worker-79c846b86d-lhnt8   0/1     CrashLoopBackOff   9 (2m25s ago)   23m
awwvision-worker-79c846b86d-t79zc   0/1     CrashLoopBackOff   9 (2m45s ago)   23m
redis-master-6c59fc54c-ldk8t        1/1     Running            0               25m
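The usual first step for any CrashLoopBackOff is to read the crashing container's own logs and events, for example for the webapp pod above (pod names will differ on each run):
kubectl logs awwvision-webapp-55f5dbb8c7-mdtnq
kubectl describe pod awwvision-webapp-55f5dbb8c7-mdtnq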
I want to save a Docker image as a tar file, but it failed.
I have these images:
REPOSITORY   TAG          IMAGE ID       CREATED             SIZE
golint       latest       075f4c382720   28 minutes ago      788MB
<none>       <none>       bb0aaa77781c   About an hour ago   781MB
<none>       <none>       2c8079482c47   2 hours ago         781MB
<none>       <none>       911e38aa48e8   2 hours ago         781MB
<none>       <none>       9c40a6f74947   2 hours ago         781MB
<none>       <none>       3c31ac0f697b   3 hours ago         781MB
<none>       <none>       7e04f58c859d   3 hours ago         256MB
<none>       <none>       5c2c67944e4c   4 hours ago         760MB
golang       1.10         6fd1f7edb6ab   2 months ago        760MB
golang       1.9-alpine   b0260be938c6   8 months ago        240MB
The first image on the list, golint, is the one I want to save:
sudo docker save -o golint.tar 075f4c382720
The error message is:
open .docker_temp_186536267: read-only file system
Where can I find the file ".docker_temp_186536267"? Can I change the permissions on that file? What should I do?
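For what it's worth, docker save -o writes through a temporary .docker_temp_* file in the destination directory, and the open failed because that directory sits on a read-only filesystem, so the temp file was never actually created; there is nothing to find or chmod. A sketch of two workarounds, assuming /tmp is writable on your machine (saving by repository:tag rather than by bare image ID also preserves the tag inside the archive):
sudo docker save -o /tmp/golint.tar golint:latest
sudo docker save golint:latest > /tmp/golint.tar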
I'm trying to create multiple containers using Docker Swarm on Windows Server 2016. My service has been created and replicated, but it is not stable: it keeps failing with an error.
PS C:\Users\tmman\Desktop\stackdeploy> docker service ls
ID             NAME       MODE         REPLICAS   IMAGE                                            PORTS
xw6kqqu7o4ad   demo_db    replicated   1/1        microsoft/mssql-server-windows-express:latest
kkrpxwiytax9   demo_web   replicated   1/1        microsoft/iis:latest                             *:80->80/tcp
PS C:\Users\tmman\Desktop\stackdeploy> docker service ps demo_web demo_db
ID             NAME             IMAGE                                            NODE        DESIRED STATE   CURRENT STATE            ERROR                              PORTS
1s4ybqny71sd   demo_web.1       microsoft/iis:latest                             DELEI4127   Running         Starting 2 seconds ago
uohf736ux1ne    \_ demo_web.1   microsoft/iis:latest                             DELEI4127   Shutdown        Failed 7 seconds ago     "task: non-zero exit (21479438…"
owpguwtbpdxc    \_ demo_web.1   microsoft/iis:latest                             DELEI4127   Shutdown        Failed 16 seconds ago    "starting container failed: co…"
dg54mihkflbx    \_ demo_web.1   microsoft/iis:latest                             DELEI4127   Shutdown        Failed 25 seconds ago    "task: non-zero exit (21475000…"
7enznbiqjfp5    \_ demo_web.1   microsoft/iis:latest                             DELEI4127   Shutdown        Failed 37 seconds ago    "starting container failed: co…"
o541pdex9s3p   demo_db.1        microsoft/mssql-server-windows-express:latest    DELEI4127   Running         Running 11 minutes ago
PS C:\Users\tmman\Desktop\stackdeploy> docker ps -a
CONTAINER ID   IMAGE                                           COMMAND                   CREATED          STATUS                               PORTS    NAMES
017afe0a6211   microsoft/iis:latest                            "C:\\ServiceMonitor.e…"   18 seconds ago   Up 10 seconds                        80/tcp   demo_web.1.1s4ybqny71sds3i9h47i59g4s
5889ac4ef8d2   microsoft/iis:latest                            "C:\\ServiceMonitor.e…"   27 seconds ago   Exited (2147943855) 19 seconds ago            demo_web.1.uohf736ux1neaz5u0p1d73jx4
dce80549e789   microsoft/iis:latest                            "C:\\ServiceMonitor.e…"   37 seconds ago   Created                              80/tcp   demo_web.1.owpguwtbpdxc50em7a3rc8m05
92721722311c   microsoft/iis:latest                            "C:\\ServiceMonitor.e…"   48 seconds ago   Exited (2147500037) 38 seconds ago            demo_web.1.dg54mihkflbxt2j0wd6tv8qlt
166d29256771   microsoft/iis:latest                            "C:\\ServiceMonitor.e…"   11 minutes ago   Created                              80/tcp   demo_web.1.7enznbiqjfp5hdvjbgvnj1q1v
fbc3deb1930e   microsoft/mssql-server-windows-express:latest   "powershell -Command…"    11 minutes ago   Up 11 minutes                                 demo_db.1.o541pdex9s3pmaaty8xzkezpx
Error:
PS C:\Users\tmman\Desktop\stackdeploy> docker service logs demo_web
demo_web.1.1vwqd9xgvfd3@DELEI4127 |
demo_web.1.1vwqd9xgvfd3@DELEI4127 | Service 'w3svc' has been stopped
demo_web.1.1vwqd9xgvfd3@DELEI4127 |
demo_web.1.8vs509jzpwb9@DELEI4127 |
demo_web.1.8vs509jzpwb9@DELEI4127 | Service 'w3svc' has been stopped
demo_web.1.8vs509jzpwb9@DELEI4127 |
demo_web.1.8vs509jzpwb9@DELEI4127 | Failed to update IIS configuration
demo_web.1.1vwqd9xgvfd3@DELEI4127 | Failed to update IIS configuration
demo_web.1.z7yhqf1wqqiu@DELEI4127 |
demo_web.1.z7yhqf1wqqiu@DELEI4127 | Service 'w3svc' has been stopped
demo_web.1.z7yhqf1wqqiu@DELEI4127 |
demo_web.1.pt4du3jr20nj@DELEI4127 |
demo_web.1.pt4du3jr20nj@DELEI4127 | Service 'w3svc' has been stopped
demo_web.1.pt4du3jr20nj@DELEI4127 |
demo_web.1.z7yhqf1wqqiu@DELEI4127 | Failed to update IIS configuration
demo_web.1.pt4du3jr20nj@DELEI4127 | Failed to update IIS configuration
PS C:\Users\tmman\Desktop\stackdeploy> docker info
Containers: 6
Running: 2
Paused: 0
Stopped: 4
Images: 3
Server Version: 18.09.2
Storage Driver: windowsfilter
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: ics l2bridge l2tunnel nat null overlay transparent
Log: awslogs etwlogs fluentd gelf json-file local logentries splunk syslog
Swarm: active
NodeID: 1oef3sisa3el3q46tz7aj78eu
Is Manager: true
ClusterID: k4l6x9b42bg7bfnf630g3g2h1
Managers: 1
Nodes: 1
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 10.49.2.50
Manager Addresses:
10.49.2.50:2377
Default Isolation: process
Kernel Version: 10.0 14393 (14393.2848.amd64fre.rs1_release.190305-1856)
Operating System: Windows Server 2016 Standard Version 1607 (OS Build 14393.2848)
OSType: windows
Architecture: x86_64
CPUs: 8
Total Memory: 7.999GiB
Name: DELEI4127
ID: 5BCA:25XO:U3TH:JYXD:BULV:MWXO:UWYF:APIZ:AVCF:R2QO:6KFL:A5SE
Docker Root Dir: C:\ProgramData\docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
My Dockerfile:
# escape=`
FROM microsoft/iis:nanoserver
WORKDIR /inetpub/wwwroot
RUN powershell -Command `
    Add-WindowsFeature Web-Server; `
    Invoke-WebRequest -UseBasicParsing -Uri "https://dotnetbinaries.blob.core.windows.net/servicemonitor/2.0.1.6/ServiceMonitor.exe" -OutFile "C:\ServiceMonitor.exe"
EXPOSE 80
ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]
COPY index.html /index.html
My YAML file:
version: "3"
services:
db:
image: microsoft/mssql-server-windows-express
networks:
- cpxnet
deploy:
environment:
- SA_PASSWORD=Abcd1234
- ACCEPT_EULA=Y
web:
image: microsoft/iis:latest
networks:
- cpxnet
deploy:
resources:
limits:
memory: 50M
ports:
- "80:80"
depends_on:
- db
networks:
cpxnet:
I got some suggestions from this website: https://github.com/Microsoft/aspnet-docker/issues/64 but they didn't help with my error.
Thank you in advance for your help!
Note: I'm a beginner.
I faced a similar issue and had an almost identical Dockerfile.
To troubleshoot, I changed the entrypoint to a continuous ping to localhost, so that I could log in to the container and check the issue with w3svc.
In Event Viewer (which has to be read through PowerShell inside the container) I found multiple strange events saying that w3svc had started successfully, and no stop events, but I still concluded that something was wrong with my service.
Checking further, I found that the app pool was set to "OnDemand" instead of "AlwaysRunning". I set the app pool property to always running in my Dockerfile, then switched the entrypoint back to its original value ("ServiceMonitor.exe w3svc"), and that made my container stable.
Note: I am not using the default IIS image, where ServiceMonitor is already set as the entrypoint.
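For reference, something like this Dockerfile line does it; a minimal sketch assuming the site runs under the stock DefaultAppPool (adjust the pool name if yours differs):
RUN powershell -Command "Import-Module WebAdministration; Set-ItemProperty 'IIS:\AppPools\DefaultAppPool' -Name startMode -Value AlwaysRunning"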
I am trying to start two separate containers, based on two different images, using the docker-compose command.
One image (work) is built from code being worked on in "development". A second image (cons) is built from code that is currently at the "consolidation" level.
When starting the first container, all seems to go OK. Details are below:
WORK DIRECTORY: ~/apps/django.work/extraction/docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    container_name: postgres-work
  web:
    build: .
    image: apostx-cc-backoffice-work
    container_name: cc-backoffice-work
    command: python3 backendworkproj/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "7350:8000"
    depends_on:
      - db
EXECUTION:~/apps/django.work/extraction$ docker-compose up --no-deps -d web
Creating network "extraction_default" with the default driver
Creating cc-backoffice-work ...
Creating cc-backoffice-work ... done
EXECUTION:~/apps/django.work/extraction$ docker container ls
CONTAINER ID   IMAGE                       COMMAND                  CREATED         STATUS         PORTS                    NAMES
39185f36941a   apostx-cc-backoffice-work   "python3 backendwo..."   8 seconds ago   Up 7 seconds   0.0.0.0:7350->8000/tcp   cc-backoffice-work
dede5cb1966a   jarkt/docker-remote-api     "/bin/sh -c 'socat..."   2 days ago      Up 2 days      0.0.0.0:3080->2375/tcp   dock_user_display_remote
But when I move to the second directory to build and start a different image, some strange things start to happen. Again, more details are below:
CONS DIRECTORY: ~/apps/django.cons/extraction/docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    container_name: postgres-cons
  web:
    build: .
    image: apostx-cc-backoffice-cons
    container_name: cc-backoffice-cons
    command: python3 backendworkproj/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "7450:8000"
    depends_on:
      - db
EXECUTION:~/apps/django.cons/extraction$ docker-compose up --no-deps -d web
Recreating cc-backoffice-work ...
Recreating cc-backoffice-work
Recreating cc-backoffice-work ... done
EXECUTION:~/apps/django.cons/extraction$ docker container ls
CONTAINER ID   IMAGE                       COMMAND                  CREATED         STATUS         PORTS                    NAMES
f942f84e567a   apostx-cc-backoffice-cons   "python3 backendwo..."   7 seconds ago   Up 6 seconds   0.0.0.0:7450->8000/tcp   cc-backoffice-cons
dede5cb1966a   jarkt/docker-remote-api     "/bin/sh -c 'socat..."   2 days ago      Up 2 days      0.0.0.0:3080->2375/tcp   dock_user_display_remote
Question
Why is the first container being supplanted when I start the second one? If it is due to some kind of caching issue, how can one re-initialize/clean/clear out the cache before running docker-compose for a second time? Am I missing something here?
TIA
Update - I did the following:
- got rid of old containers by using "docker container rm -f "
- started the "work" (i.e. development) container
execute:~/apps/django.work.ccbo.thecontractors.club/extraction$ docker-compose --verbose up --no-deps -d web >& the_results_are_here
execute:~/apps/django.work.ccbo.thecontractors.club/extraction$ docker container ls
CONTAINER ID   IMAGE                       COMMAND                  CREATED         STATUS         PORTS                    NAMES
61d2e9ccbc28   apostx-cc-backoffice-work   "python3 backendwo..."   4 seconds ago   Up 4 seconds   0.0.0.0:7350->8000/tcp   work-cc-backoffice
dede5cb1966a   jarkt/docker-remote-api     "/bin/sh -c 'socat..."   3 days ago      Up 3 days      0.0.0.0:3080->2375/tcp   dock_user_display_remote
9b4b8b462fcb   wmaker-test-officework      "catalina.sh run"        11 days ago     Up 11 days     0.0.0.0:7700->8080/tcp   testBackOfficeWork.2017.10.30.04.20.01
ad5fd0592a07   wmaker-locl-officework      "catalina.sh run"        11 days ago     Up 11 days     0.0.0.0:7500->8080/tcp   loclBackOfficeWork.2017.10.30.04.20.01
7bc9d7f94828   wmaker-cons-officework      "catalina.sh run"        11 days ago     Up 11 days     0.0.0.0:7600->8080/tcp   consBackOfficeWork.2017.10.30.04.20.01
- seeing that it looks OK, started the container for "cons" (consolidation)
execute:~/apps/django.cons.ccbo.thecontractors.club/extraction$ docker-compose --verbose up --no-deps -d web >& the_results_are_here
execute:~/apps/django.cons.ccbo.thecontractors.club/extraction$ docker container ls
CONTAINER ID   IMAGE                       COMMAND                  CREATED         STATUS         PORTS                    NAMES
0fb24fc45877   apostx-cc-backoffice-cons   "python backendwor..."   5 seconds ago   Up 4 seconds   0.0.0.0:7450->8010/tcp   cons-cc-backoffices
dede5cb1966a   jarkt/docker-remote-api     "/bin/sh -c 'socat..."   3 days ago      Up 3 days      0.0.0.0:3080->2375/tcp   dock_user_display_remote
9b4b8b462fcb   wmaker-test-officework      "catalina.sh run"        11 days ago     Up 11 days     0.0.0.0:7700->8080/tcp   testBackOfficeWork.2017.10.30.04.20.01
ad5fd0592a07   wmaker-locl-officework      "catalina.sh run"        11 days ago     Up 11 days     0.0.0.0:7500->8080/tcp   loclBackOfficeWork.2017.10.30.04.20.01
7bc9d7f94828   wmaker-cons-officework      "catalina.sh run"        11 days ago     Up 11 days     0.0.0.0:7600->8080/tcp   consBackOfficeWork.2017.10.30.04.20.01
Again, the name: work-cc-backoffice has been supplanted by name: cons-cc-backoffices - work-cc-backoffice is totally gone now.
- looked at the file the_results_are_here (from the second run) to see if anything could be found
[... snip ...]
compose.cli.command.get_client: docker-compose version 1.17.1, build 6d101fb
docker-py version: 2.5.1
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.1t 3 May 2016
compose.cli.command.get_client: Docker base_url: http+docker://localunixsocket
compose.cli.command.get_client: Docker version: KernelVersion=4.4.0-72-generic, Arch=amd64, BuildTime=2017-09-26T22:40:56.000000000+00:00, ApiVersion=1.32, Version=17.09.0-ce, MinAPIVersion=1.12, GitCommit=afdb6d4, Os=linux, GoVersion=go1.8.3
compose.cli.verbose_proxy.proxy_callable: docker info <- ()
compose.cli.verbose_proxy.proxy_callable: docker info -> {u'Architecture': u'x86_64',
[... snip ...]
compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- (u'extraction_default')
compose.cli.verbose_proxy.proxy_callable: docker inspect_network -> {u'Attachable': True,
u'ConfigFrom': {u'Network': u''},
u'ConfigOnly': False,
u'Containers': {u'61d2e9ccbc28bb2aba918dc24b5f19a3f68a06b9502ec1b98e83dd947d75d1be': {u'EndpointID': u'e19696ccf258a6cdcfcce41d91d5b3ebcb5fffbce4257e3480ced48a3d7dcc5c',
u'IPv4Address': u'172.20.0.2/16',
u'IPv6Address': u'',
u'MacAddress': u'02:42:ac:14:00:02',
u'Name': u'work-cc-backoffice'}},
u'Created': u'2017-11-10T09:56:22.709914332Z',
u'Driver': u'bridge',
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={u'label': [u'com.docker.compose.project=extraction', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 1 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'61d2e9ccbc28bb2aba918dc24b5f19a3f68a06b9502ec1b98e83dd947d75d1be')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {u'AppArmorProfile': u'docker-default',
u'Args': [u'backendworkproj/manage.py', u'runserver', u'0.0.0.0:8000'],
u'Config': {u'AttachStderr': False,
u'AttachStdin': False,
u'AttachStdout': False,
u'Cmd': [u'python3',
u'backendworkproj/manage.py',
u'runserver',
u'0.0.0.0:8000'],
u'Domainname': u'',
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=extraction', u'com.docker.compose.service=web', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 1 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'61d2e9ccbc28bb2aba918dc24b5f19a3f68a06b9502ec1b98e83dd947d75d1be')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {u'AppArmorProfile': u'docker-default',
To me, it looks like the program is trying to do some initialization by looking for a container that is already up and running(?), as seen in the inspect_network output above. How can one change this behavior?
The answer from @mikeyjk below resolved the issue.
No worries. I wonder whether the issue still occurs if you give each service a unique name and re-run docker-compose build. I'll try to replicate it today if no-one can work it out.
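This matches the verbose output above: docker-compose derives its default project name from the directory name, and both trees end in a directory called extraction (note the com.docker.compose.project=extraction filter in the log), so the second up is treated as an update of the same web service and recreates the running container. A sketch of one workaround, assuming the directory layout stays as it is, is to give each tree its own project name with -p (or the COMPOSE_PROJECT_NAME environment variable):
cd ~/apps/django.work/extraction && docker-compose -p work up --no-deps -d web
cd ~/apps/django.cons/extraction && docker-compose -p cons up --no-deps -d web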