How do you use a private registry with Docker? (Windows)

I followed the tutorial
https://docs.docker.com/get-started/part4/#deploy-the-app-on-the-swarm-manager
And created my own registry using
https://github.com/docker/docker-registry/blob/master/README.md#quick-start
https://docs.docker.com/registry/#basic-commands
https://blog.docker.com/2013/07/how-to-use-your-own-registry/
However it fails to deploy on the worker nodes with the error "No such image: 192.168.99.100". What is wrong?
docker run -d -p 5000:5000 --name registry registry:2
docker tag friendlyhello 192.168.99.100:5000/get-started:part2
docker push 192.168.99.100:5000/get-started # Get https://192.168.99.100:5000/v2/: http: server gave HTTP response to HTTPS client
docker tag friendlyhello localhost:5000/get-started:part2
docker push localhost:5000/get-started:part2
docker stack deploy -c docker-compose.yml getstartedlab
docker service ps getstartedlab_web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
PORTS
o4nbsqccqlm4 getstartedlab_web.1 192.168.99.100:5000/get-started:part2 default Running Running 17 minutes ago
qcjtq3gqag9j \_ getstartedlab_web.1 192.168.99.100:5000/get-started:part2 myvm1 Shutdown Rejected 17 minutes ago "No such image: 192.168.99.100…"
This is my docker-compose.yml file:
...
image: 192.168.99.100:5000/get-started:part2
...
I tried to use image: localhost:5000/get-started:part2 in the docker-compose.yml file also, but it gave the error No such image: localhost:5000.
docker stack rm getstartedlab
docker stack deploy -c docker-compose.yml getstartedlab
docker service ps getstartedlab_web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
PORTS
k2cck1p7wpg1 getstartedlab_web.1 localhost:5000/get-started:part2 default Running Running 10 seconds ago
69km7zabgw6l \_ getstartedlab_web.1 localhost:5000/get-started:part2 myvm1 Shutdown Rejected 21 seconds ago "No such image: localhost:5000…"
Windows 8.1, Docker version 18.03.0-ce, build 0520e24302
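For reference, the "server gave HTTP response to HTTPS client" error on the push happens because the Docker daemon assumes registries are served over HTTPS by default, while a plain `docker run ... registry:2` serves only HTTP. A common workaround (an assumption about your setup: this must be done on every daemon that needs to reach the registry, including the docker-machine VMs) is to whitelist the registry in the daemon's `daemon.json`, using the IP from the question:

```json
{
  "insecure-registries": ["192.168.99.100:5000"]
}
```

After editing the daemon configuration, restart the Docker daemon and retry the push/pull.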

Related

Docker issue: Cannot run the container (repository does not exist or may require 'docker login')

After developing a Spring Boot project that uses MinIO, I tried to run it in Docker but ran into an issue.
Here is my docker-compose.yaml file
version: '3.8'
services:
  minio:
    image: minio/minio:latest
    container_name: minio
    environment:
      MINIO_ACCESS_KEY: "minioadmin"
      MINIO_SECRET_KEY: "minioadmin"
    volumes:
      - ./data:/data
    ports:
      - 9000:9000
      - 9001:9001
First I run docker-compose up -d.
Then I run docker ps -a to check that the container was created. After that, I run docker run <container-id> (a07fdf1ef8c4), which produces the message shown below.
Unable to find image 'a07fdf1ef8c4:latest' locally
docker: Error response from daemon: pull access denied for a07fdf1ef8c4, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
I also ran the command shown below, but nothing changed.
C:\Users\host\IdeaProjects\SpringBootMinio>docker run -p 9000:9000 9001:9001 minio/minio:latest
Unable to find image '9001:9001' locally
docker: Error response from daemon: pull access denied for 9001, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
Even running docker login didn't fix it.
How can I solve this?
1st Error
docker run <container-id> - that is not how you start a container with Docker. When you run docker-compose up -d, the containers are already started; in this case that's MinIO.
The docker run command takes an image name as its argument, so docker run <container-id> makes Docker look for an image named after the container ID, which doesn't exist.
In short: docker-compose up -d already starts minio; you do not need to start it again.
2nd Error
When you run docker run -p 9000:9000 9001:9001 minio/minio:latest, you are effectively telling Docker that the image name is 9001:9001, and no such image exists. If you want to publish another port, use docker run -p 9000:9000 -p 9001:9001 minio/minio:latest: each port mapping needs its own -p flag.
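Separately from the two errors above, note that the compose file in the question gives the MinIO container no command. With recent minio/minio images the container typically must be told to start the server; a sketch of the service (the data path, console port, and the newer MINIO_ROOT_* variable names are assumptions based on current MinIO images, so check the image docs for your version):

```yaml
services:
  minio:
    image: minio/minio:latest
    container_name: minio
    environment:
      MINIO_ROOT_USER: "minioadmin"       # newer images prefer MINIO_ROOT_USER/PASSWORD
      MINIO_ROOT_PASSWORD: "minioadmin"   # over MINIO_ACCESS_KEY/MINIO_SECRET_KEY
    # start the server and expose the web console on 9001
    command: server /data --console-address ":9001"
    volumes:
      - ./data:/data
    ports:
      - 9000:9000
      - 9001:9001
```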

How do I deploy a Docker app without publishing it?

How do I deploy a Docker app without publishing it to their hub? I don't want to create a username and password on their service (they just want to trap flies in their ecosystem), and I don't think I will use the swarm part of Docker. Besides, it sounds very insecure to publish your closed-source code to a public repository! However, I want to see how it works and want to learn the stack part, which depends on swarm. I followed their tutorial, but the app only deployed on the local default master node.
https://docs.docker.com/get-started/part4/#deploy-the-app-on-the-swarm-manager
docker-compose.yml
...
# replace username/repo:tag with your name and image details
image: friendlyhello
3 machines/nodes with 1 master node
C:\Temp\docker-tutorial>docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://192.168.99.100:2376 v18.03.1-ce
myvm1 - virtualbox Running tcp://192.168.99.101:2376 v18.03.1-ce
myvm2 - virtualbox Running tcp://192.168.99.102:2376 v18.03.1-ce
The app is deployed with 6 instances.
C:\Temp\docker-tutorial>docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
uvsxf1q7brhb getstartedlab_web replicated 6/6 friendlyhello:latest *:80->80/tcp
However, the app only ran on the default master node and on none of the swarm worker nodes.
C:\Temp\docker-tutorial>docker service ps getstartedlab_web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
PORTS
6jh1ua0wjyzi getstartedlab_web.1 friendlyhello:latest default Running Running about an hour ago
to14hu7g3rhz \_ getstartedlab_web.1 friendlyhello:latest myvm1 Shutdown Rejected about an hour ago "No such image: friendlyhello:"
ek91tcdj61nv \_ getstartedlab_web.1 friendlyhello:latest myvm1 Shutdown Rejected about an hour ago "No such image: friendlyhello:"
jwdvuf89a640 \_ getstartedlab_web.1 friendlyhello:latest myvm2 Shutdown Rejected about an hour ago "No such image: friendlyhello:"
xrp0rim67ipi getstartedlab_web.2 friendlyhello:latest default Running Running about an hour ago
tp008eoj2mpk getstartedlab_web.3 friendlyhello:latest default Running Running about an hour ago
w6wyk3nj53zv \_ getstartedlab_web.3 friendlyhello:latest myvm2 Shutdown Rejected about an hour ago "No such image: friendlyhello:"
7ts6aqianz7l \_ getstartedlab_web.3 friendlyhello:latest myvm1 Shutdown Rejected about an hour ago "No such image: friendlyhello:"
gjt1qks57rud \_ getstartedlab_web.3 friendlyhello:latest myvm1 Shutdown Rejected about an hour ago "No such image: friendlyhello:"
o05u4qwt12vq getstartedlab_web.4 friendlyhello:latest default Running Running about an hour ago
ifzmmy8ru443 \_ getstartedlab_web.4 friendlyhello:latest myvm1 Shutdown Rejected about an hour ago "No such image: friendlyhello:"
jnxn8gs3bte3 \_ getstartedlab_web.4 friendlyhello:latest myvm2 Shutdown Rejected about an hour ago "No such image: friendlyhello:"
xsooht9gpf01 \_ getstartedlab_web.4 friendlyhello:latest myvm2 Shutdown Rejected about an hour ago "No such image: friendlyhello:"
v23mjl8n3yyd getstartedlab_web.5 friendlyhello:latest default Running Running about an hour ago
meocennltdph getstartedlab_web.6 friendlyhello:latest default Running Running about an hour ago
3t78bpswwuyw \_ getstartedlab_web.6 friendlyhello:latest myvm2 Shutdown Rejected about an hour ago "No such image: friendlyhello:"
y3ih3md932qo \_ getstartedlab_web.6 friendlyhello:latest myvm2 Shutdown Rejected about an hour ago "No such image: friendlyhello:"
sqsngkq1440a \_ getstartedlab_web.6 friendlyhello:latest myvm1 Shutdown Rejected about an hour ago "No such image: friendlyhello:"
Docker version 18.03.0-ce, build 0520e24302, Windows 8.1
I tried to follow
https://github.com/docker/docker-registry/blob/master/README.md#quick-start
https://docs.docker.com/registry/#basic-commands
https://blog.docker.com/2013/07/how-to-use-your-own-registry/
I set this line in docker-compose.yml
image: 192.168.99.100:5000/get-started:part2
But after I ran docker stack deploy it still failed!
C:\Temp\docker-tutorial>docker stack deploy -c docker-compose.yml getstartedlab
Creating network getstartedlab_webnet
Creating service getstartedlab_web
C:\Temp\docker-tutorial>docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
jjr7cuqy2i54 getstartedlab_web replicated 0/6 192.168.99.100:5000/get-started:part2 *:80->80/tcp
C:\Temp\docker-tutorial>docker service ps getstartedlab_web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
PORTS
bsx3slkj8pbr getstartedlab_web.1 192.168.99.100:5000/get-started:part2 myvm1 Ready Rejected 3 seconds ago "No such image: 192.168.99.100"
cusqg0p35cwp \_ getstartedlab_web.1 192.168.99.100:5000/get-started:part2 default Shutdown Rejected 8 seconds ago "No such image: 192.168.99.100"
...
The image can be pulled via localhost but not via 192.168.99.100:
C:\Temp\docker-tutorial>docker pull localhost:5000/get-started:part2
part2: Pulling from get-started
Digest: sha256:fedc2e7c01a45dab371cf4e01b7f8854482b33564c52d2c725f52f787f91dbcb
Status: Image is up to date for localhost:5000/get-started:part2
C:\Temp\docker-tutorial>docker pull 192.168.99.100:5000/get-started:part2
Error response from daemon: Get https://192.168.99.100:5000/v2/: http: server gave HTTP response to HTTPS client
localhost:5000 refuses to connect in the browser. I also tried localhost:5000/get-started:part2 as the image name, but that also failed.
You can host your own Docker container registry, or use a private container registry from one of many cloud providers with your own auth. A few options:
AWS ECR / Amazon Elastic Container Registry: https://aws.amazon.com/ecr/
Azure Container Registry: https://azure.microsoft.com/en-us/services/container-registry/
Codefresh private docker registries: https://codefresh.io/
Artifactory: https://www.jfrog.com/confluence/display/RTF/Docker+Registry
If you want complete control, you can alternatively host your own Docker registry:
https://github.com/docker/docker-registry/blob/master/README.md
https://blog.docker.com/2013/07/how-to-use-your-own-registry/
Once you set up your registry, you can simply authenticate with docker login and then manage your images with docker push/pull as usual.
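As a sketch of that workflow (the registry hostname, namespace, and image names below are placeholders, not taken from the question):

```shell
# Log in once per machine that needs to push or pull
docker login registry.example.com

# Tag the local image with the registry's hostname, then push it
docker tag myapp:latest registry.example.com/myteam/myapp:1.0
docker push registry.example.com/myteam/myapp:1.0

# On another machine (after docker login), pull it back
docker pull registry.example.com/myteam/myapp:1.0
```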

Creation of a container using docker-compose deletes another container that is already running

I am trying to start 2 separate containers with the docker-compose command, based on 2 different images.
One image (work) is built from code being worked on in "development". A second image (cons) is built from code currently at the "consolidation" level.
When starting the first container, all seems to go OK.
Details of the above are here:
WORK DIRECTORY: ~/apps/django.work/extraction/docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    container_name: postgres-work
  web:
    build: .
    image: apostx-cc-backoffice-work
    container_name: cc-backoffice-work
    command: python3 backendworkproj/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "7350:8000"
    depends_on:
      - db
EXECUTION:~/apps/django.work/extraction$ docker-compose up --no-deps -d web
Creating network "extraction_default" with the default driver
Creating cc-backoffice-work ...
Creating cc-backoffice-work ... done
EXECUTION:~/apps/django.work/extraction$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39185f36941a apostx-cc-backoffice-work "python3 backendwo..." 8 seconds ago Up 7 seconds 0.0.0.0:7350->8000/tcp cc-backoffice-work
dede5cb1966a jarkt/docker-remote-api "/bin/sh -c 'socat..." 2 days ago Up 2 days 0.0.0.0:3080->2375/tcp dock_user_display_remote
But, when I work with the second directory to compile and start a different image, some strange things start to happen:
Again, more details are below:
CONS DIRECTORY: ~/apps/django.cons/extraction/docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    container_name: postgres-cons
  web:
    build: .
    image: apostx-cc-backoffice-cons
    container_name: cc-backoffice-cons
    command: python3 backendworkproj/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "7450:8000"
    depends_on:
      - db
EXECUTION:~/apps/django.cons/extraction$ docker-compose up --no-deps -d web
Recreating cc-backoffice-work ...
Recreating cc-backoffice-work
Recreating cc-backoffice-work ... done
EXECUTION:~/apps/django.cons/extraction$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f942f84e567a apostx-cc-backoffice-cons "python3 backendwo..." 7 seconds ago Up 6 seconds 0.0.0.0:7450->8000/tcp cc-backoffice-cons
dede5cb1966a jarkt/docker-remote-api "/bin/sh -c 'socat..." 2 days ago Up 2 days 0.0.0.0:3080->2375/tcp dock_user_display_remote
Question
Why is the first container being supplanted when I start the second one? If it is due to some kind of caching issue, how can one re-initialize/clean/clear out the cache before running docker-compose for a second time? Am I missing something here?
TIA
Update - I did the following:
- Got rid of old containers by using "docker container rm -f "
- Started the "work" (i.e. development) container
execute:~/apps/django.work.ccbo.thecontractors.club/extraction$ docker-compose --verbose up --no-deps -d web >& the_results_are_here
execute:~/apps/django.work.ccbo.thecontractors.club/extraction$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
61d2e9ccbc28 apostx-cc-backoffice-work "python3 backendwo..." 4 seconds ago Up 4 seconds 0.0.0.0:7350->8000/tcp work-cc-backoffice
dede5cb1966a jarkt/docker-remote-api "/bin/sh -c 'socat..." 3 days ago Up 3 days 0.0.0.0:3080->2375/tcp dock_user_display_remote
9b4b8b462fcb wmaker-test-officework "catalina.sh run" 11 days ago Up 11 days 0.0.0.0:7700->8080/tcp testBackOfficeWork.2017.10.30.04.20.01
ad5fd0592a07 wmaker-locl-officework "catalina.sh run" 11 days ago Up 11 days 0.0.0.0:7500->8080/tcp loclBackOfficeWork.2017.10.30.04.20.01
7bc9d7f94828 wmaker-cons-officework "catalina.sh run" 11 days ago Up 11 days 0.0.0.0:7600->8080/tcp consBackOfficeWork.2017.10.30.04.20.01
- Seeing that it looks OK, started the container for "cons" (consolidation)
execute:~/apps/django.cons.ccbo.thecontractors.club/extraction$ docker-compose --verbose up --no-deps -d web >& the_results_are_here
execute:~/apps/django.cons.ccbo.thecontractors.club/extraction$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0fb24fc45877 apostx-cc-backoffice-cons "python backendwor..." 5 seconds ago Up 4 seconds 0.0.0.0:7450->8010/tcp cons-cc-backoffices
dede5cb1966a jarkt/docker-remote-api "/bin/sh -c 'socat..." 3 days ago Up 3 days 0.0.0.0:3080->2375/tcp dock_user_display_remote
9b4b8b462fcb wmaker-test-officework "catalina.sh run" 11 days ago Up 11 days 0.0.0.0:7700->8080/tcp testBackOfficeWork.2017.10.30.04.20.01
ad5fd0592a07 wmaker-locl-officework "catalina.sh run" 11 days ago Up 11 days 0.0.0.0:7500->8080/tcp loclBackOfficeWork.2017.10.30.04.20.01
7bc9d7f94828 wmaker-cons-officework "catalina.sh run" 11 days ago Up 11 days 0.0.0.0:7600->8080/tcp consBackOfficeWork.2017.10.30.04.20.01
Again, the name work-cc-backoffice has been supplanted by cons-cc-backoffices; work-cc-backoffice is gone entirely now.
- Looked at the file the_results_are_here (from the second run) to see if anything can be found:
[... snip ...]
compose.cli.command.get_client: docker-compose version 1.17.1, build 6d101fb
docker-py version: 2.5.1
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.1t 3 May 2016
compose.cli.command.get_client: Docker base_url: http+docker://localunixsocket
compose.cli.command.get_client: Docker version: KernelVersion=4.4.0-72-generic, Arch=amd64, BuildTime=2017-09-26T22:40:56.000000000+00:00, ApiVersion=1.32, Version=17.09.0-ce, MinAPIVersion=1.12, GitCommit=afdb6d4, Os=linux, GoVersion=go1.8.3
compose.cli.verbose_proxy.proxy_callable: docker info <- ()
compose.cli.verbose_proxy.proxy_callable: docker info -> {u'Architecture': u'x86_64',
[... snip ...]
compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- (u'extraction_default')
compose.cli.verbose_proxy.proxy_callable: docker inspect_network -> {u'Attachable': True,
u'ConfigFrom': {u'Network': u''},
u'ConfigOnly': False,
u'Containers': {u'61d2e9ccbc28bb2aba918dc24b5f19a3f68a06b9502ec1b98e83dd947d75d1be': {u'EndpointID': u'e19696ccf258a6cdcfcce41d91d5b3ebcb5fffbce4257e3480ced48a3d7dcc5c',
u'IPv4Address': u'172.20.0.2/16',
u'IPv6Address': u'',
u'MacAddress': u'02:42:ac:14:00:02',
u'Name': u'work-cc-backoffice'}},
u'Created': u'2017-11-10T09:56:22.709914332Z',
u'Driver': u'bridge',
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={u'label': [u'com.docker.compose.project=extraction', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 1 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'61d2e9ccbc28bb2aba918dc24b5f19a3f68a06b9502ec1b98e83dd947d75d1be')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {u'AppArmorProfile': u'docker-default',
u'Args': [u'backendworkproj/manage.py', u'runserver', u'0.0.0.0:8000'],
u'Config': {u'AttachStderr': False,
u'AttachStdin': False,
u'AttachStdout': False,
u'Cmd': [u'python3',
u'backendworkproj/manage.py',
u'runserver',
u'0.0.0.0:8000'],
u'Domainname': u'',
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=extraction', u'com.docker.compose.service=web', u'com.docker.compose.oneoff=False']})
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 1 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'61d2e9ccbc28bb2aba918dc24b5f19a3f68a06b9502ec1b98e83dd947d75d1be')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {u'AppArmorProfile': u'docker-default',
To me, it looks like the program does some initialization by looking for a container that is already up and running under the same compose project (see the inspect_container calls in the log above). How can one change this behavior?
The answer from @mikeyjk resolved the issue:
No worries. I wonder whether the issue still occurs if you give each service a unique name and re-run docker-compose build. I'll try to replicate it today if no one can work it out.
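A likely root cause, beyond service naming: Compose identifies containers by project name, which defaults to the name of the directory holding the compose file. Both trees end in a directory called extraction (the log above shows the label com.docker.compose.project=extraction), so the second docker-compose up treats the first project's web container as its own and recreates it. A sketch of one fix (the project names work and cons are arbitrary choices):

```shell
# In ~/apps/django.work/extraction
docker-compose -p work up --no-deps -d web

# In ~/apps/django.cons/extraction
docker-compose -p cons up --no-deps -d web

# Alternatively, set the project name once per shell/tree:
export COMPOSE_PROJECT_NAME=cons
docker-compose up --no-deps -d web
```

With distinct project names, the two compose runs manage disjoint sets of containers and no longer supplant each other.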

How to use docker run with a Meteor image?

I have 2 containers: mgmt-app, which is a Meteor container, and mgmt-mongo, which is the MongoDB.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b65be4ac454 gitlab-lab:5005/dfc/mongo:latest "/entrypoint.sh mongo" About an hour ago Up About an hour 27017/tcp mgmt-mongo
dff0b3c69c5f gitlab-lab:5005/dfc/mgmt-docker-gui:lab "/bin/sh -c 'sh $METE" About an hour ago Up 42 minutes 0.0.0.0:80->80/tcp mgmt-app
From my Docker host I want to run docker run gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al,
but I get this error:
=> Starting meteor app on port:80
/app/programs/server/node_modules/fibers/future.js:280
throw(ex);
^
Error: MONGO_URL must be set in environment
So I tried:
docker run -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
and then the error was:
/app/programs/server/node_modules/fibers/future.js:313
throw(ex);
^
MongoError: failed to connect to server [mgmt-mongo:27017] on first connect
I really don't understand, because when I do a docker-compose up -d with this file:
mgmt-app:
  image: gitlab-lab:5005/dfc/mgmt-docker-gui:latest
  container_name: mgmt-app
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - $HOME/.docker:/root/.docker
    - /home/dockeradm/compose/area:/home/dockeradm/compose/area
  environment:
    - ROOT_URL=http://localhost:80
    - MONGO_URL=mongodb://mgmt-mongo:27017/meteor
  ports:
    - 80:80
  restart: always
mgmt-mongo:
  image: gitlab-lab:5005/dfc/mongo:latest
  container_name: mgmt-mongo
  volumes:
    - mgmt_mongo_data_config:/data/configdb
    - mgmt_mongo_data_db:/data/db
  restart: always
everything goes well.
So my question is: how should I run docker run to execute my command? (The command is not a simple ls -al, but that's fine for the demo.)
When you run the containers separately with docker run, they are not attached to the same Docker network, so the mongo container is not reachable from the app container. To remedy this, use either:
--link to link the app container to the mongo container. This works, but is deprecated.
a user-defined Docker network that both containers join; this takes slightly more setup, but is the recommended approach.
By contrast, docker-compose automatically adds both containers to the same docker network, so they are immediately connectable without any extra configuration required:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
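A minimal sketch of the user-defined network approach, using the image names from the question (the network name meteor-net is an arbitrary choice):

```shell
# Create a user-defined bridge network
docker network create meteor-net

# Start mongo first, with a name the app can resolve over that network
docker run -d --name mgmt-mongo --network meteor-net gitlab-lab:5005/dfc/mongo:latest

# Run the app container on the same network; mgmt-mongo now resolves by name
docker run --network meteor-net \
  -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" \
  gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
```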

Hyperledger on Mac using Docker images cannot find the Docker daemon?

I am following this article: https://developer.ibm.com/opentech/2016/06/27/running-hyperledger-fabric-natively-on-mac/
Then I got this error:
15:32:16.165 [dockercontroller] deployImage -> ERRO 052 Error building images: cannot connect to Docker endpoint
It seems that the Docker daemon is not accessible from the running container. The config points to CORE_VM_ENDPOINT=http://127.0.0.1:2375
I have a Mac using "Docker Beta".
Any idea?
If your CORE_VM_ENDPOINT is a sock file, you need to mount /var/run/docker.sock into the peer container.
Add the lines below to docker-compose.yml to mount /var/run/docker.sock:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
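Putting both pieces together, the peer service would look roughly like this (a sketch; the service name vp0 and the image name are illustrative, not taken from the article):

```yaml
services:
  vp0:
    image: hyperledger/fabric-peer
    environment:
      # point the peer at the mounted socket instead of a TCP endpoint
      - CORE_VM_ENDPOINT=unix:///var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```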
