Docker and Namespace-related errors after a successful login to Bluemix - macOS

I have installed Python (with pip and easy_install), Cloud Foundry, and ICE on my host machine, OS X 10.10.3.
I booted boot2docker and attempted an ice login.
After a successful login attempt:
mbp-idan:~ idanadar$ boot2docker up
Waiting for VM and Docker daemon to start...
.o
Started.
Writing /Users/idanadar/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/idanadar/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/idanadar/.boot2docker/certs/boot2docker-vm/key.pem
Your environment variables are already set correctly.
mbp-idan:~ idanadar$ ice login
API endpoint: https://api.ng.bluemix.net
Email> my-email-address
Password> my-password
Authenticating...
OK
Targeted org my-email-address
Targeted space dev
API endpoint: https://api.ng.bluemix.net (API version: 2.23.0)
User: my-email-address
Org: my-email-address
Space: dev
Authentication with container cloud service at https://api-ice.ng.bluemix.net/v2/containers completed successfully
You can issue commands now to the container service
I immediately encounter the following errors:
Authentication issue:
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
FATA[0005] Error response from daemon: Login: You must set a namespace before you login to the registry. See 'ice help namespace' (Code: 404; Headers: map[Connection:[Keep-Alive] Date:[Wed, 27 May 2015 18:57:41 GMT] Content-Type:[text/plain] X-Client-Ip:[79.176.226.146] X-Backside-Transport:[FAIL FAIL] Server:[nginx/1.7.9] X-Global-Transaction-Id:[380677271] Set-Cookie:[DPJSESSIONID=PBC5YS:481842763; Path=/; Domain=.registry-ice.ng.bluemix.net]])
Docker issue:
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
The only configuration I did beforehand was adding the following to ~/.bash_profile, which is what boot2docker up told me to set:
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/idanadar/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
Once I made this change, I started getting the two errors above. If I comment out these three lines in .bash_profile and do not run boot2docker shellinit after boot2docker up, I get this error instead:
FATA[0000] Post http:///var/run/docker.sock/v1.18/auth: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
If I replace the three lines with this single line:
export DOCKER_HOST=tcp://$(boot2docker ip 2>/dev/null):2375
I get the following error, which is slightly different. Note the -d and the absence of the namespace error:
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
FATA[0000] Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
[docker] Any idea what the right way is to get Docker recognized? This is being tested on OS X 10.10.3.
[bluemix] Any idea about the namespace?
For some reason the two seem interlinked.

The error output by ICE is really unhelpful. To solve it, I:
Added the original 3 lines back to ~/.bash_profile
Created the namespace on Bluemix.net
After that, everything fell into place and is working.
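For those who prefer to stay on the command line: the FATA error's pointer to 'ice help namespace' suggests the namespace can also be created from the CLI. A minimal sketch, assuming my-namespace is a free name (the exact subcommand may vary by ICE version, so check ice help namespace first):

ice namespace set my-namespace    # claim a registry namespace (name is hypothetical)
ice login                         # log in again so the registry authentication picks it up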

Related

Cannot pull image from AWS ECR repository using docker with VirtualBox or Colima

The Situation
As you may know, Docker has changed the license for Docker Desktop, limiting free usage to certain use cases.
As a result, I have resorted to alternatives such as Colima and VirtualBox as a means of continuing to use the docker CLI while respecting Docker's new terms.
While this works fine for pulling images from Docker Hub, I've noticed that I can no longer pull images from my company's AWS ECR repo. The failure is caused by an unknown-certificate-authority error.
My understanding of how Docker runs is limited, but the gist I got from this Stack Overflow post is that the docker CLI acts as the client through which the developer sends commands to the Docker daemon running in a virtual machine. So this issue is most likely related to the VM the Docker daemon runs on.
The Error Message
Pulling from myrepo/myapp
5ad559c5ae16: Pulling fs layer
d7a7f7e76287: Pulling fs layer
3eb3e996f0d7: Pulling fs layer
d8f3fbab0eaf: Waiting
d310dd0da683: Waiting
6f542466a6be: Waiting
8851a2099770: Waiting
f1dd90cdff4b: Waiting
4a852bd6c6f1: Waiting
538106d55e7d: Waiting
dbc972867db8: Waiting
2bc8828e78a2: Waiting
1a653b47f557: Waiting
877c2f613a70: Waiting
09eac264496b: Waiting
66dd8ce5c695: Waiting
ccde39d6cfef: Waiting
4351b359c9e4: Waiting
52e095209afc: Waiting
c6ad9f161855: Waiting
233f3e28c5a3: Waiting
error pulling image configuration: Get https://prod-ca-central-1-starport-layer-bucket.s3.ca-central-1.amazonaws.com/<a-very-long-hash>/<another-very-long-hash>?X-Amz-Security-Token=<AWS-security-token>&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20220210T215140Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Credential=<my-credential>&X-Amz-Signature=<amazon-signature>: x509: certificate signed by unknown authority
My hypothesis for why I'm getting this error message
This is purely a guess. Please feel free to correct me.
I know that with Docker Desktop I do not get this certificate error, and my guess is that thanks to the HyperKit integration, the VM can run via localhost, which allows the Docker daemon to tap into macOS' trusted certificate-authority certs.
The problem arises because the VM I've obtained from the internet no longer has access to those trusted certs.
What I've tried
Ensured I'm logged into ECR using the AWS command aws ecr get-login-password --region ca-central-1 | docker login --username AWS --password-stdin <my-aws-account-id>.dkr.ecr.ca-central-1.amazonaws.com
Reinstalled both Colima and the VirtualBox hypervisor
Isolated the issue by experimenting solely with the VirtualBox setup
I noticed that the folder /etc/docker is present on the VM. According to Docker's documentation, the default directory for per-registry certificates is /etc/docker/certs.d, which I noticed is absent from my virtual machine installation (see the sketch below).
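For reference, Docker's documented layout keys the certificates by registry host, so inside the VM it would presumably look something like the following (a sketch only; the certificate file is a placeholder, and for ECR the failing certificate is actually Amazon's public CA for the S3 endpoint rather than a private registry cert):

sudo mkdir -p /etc/docker/certs.d/<my-aws-account-id>.dkr.ecr.ca-central-1.amazonaws.com
sudo cp my-ca.crt /etc/docker/certs.d/<my-aws-account-id>.dkr.ecr.ca-central-1.amazonaws.com/ca.crt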
I think I'm close to a solution, but I'm quite new to how certificates work and I'm not sure where to obtain the certificates I need to put in that path to test.
Does anyone know how this can be done?
I ran into the same issue; doing the following worked for me:
Remove the line "credsStore": "xxx" from ~/.docker/config.json.
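For illustration, a minimal sketch of the change in ~/.docker/config.json ("desktop" is just an example of a helper left over from Docker Desktop; your value may differ):

Before:
{
  "auths": { },
  "credsStore": "desktop"
}

After:
{
  "auths": { }
}

Once the key is removed, run docker login for the registry again so the credentials are stored directly in config.json.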

Kubernetes fails to start on Docker Desktop without direct internet access

I'm running Docker Desktop 3.6.0 on Windows 10 with WSL2.
When I try to enable Kubernetes I only see "Failed to start" within the Docker Desktop UI.
Docker itself works fine. Not sure how I can get any further logs.
Here is the output from kubectl version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"windows/amd64"}
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
From other posts it seems that an internet connection is required for the initial setup:
https://stackoverflow.com/a/52765732/1100559
https://stackoverflow.com/a/63318739/1100559
A direct internet connection is not possible in my work environment; I can only manually copy the required images onto my PC.
I also do not have admin access.
Is there a way to manually set up Kubernetes on Docker Desktop, or to somehow indicate where the required images can be found?
I have a Nexus Docker repository that I can push the required images to.
I have changed ~\.docker\daemon.json and added my Docker repository to insecure-registries. After the first login, Docker is able to pull images from there and run them.
I have already tried resetting Kubernetes and enabling/disabling it. Deleting ~/.kube/config did not work either.
High-level answer...
Get a Docker registry:
If you work for an old-skool cool enterprise, use JFrog Artifactory
If you just want to get it to work, use Harbor
GitHub and GitLab (depending on license) have registries available too...
Edit the Docker daemon config on the Kubernetes nodes (your workstation) so it pulls only from these registries (see the daemon.json sketch below):
if Red Hat: /etc/containers/registries.conf
if Debian: /etc/docker/daemon.json
you might be able to hack an /etc/hosts entry too...
Populate the new registry.
Run Kubernetes and you should be good to go. Depending on the configuration you choose, you may need to add a registry credential secret.
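As a sketch, the Debian-style daemon.json for pointing Docker at an internal registry might look like this (the hostname and port are hypothetical; note that registry-mirrors only applies to Docker Hub pulls, so images from other registries need to be retagged to your internal host):

{
  "insecure-registries": ["nexus.internal.example:5000"],
  "registry-mirrors": ["https://nexus.internal.example:5000"]
}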

M1 mac cannot run jboss/keycloak docker image

I switched to an M1 Mac a week ago and I cannot get my application up and running with Docker, because the jboss/keycloak image does not work as expected. I get the following messages from the container when trying to access localhost:8080:
12:08:12,456 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-5) MSC000001: Failed to start service org.wildfly.network.interface.private: org.jboss.msc.service.StartException in service org.wildfly.network.interface.private: WFLYSRV0082: failed to resolve interface private
12:08:12,526 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([("interface" => "private")]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.network.interface.private" => "WFLYSRV0082: failed to resolve interface private"}}
12:08:13,463 ERROR [org.jboss.as] (Controller Boot Thread) WFLYSRV0026: Keycloak 12.0.4 (WildFly Core 13.0.3.Final) started (with errors) in 20826ms - Started 483 of 925 services (54 services failed or missing dependencies, 684 services are lazy, passive or on-demand)
I tried all image versions and they all behave the same. Has anyone managed to run this image without issues? Thanks.
Alternatively, you can build the Keycloak Docker image locally; I was able to start Keycloak after doing that. Here are the steps I followed:
Clone the Keycloak containers repository: git clone git@github.com:keycloak/keycloak-containers.git
Open the server directory (cd keycloak-containers/server)
Check out the desired version, e.g. git checkout 12.0.4
Build the Docker image: docker build -t jboss/keycloak:12.0.4 .
Run Keycloak: docker run --rm -p 9080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin jboss/keycloak:12.0.4
Using this image, I am now able to start up Keycloak: https://hub.docker.com/r/wizzn/keycloak
For Keycloak 16, Docker 20.10 and docker-compose 1.29, this image works flawlessly, as suggested by @zakjan: https://hub.docker.com/r/sleighzy/keycloak
A service like:
keycloak:
    image: sleighzy/keycloak
    environment:
        ... your Keycloak config
Should be enough to get up and running.
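Assuming the service above lives in a docker-compose.yml, starting it is then just:

docker-compose up -d keycloak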
I'm on an M1 and I ran this and it worked:
docker run --platform=linux/amd64 -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:17.0.0 start-dev
I merely added --platform=linux/amd64 to the docker command from https://www.keycloak.org/getting-started/getting-started-docker
The location for building a Quarkus version of Keycloak has changed, so the method above no longer works for major releases greater than 16. The following script does. Just save it as an .sh file and execute it in your terminal. Uncommenting the last line will also directly start a Keycloak instance.
The version number can be changed, but this has only been tested for M1 chips and version 17.0.0.
VERSION=17.0.0 # set version here
cd /tmp
git clone git@github.com:keycloak/keycloak.git
cd keycloak/quarkus/container
git checkout $VERSION
docker build -t "quarkus-keycloak:$VERSION" .
#docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin "quarkus-keycloak:$VERSION" start-dev --http-relative-path /auth
There is an update on this issue: images for the AMD64 and ARM64 architectures are now available and can be found here: https://quay.io/repository/keycloak/keycloak?tab=tags.
See the discussions on GitHub (https://github.com/keycloak/keycloak-containers/issues/341 and https://github.com/keycloak/keycloak/issues/8825).
jboss/keycloak does not support arm64 for now.
But you can use this image from Docker Hub instead: mihaibob/keycloak
https://hub.docker.com/r/mihaibob/keycloak
I'm using it and haven't noticed a difference.
I don't have a Mac, but I just started working with jboss/keycloak recently and have been able to get it to start.
Essentially what I did (assuming Docker is installed):
docker pull jboss/keycloak:16.1.0
docker run --env-file targetDB.txt -p 8080:8080 jboss/keycloak:16.1.0
You might have to run those commands with sudo.
This pulls the jboss/keycloak image from Docker Hub and then runs it, exposing port 8080 inside the container to the host machine. It also uses the environment variables in the .txt file (which contains info on the database endpoint you wish to connect Keycloak to for persisting data).
If you don't specify --env-file <text file>, I believe Keycloak uses its default H2 database, which isn't the best.
I have my local jboss/keycloak pointing to a Postgres DB in an AWS RDS environment, so the contents of targetDB.txt for me are:
DB_VENDOR=postgres
DB_ADDR=<my postgres aws rds endpoint>:5432
DB_DATABASE=<name of the database>
DB_USER=<db username to connect to postgres instance>
DB_PASSWORD=<password associated with db username to connect>
If I'm not mistaken, the database named in the DB_DATABASE field must already exist, so you'll need to create it before running the docker run command; see the example below.
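For instance, with the standard Postgres client (a sketch; the placeholders match those in targetDB.txt above):

psql -h <my postgres aws rds endpoint> -U <db username to connect to postgres instance> -d postgres -c 'CREATE DATABASE <name of the database>;'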
After you run the docker run command above and the logs show it starting up, you should be able to access the Keycloak admin console in your local browser:
http://localhost:8080/auth
If this is the first time you're running Keycloak, you have to create a master/admin user before you can log in.
To add a master user, run this command (while your Keycloak container is running):
docker exec <container id or container name> /opt/jboss/keycloak/bin/add-user-keycloak.sh -u <USERNAME> -p <PASSWORD>
Then you need to restart your Keycloak container:
docker restart <container id or container name>
Again, you might have to run those commands with sudo.
After that's done, go back to http://localhost:8080/auth in your local web browser; you can now reach the login page and actually log in with the username and password you created above.

How to enable Docker API access from Windows running Docker Toolbox (docker machine)

I am running the latest Docker Toolbox, using the latest Oracle VirtualBox, with Windows 7 as the host OS.
I am trying to enable non-TLS access to the Docker remote API, so I can use the Postman REST client on Windows to hit the Docker API running on the docker-machine in VirtualBox. I found that if the Docker configuration included -H tcp://0.0.0.0:2375, that would do the trick, exposing the API on port 2375 of the docker machine, but for the life of me I can't find where this configuration is stored and how it can be changed.
I did docker-machine ssh from the Toolbox CLI and then poked around the /etc/init.d/docker file, but no changes to that file survive a docker-machine restart.
I was able to find an answer to this question for Ubuntu and OS X, but not for Windows.
@CarlosRafaelRamirez mentioned the right place, but I will add a few details and provide step-by-step instructions, because Windows devs are often not fluent in the Linux ecosystem.
Disclaimer: the following steps make it possible to hit the Docker Remote API from the Windows host, but please keep two things in mind:
This should not be done in production, as it makes the Docker machine very insecure.
This solution disables most docker-machine functionality and all docker CLI functionality; docker-machine ssh remains operational, forcing one to SSH into the docker machine to access docker commands.
Solution
Now, here are the steps necessary to switch the Docker API to the non-TLS port. (The Docker machine name is assumed to be "default". If your machine has a different name, you will need to specify it in the commands below.)
Start the "Docker Quickstart Terminal". It starts a Bash shell, which is where all the following commands will be run. Run the docker-machine ip command and note the IP address of the Docker host machine. Then do:
docker-machine ssh
cd /var/lib/boot2docker
sudo vi profile. This starts the "vi" editor with the elevated privileges required to edit the "profile" file, where the Docker host settings are. (If, as a Windows user, you are not familiar with vi, here is a super-basic crash course. When the file is open, vi is not in editing mode; press "i" to enter edit mode. Once you have made all your changes, hit Esc and then type ZZ to save them and exit. If you need to exit vi without saving, after Esc type :q! and hit Enter; ":" turns on vi's command mode, and "q!" means quit without saving. Detailed vi command info is here.)
Using vi, change DOCKER_HOST to DOCKER_HOST='-H tcp://0.0.0.0:2375' and set DOCKER_TLS=no. Save the changes as described above.
exit to leave SSH session.
docker-machine restart
After the docker machine has restarted, you should be able to hit a Docker API URL, such as http://dockerMachineIp:2375/containers/json?all=1, and get valid JSON back.
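For example, from the Bash shell on the Windows host (a sketch; substitute the IP you noted from docker-machine ip earlier):

curl http://192.168.99.100:2375/containers/json?all=1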
This is the end of the steps required to achieve the main goal.
However, if at this point you try to run docker-machine config or docker images, you will see an error message indicating that the docker CLI client is trying to reach Docker through the old port/TLS settings, which is understandable. What I did not expect, though, is that even after I followed all the Getting Started directions and ran export DOCKER_HOST=tcp://192.168.99.101:2375 and export DOCKER_TLS_VERIFY=0, resulting in
$ env | grep DOCKER
DOCKER_HOST=tcp://192.168.99.101:2375
DOCKER_MACHINE_NAME=default
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default
the result was the same:
$ docker-machine env
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host
"192.168.99.101:2376"
If you see a problem with how I changed environment variables to point Docker CLI to the new Docker host address, please comment.
To work around this problem, use the docker-machine ssh command and run your docker commands from there.
I encountered the same problem and, thanks to @VladH, got it working without changing any internal Docker profile properties. All you have to do is correctly define the Windows local env variables (or configure the Maven plugin properties, if you use the io.fabric8 docker-maven-plugin).
Note that port 2375 is used for non-TLS connections, and 2376 only for TLS connections.
DOCKER_HOST=tcp://192.168.99.100:2376
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default

Error Pushing Docker Windows Image into Docker Hub - Error parsing HTTP response: invalid character / Request forbidden by administrative rules

I am trying to push a Windows Core Docker image to my Docker Hub account. The error message (1) I am getting is:
$ docker push <MY_DOCKER_HUB_USERNAME>/<MY_IMAGE>
The push refers to a repository [docker.io/MY_DOCKER_HUB_USERNAME/MY_IMAGE] (len: 2)
46e2fd82ef4a: Preparing
Error parsing HTTP response: invalid character '<' looking for
beginning of value: "<html><body><h1>403 Forbidden</h1>\nRequest
forbidden by administrative rules.\n</body></html>\n\n"
Before pushing, I get properly authenticated from my Mac OS X box by means of docker login:
$ docker login --username=<MY_USERNAME> --email=<MY_EMAIL@MY_SERVER.COM>
WARNING: login credentials saved in /Users/<MY_USERNAME>/.docker/config.json
Login Succeeded
Since I am authenticated, I see no reason to be getting a "403 Forbidden" error from Docker Hub. It is also not clear what these "administrative rules" are, but perhaps they are what is preventing my image from being pushed to the Docker Hub registry. Please note that my repository is flagged as "public", as is my default policy ("Default Repository Visibility" under "Settings" in the Docker Hub dashboard).
I tried to do the same from my Windows Server Core box and was not able to get authenticated using the same credentials:
C:\>docker login --username=<MY_USERNAME> --email=<MY_EMAIL@MY_SERVER.COM>
Password:
Error response from daemon: Unexpected status code [403] :
<html><body <h1>403 Forbidden</h1>
Request forbidden by administrative rules.
</body></html>
Docker Client Version from Windows Core box:
C:\>docker --version
Docker version 1.10.0-dev, build 59a341e
Docker Client from Mac OS X box:
$ docker --version
Docker version 1.9.1, build a34a1d5
Windows Server Core version:
PS C:\> [System.Environment]::OSVersion.Version
Major Minor Build Revision
----- ----- ----- --------
10 0 10586 0
P.S.: Whether I try to push from my Mac OS X box (using the API exposed by my Windows Core box) or directly from inside my Windows Core box, I always end up with the same error message (1). This suggests that the whole process depends on authentication by the Windows Server Core box, and since that is not working properly, the results are always the same.
The following answer was taken from the ServerFault cross-post:
At this time, that is expected behavior. Docker is still in the early
stages of Windows development. This documentation specifically states
that commands related to DockerHub are not supported yet. According to
jhowardmsft in #docker-dev (Freenode): "With (Win Server 2016)
Technical Preview 4, it should be able to push to a Docker Trusted
Registry".
Thanks to l0j1k, who kindly answered based on a discussion we had in the #docker-dev channel on Freenode IRC.
On a Windows box I faced a similar error while doing docker login:
docker login dockerserver.local:5006
Authenticating with existing credentials...
Login did not succeed, error: Error response from daemon: Get https://dockerserver.local:5006/v1/: unauthorized: HTTP Basic: Access denied
It was solved by running the terminal window (cmd shortcut) with administrator rights.
