I'm running Docker Desktop 3.6.0 on Windows 10 with WSL2.
When I try to enable Kubernetes I only see "Failed to start" within the Docker Desktop UI.
Docker itself works fine. Not sure how I can get any further logs.
Here is the output from kubectl version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"windows/amd64"}
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
From other posts it seems that an internet connection is required for the initial setup:
https://stackoverflow.com/a/52765732/1100559
https://stackoverflow.com/a/63318739/1100559
A direct internet connection is not possible in my work environment; I can only manually copy the required images onto my PC.
I also do not have admin access.
Is there a way to manually set up Kubernetes on Docker Desktop, or to somehow indicate where the required images can be found?
I have a Nexus Docker repository that I can push the required images to.
I have changed ~\.docker\daemon.json and added my Docker repository to insecure-registries. After an initial login, Docker is able to pull images from there and run them.
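For reference, the relevant part of that file looks roughly like this (the Nexus hostname and port are placeholders standing in for my internal registry):

{
  "insecure-registries": ["nexus.example.com:8082"]
}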
I have already tried resetting Kubernetes and disabling/re-enabling it. Deleting ~/.kube/config did not help either.
High level answer...
Get a docker registry
If you work for an old-skool-cool enterprise: use JFrog Artifactory
If you just want to get it working: use Harbor
GitHub and GitLab (depending on license) have registries available too...
Edit the Docker daemon on the Kubernetes nodes (your workstation) to only pull from these registries (see the sketch at the end of this answer).
if Red Hat: /etc/containers/registries.conf
if Debian: /etc/docker/daemon.json
you might be able to hack in an /etc/hosts entry too...
Populate the new registry
Run Kubernetes and you should be good to go. Depending on the configuration you choose, you may need to add a registry credential secret.
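A minimal sketch of those last two pieces, assuming a Debian-style host and a hypothetical internal registry at nexus.example.com (substitute your own hostname and credentials):

# Restrict the daemon to the internal registry (Debian: /etc/docker/daemon.json):
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://nexus.example.com"],
  "insecure-registries": ["nexus.example.com"]
}
EOF
sudo service docker restart

# If pulls need authentication, hand Kubernetes a pull secret:
kubectl create secret docker-registry regcred \
  --docker-server=nexus.example.com \
  --docker-username=<user> \
  --docker-password=<password>

Pods then reference the secret through imagePullSecrets in their spec.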
Related
I am on a corporate Windows laptop and I want to start experimenting with Docker. Being a corporate machine, everything needs to go through the corporate proxy.
I installed Debian on WSL and then Docker Desktop, which installed its components on the Debian WSL VM. My first priority, however, was to test Docker on WSL directly and not through Docker Desktop. So I set out to read the Docker docs and pull the docker/getting-started image from the Debian terminal. That attempt, however, failed because it did not use the network proxy.
The Docker Desktop docs state that setting the proxy settings in Docker Desktop will propagate them to Docker itself. Indeed, I set the proxy settings in Docker Desktop, and I was then able to properly download my image from inside Debian.
Since I want to have full control of Docker through the Debian terminal and not through Docker Desktop, I want to understand how the proxy settings propagate to Docker inside WSL. I imagined that Docker Desktop altered some configuration file inside Debian, but grepping the whole system for the proxy IP got me nothing. So my question is: in what way does Docker Desktop let Docker know which proxy to use?
As far as I know (and I am not 100% sure, as I have not worked with Docker in a while):
When you start the Docker service in WSL, it is launched through the /etc/init.d/docker script. When you set the company proxy manually in Docker Desktop, what happens during that loading time is:
Stopping Docker service
Updating the configuration script at /etc/init.d/docker
Starting the service again, and with it the new script
To confirm that this is what happens, you can check the contents of the /etc/init.d/docker script.
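For example, a quick grep should show whether Docker Desktop injected proxy settings there:

# Run inside the Debian WSL terminal:
grep -i proxy /etc/init.d/docker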
As an alternative to having the script modified for you, you can export the proxy configuration yourself in WSL and check whether it works without adding the proxy settings to Docker Desktop.
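A rough sketch of that alternative, with a placeholder proxy address (proxy.corp.example:8080) standing in for your corporate proxy; on Debian the init script typically sources /etc/default/docker, so variables exported there reach the daemon:

# Make the daemon proxy-aware (placeholder address, use your own):
echo 'export http_proxy="http://proxy.corp.example:8080"' | sudo tee -a /etc/default/docker
echo 'export https_proxy="http://proxy.corp.example:8080"' | sudo tee -a /etc/default/docker
sudo service docker restart

# Then test a pull directly from the Debian terminal:
docker pull docker/getting-started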
I am looking to push a custom docker image to OpenShift Online 3 to run container instances there. I have seen many instructions on forums / blogs about how to do this, but the first part of the process seems to be eluding me.
This is one of the references I'm using: link
I log in using the oc command:
oc login https://api.starter-us-west-2.openshift.com --token=xxxxxxx
This gets me in, and I can run the command to list the running services (one of which should be the Docker instance):
oc get svc
But the response I get is simply:
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
test-phil4   172.30.217.192   <none>        8080/TCP   13h
I was expecting to see lines for a Docker instance that I could connect to. I think I need to 'expose' this; the command should be:
oc expose service docker-registry
but since the service doesn't appear in the list of services, I'm not sure how I can do that, and the result is, predictably:
error: services "docker-registry" not found
I feel like this has to do with the permissions on my user: I have currently granted my user 'image-pusher', 'image-builder', 'registry-admin' and 'cluster-status'. There are many more options, most of which I don't seem to be able to apply.
Perhaps this is not possible with the free tier, or perhaps not available in the online version at all? Would anyone know how to go about connecting my existing Docker repo to the OpenShift repo I'm connected to and uploading my custom images?
Thanks,
Phil
OpenShift Online clusters have their registry exposed at registry.<cluster-id>.openshift.com. So, for your example, to log in to the registry for starter-us-west-2 after logging in to the cluster, you would run:
docker login registry.starter-us-west-2.openshift.com -u $(oc whoami) -p $(oc whoami -t)
You can then push and pull from your project with
docker push registry.<cluster-id>.openshift.com/<project_name>/<image-name>:<image-tag>
docker pull registry.<cluster-id>.openshift.com/<project_name>/<image-name>:<image-tag>
Note: before you can docker push, you must already have tagged your local image as registry.<cluster-id>.openshift.com/<project_name>/<image-name>:<image-tag>
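Putting that together for the starter-us-west-2 example, with a hypothetical project name (myproject) and the test-phil4 image from the question:

# Tag the local image with the registry's full name, then push:
docker tag test-phil4:latest registry.starter-us-west-2.openshift.com/myproject/test-phil4:latest
docker push registry.starter-us-west-2.openshift.com/myproject/test-phil4:latest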
I have some repositories on Docker Cloud. I build and deploy them on my home Ubuntu server and that works well.
On my home server, I can access these services via their URLs (like http://registry:8761).
I'm trying to run my services with Kitematic on Windows. All of my services run on localhost, so the configuration in application.yml that says my registry service is at http://registry:8761 doesn't work.
Could someone help me?
Thanks
Your browser doesn't know what to do with http://registry since it has no TLD qualifier. I assume your Ubuntu server is set up in a way that does all the correct redirections, but on Windows you'd have to add an entry to your hosts file that points registry to localhost, or wherever it's supposed to go.
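For example, assuming the service really is listening on the local machine, the entry in C:\Windows\System32\drivers\etc\hosts (editing it requires administrator rights) would look like:

# Map the bare hostname used in application.yml to the local machine:
127.0.0.1    registry

After that, http://registry:8761 resolves to localhost:8761.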
I'm very new to kubernetes and trying to conceptualize it as well as set it up locally in order to try developing something on it.
There's a confound though that I am running on a windows machine.
Their "getting started" documentation in github says you have to run Linux to use kubernetes.
As docker runs on windows, I was wondering if it was possible to create a kubernetes instance as a container in windows docker and use it to manage the rest of the cluster in the same windows docker instance.
From reading the setup instructions, it seems like Docker, Kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... but part of me thinks it might be possible to:
Start docker, boot 'default' machine.
Create kubernetes container - configure to communicate with the existing docker 'default' machine
Use kubernetes to manage existing docker.
Pipe dream? Wrongheaded foolishness? I see there are some options around running it in a Vagrant instance. Does that mean Docker, etcd, and Kubernetes together in a single VM (which in turn creates a cluster of virtual machines inside it)?
I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.
On Windows, you need docker-machine and a boot2docker VM to run anything Docker-related.
There is no "Docker for Windows" (not yet).
Note that issue 7428 mentioned "Can't run kubernetes within boot2docker".
So even when you follow the instructions (from a default VM created with docker-machine), you might still get errors:
➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet
The alternative would be to try it on a full-fledged Linux VM (like the latest Ubuntu), instead of a boot2docker-like VM (based on a TinyCore distro).
All k8s components can be brought up with hyperkube, which helps you run a containerized cluster.
If you're able to run docker on windows, it would probably work. I haven't tried it on windows personally.
I have installed Python (with pip and easysetup), Cloud Foundry and ICE on my host machine, OS X 10.10.3.
I've booted boot2docker and attempted to ice login.
After a successful login attempt:
mbp-idan:~ idanadar$ boot2docker up
Waiting for VM and Docker daemon to start...
.o
Started.
Writing /Users/idanadar/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/idanadar/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/idanadar/.boot2docker/certs/boot2docker-vm/key.pem
Your environment variables are already set correctly.
mbp-idan:~ idanadar$ ice login
API endpoint: https://api.ng.bluemix.net
Email> my-email-address
Password> my-password
Authenticating...
OK
Targeted org my-email-address
Targeted space dev
API endpoint: https://api.ng.bluemix.net (API version: 2.23.0)
User: my-email-address
Org: my-email-address
Space: dev
Authentication with container cloud service at https://api-ice.ng.bluemix.net/v2/containers completed successfully
You can issue commands now to the container service
I immediately encounter the following errors:
Authentication issue:
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
FATA[0005] Error response from daemon: Login: You must set a namespace before you login to the registry. See 'ice help namespace' (Code: 404; Headers: map[Connection:[Keep-Alive] Date:[Wed, 27 May 2015 18:57:41 GMT] Content-Type:[text/plain] X-Client-Ip:[79.176.226.146] X-Backside-Transport:[FAIL FAIL] Server:[nginx/1.7.9] X-Global-Transaction-Id:[380677271] Set-Cookie:[DPJSESSIONID=PBC5YS:481842763; Path=/; Domain=.registry-ice.ng.bluemix.net]])
Docker issue:
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
The only configuration I did previously was to add the following to ~/.bash_profile, which is what Docker provided when running boot2docker up:
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/idanadar/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
Once I made this change, I got the above two errors. If I comment out the above three lines from .bash_profile and do not run boot2docker shellinit after boot2docker up, I get this error:
FATA[0000] Post http:///var/run/docker.sock/v1.18/auth: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
If I replace the three lines with this single line:
export DOCKER_HOST=tcp://$(boot2docker ip 2>/dev/null):2375
I get the following error, which is a bit different. Note the docker -d and the lack of any error regarding the namespace.
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
FATA[0000] Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
[docker] Any idea what the right way is to get Docker recognized? This is being tested on OS X 10.10.3.
[bluemix] Any idea about the namespace?
For some reason the two issues seem interlinked.
The error output by ICE is really unhelpful.
To solve it, I:
Added the original three lines back to ~/.bash_profile
Created the namespace on Bluemix.net
After that, everything fell into place and is working.
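For completeness, the three environment lines don't have to be hard-coded; boot2docker can regenerate them for the current VM (this is the shellinit step mentioned in the question):

# Start the VM and export DOCKER_HOST / DOCKER_CERT_PATH / DOCKER_TLS_VERIFY:
boot2docker up
eval "$(boot2docker shellinit)"

# With the namespace created on Bluemix, the registry login should now succeed:
ice login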