I am looking to push a custom docker image to OpenShift Online 3 to run container instances there. I have seen many instructions on forums / blogs about how to do this, but the first part of the process seems to be eluding me.
This is one of the references I'm using: link
I log in using the oc command:
oc login https://api.starter-us-west-2.openshift.com --token=xxxxxxx
This gets me in, and I can run the command to list the running services (one of which should be the docker instance):
oc get svc
But the response I get is simply:
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
test-phil4   172.30.217.192   <none>        8080/TCP   13h
I was expecting to see lines for a docker instance that I could connect to. I think I need to 'expose' this; the command should be:
oc expose service docker-registry
but without seeing the service in the list of services, I'm not sure how I can do that - and the result is, predictably:
error: services "docker-registry" not found
I feel like this has to do with the permissions on my user - I have currently granted my user the 'image-pusher', 'image-builder', 'registry-admin' and 'cluster-status' roles. There are many more options, most of which I don't seem to be able to apply.
Perhaps this is not possible on the free tier, or perhaps not available in the online version at all? Would anyone know how to go about connecting my existing docker repo to the OpenShift registry I'm connected to and uploading my custom images?
Thanks,
Phil
OpenShift Online clusters have their registry exposed at registry.<cluster-id>.openshift.com. So, for your example, to log in to the registry for starter-us-west-2 after logging in to the cluster, you would run:
docker login registry.starter-us-west-2.openshift.com -u $(oc whoami) -p $(oc whoami -t)
You can then push and pull from your project with
docker push registry.<cluster-id>.openshift.com/<project_name>/<image-name>:<image-tag>
docker pull registry.<cluster-id>.openshift.com/<project_name>/<image-name>:<image-tag>
Note: to docker push you must have already tagged your local image as registry.<cluster-id>.openshift.com/<project_name>/<image-name>:<image-tag>
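For example, a tag-then-push sequence for the starter-us-west-2 cluster might look like this (my-app and myproject are placeholder image and project names):
docker tag my-app:latest registry.starter-us-west-2.openshift.com/myproject/my-app:latest
docker push registry.starter-us-west-2.openshift.com/myproject/my-app:latest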
Related
I'm running Docker Desktop 3.6.0 on Windows 10 with WSL2.
When I try to enable Kubernetes I only see "Failed to start" within the Docker Desktop UI.
Docker itself works fine. Not sure how I can get any further logs.
Here is the output from kubectl version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"windows/amd64"}
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
From other posts it seems that an internet connection is required for the initial setup:
https://stackoverflow.com/a/52765732/1100559
https://stackoverflow.com/a/63318739/1100559
A direct internet connection is not possible in my work environment; I can only manually copy the required images onto my PC.
I also do not have admin access.
Is there a way to manually set up Kubernetes on Docker Desktop, or somehow indicate where the required images can be found?
I have a Nexus Docker repository to which I can push the required images.
I have changed ~\.docker\daemon.json and added my Docker repository to insecure-registries. After logging in, Docker is able to pull images from there and run them.
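The relevant part of daemon.json looks roughly like this (the registry host and port are placeholders for my actual Nexus instance):
{
  "insecure-registries": ["nexus.example.com:8082"]
}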
I have already tried resetting Kubernetes and disabling/re-enabling it. Deleting ~/.kube/config did not work either.
High level answer...
Get a docker registry
If you work for an old skool cool enterprise, use JFrog Artifactory
If you just want to get it to work, use Harbor
GitHub and GitLab (depending on license) have registries available too...
Edit the docker daemon config on the kubernetes nodes (your workstation) to only pull from these registries (see the sketch after this list).
if Red Hat: /etc/containers/registries.conf
if Debian: /etc/docker/daemon.json
you might be able to hack a /etc/hosts entry too...
Populate the new registry
Run Kubernetes and you should be good to go. Depending on the configuration you choose, you may need to add a registry credential secret.
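A minimal sketch of that pull restriction, assuming a private registry at registry.example.com (a placeholder host). On the Red Hat side, /etc/containers/registries.conf (v2 format) can block Docker Hub and search your registry instead:

unqualified-search-registries = ["registry.example.com"]

[[registry]]
location = "docker.io"
blocked = true

On the Debian side, /etc/docker/daemon.json has no strict "only pull from" option; the closest built-in is a mirror entry, which redirects Docker Hub pulls through your registry:

{
  "registry-mirrors": ["https://registry.example.com"]
}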
I had a Kiwi TCMS instance running as containers under RHEL 8, with the kiwitcms/kiwi:latest image as the kiwi_web container and the centos/postgresql-12-centos7 image as the kiwi_db container, served via a reverse proxy in an existing Apache.
I was able to login as the created superuser.
Then I've installed multi-tenant support via pip install kiwitcms-tenants.
I've set the KIWI_TENANTS_DOMAIN variable.
I did podman exec -it kiwi_web /Kiwi/manage.py migrate and podman exec -it kiwi_web /Kiwi/manage.py refresh_permissions.
Then I've created a tenant via podman exec -u 0 -it kiwi_web /Kiwi/manage.py create_tenant.
Now, if I am still logged in (from the session before installing multi-tenant support), I can see the new Mandant plugin and the tenant configuration in the admin area.
But if I log out, I can't log in anymore. It does not say "wrong credentials" or anything like what appears when I actually put in wrong credentials. The fields are just emptied and I am simply not forwarded. What am I missing here?
What am I missing here?
I think you are missing the fact that tenant routing is done on a domain basis. The domain which you configure with the create_tenant command is the one you should be using to access the multi-tenant Kiwi TCMS instance.
If KIWI_TENANTS_DOMAIN=example.com then you should use either example.com in create_tenant or something like public.example.com. Every other tenant will be <tenant name>.example.com.
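A small sketch of that routing, with example.com and a tenant named testing as placeholders:

# tenant routing is derived from this variable
KIWI_TENANTS_DOMAIN=example.com
# a tenant created with the name "testing" is then served at
#   https://testing.example.com
# so the login must happen on the tenant's own domain, not the bare host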
If that doesn't work you need to provide more information starting with your reverse proxy logs.
I have developed Spring Boot applications. I have set up admin and RabbitMQ as well as Spring Cloud Bus. When I refresh the endpoints of the applications, it refreshes the properties for the application.
Can anyone please help me with how to set up RabbitMQ in Kubernetes now? I did some research and found in a few articles that it needs to be deployed as a "StatefulSet" rather than a "Deployment" (https://notallaboutcode.blogspot.de/2017/09/rabbitmq-on-kubernetes-container.html), but I could not work out exactly why this needs to be done. Any useful link on deploying RabbitMQ in Kubernetes would also help.
It depends on what you're looking to do and what tools you have available. I guess your current setup is much like that described in http://www.baeldung.com/spring-cloud-bus. One approach to porting that to kubernetes might be to try to get your setup working with docker-compose first and then you could port that docker-compose to kubernetes deployment descriptors.
A simple way to deploy rabbitmq in k8s would be to set up a Deployment using a rabbitmq docker image. An example of this is https://github.com/Activiti/activiti-cloud-examples/blob/fe732096b5a19de0ad44879a399053f6ae02b095/kubernetes/kubectl/infrastructure.yml#L17. (Notice that the file isn't radically different from a docker-compose file, so you could port from one to the other.) But that won't persist data outside of the Pods, so if the cluster or the Pod/s were to go down, you'd lose message data; the storage is ephemeral.
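As a minimal sketch of that approach (not production-ready, with the ephemeral-storage caveat above; the stock rabbitmq:3-management image is assumed), such a Deployment could be created like this:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3-management
        ports:
        - containerPort: 5672   # AMQP
        - containerPort: 15672  # management UI
EOF

A Service in front of it would give the Spring Boot applications a stable address.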
So to have non-ephemeral persistence you could instead use a StatefulSet as in the example you point to. Another example is https://wesmorgan.svbtle.com/rabbitmq-cluster-on-kubernetes-with-statefulsets
If you are using helm (or can use helm) then you could use the rabbitmq helm chart, which uses a StatefulSet.
But if your only reason for needing the bus is to trigger refreshes when property changes happen, then there are alternative paths available with Kubernetes. I'm guessing you need the hot reloads, so you could look at using https://github.com/fabric8io/spring-cloud-kubernetes#propertysource-reload. Or if you need the config to come from git specifically, you could look at http://fabric8.io/guide/develop/configuration.html. (If you didn't need the hot reloads or git, you could consider versioning your ConfigMaps and upgrading them with your application upgrades, like in https://dzone.com/articles/configuring-java-apps-with-kubernetes-configmaps-a; a sketch of this last approach follows below.)
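A tiny sketch of the versioned-ConfigMap idea (myapp-config-v2 and application.properties are hypothetical names):

# create a new, versioned ConfigMap alongside the old one
kubectl create configmap myapp-config-v2 --from-file=application.properties
# then point the Deployment's volume or envFrom at myapp-config-v2
# as part of the application upgrade, triggering a normal rollout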
If you have installed Helm in your cluster:
helm install stable/rabbitmq
This will install a RabbitMQ server on your cluster. The following commands obtain the password and the Erlang cookie; replace prodding-wombat-rabbitmq with whatever name Helm decides to give the release.
kubectl get secret --namespace default prodding-wombat-rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode
kubectl get secret --namespace default prodding-wombat-rabbitmq -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode
To connect to the pod:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prodding-wombat-rabbitmq" -o jsonpath="{.items[0].metadata.name}")
Then port-forward to localhost so you can connect in your browser:
kubectl port-forward $POD_NAME 5672:5672 15672:15672
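With the port-forward running, the RabbitMQ management UI should be reachable at http://localhost:15672 (5672 is the AMQP port, 15672 the management port); log in with the password decoded above and the chart's default username, unless you overrode it.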
I'm very new to kubernetes and trying to conceptualize it as well as set it up locally in order to try developing something on it.
There's a confound though that I am running on a windows machine.
Their "getting started" documentation in github says you have to run Linux to use kubernetes.
As docker runs on windows, I was wondering if it was possible to create a kubernetes instance as a container in windows docker and use it to manage the rest of the cluster in the same windows docker instance.
From reading the setup instructions, it seems like docker, kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... but part of me thinks it might be possible to:
Start docker, boot 'default' machine.
Create kubernetes container - configure to communicate with the existing docker 'default' machine
Use kubernetes to manage existing docker.
Pipe dream? Wrongheaded foolishness? I see there are some options around running it in a Vagrant instance. Does that mean docker, etcd & kubernetes together in a single VM (which in turn creates a cluster of virtual machines inside it)?
I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.
With Windows, you need docker-machine and a boot2docker VM to run anything docker-related.
There is no "Docker for Windows" (not yet).
Note that issue 7428 mentioned "Can't run kubernetes within boot2docker".
So even when you follow instructions (from a default VM created with docker-machine), you might still get errors:
➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet
The alternative would be to try a full-fledged Linux VM (like the latest Ubuntu) instead of a boot2docker-like VM (based on a TinyCore distro).
All k8s components can be brought up with hyperkube, which helps you run a containerized Kubernetes.
If you're able to run Docker on Windows, it would probably work. I haven't tried it on Windows personally.
I have installed Python (with Pip and easysetup), Cloud Foundry and ICE in my host machine, OS X 10.10.3.
I've booted boot2docker and attempted to ice login.
After a successful login attempt:
mbp-idan:~ idanadar$ boot2docker up
Waiting for VM and Docker daemon to start...
.o
Started.
Writing /Users/idanadar/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/idanadar/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/idanadar/.boot2docker/certs/boot2docker-vm/key.pem
Your environment variables are already set correctly.
mbp-idan:~ idanadar$ ice login
API endpoint: https://api.ng.bluemix.net
Email> my-email-address
Password> my-password
Authenticating...
OK
Targeted org my-email-address
Targeted space dev
API endpoint: https://api.ng.bluemix.net (API version: 2.23.0)
User: my-email-address
Org: my-email-address
Space: dev
Authentication with container cloud service at https://api-ice.ng.bluemix.net/v2/containers completed successfully
You can issue commands now to the container service
I immediately encounter the following errors:
Authentication issue:
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
FATA[0005] Error response from daemon: Login: You must set a namespace before you login to the registry. See 'ice help namespace' (Code: 404; Headers: map[Connection:[Keep-Alive] Date:[Wed, 27 May 2015 18:57:41 GMT] Content-Type:[text/plain] X-Client-Ip:[79.176.226.146] X-Backside-Transport:[FAIL FAIL] Server:[nginx/1.7.9] X-Global-Transaction-Id:[380677271] Set-Cookie:[DPJSESSIONID=PBC5YS:481842763; Path=/; Domain=.registry-ice.ng.bluemix.net]])
Docker issue:
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
The only configuration I did previously was adding the following to ~/.bash_profile, which is what Docker provided when running boot2docker up:
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/idanadar/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
Once I made this change, I got the above two errors. If I comment out the above three lines from .bash_profile and don't run boot2docker shellinit after boot2docker up, I get this error:
FATA[0000] Post http:///var/run/docker.sock/v1.18/auth: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
If I replace the three lines with this single line:
export DOCKER_HOST=tcp://$(boot2docker ip 2>/dev/null):2375
I get the following error, which is a bit different. Note the docker -d and the lack of an error regarding the namespace.
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
FATA[0000] Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
[docker] Any idea which is the right way to get Docker recognized?
This is being tested in OS X 10.10.3.
[bluemix] Any idea about the namespace?
For some reason they seem inter-linked?
The error output by ICE is really unhelpful.
To solve it:
Added the original three lines back to ~/.bash_profile
Created the namespace on Bluemix.net
After that, everything fell into place and everything is working.
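For anyone hitting the same namespace error: the namespace can also be set from the CLI. A sketch, with my_namespace as a placeholder (check ice help namespace for the exact syntax in your CLI version):

ice namespace set my_namespace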