I'm following this doc to test migrating a GCE VM to GKE, but it is unclear to me what happens to my systemd services after the migration. Usually containers are used to run a single application instead of lots of daemons.
I tried to see if systemd services are running in the Pod, but failed:
$ kubectl exec -it my-app-0 -- systemctl status
System has not been booted with systemd as init system (PID 1). Can't operate.
command terminated with exit code 1
I think the doc needs to be improved to include more details about what's going on with the Pod after the migration. In addition to systemd services, what is the entrypoint of the container in the Pod?
For migrated containers, this should give you the desired result:
kubectl exec -it my-app-0 -- bash -c "systemctl status"
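To answer the entrypoint part of the question, one quick sketch (it assumes the image ships ps and that the first container in the Pod is the one of interest) is to look at what runs as PID 1 inside the Pod and at any command/args override in the Pod spec:

# What is running as PID 1 inside the container (needs ps in the image)
kubectl exec -it my-app-0 -- ps -p 1 -o pid,comm,args

# Any command/args override set on the first container of the Pod
kubectl get pod my-app-0 -o jsonpath='{.spec.containers[0].command} {.spec.containers[0].args}'

If command and args are empty, the container falls back to the ENTRYPOINT/CMD baked into the image.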
I will run a docker container with the command
docker run -ti --rm -p 8080:80 -v $(pwd)/my/path/to/config myimage:latest
but the plan is to write a script that starts this image automatically when the VM boots, via systemd.
Do I have to set up a service in /lib/systemd/system?
From the Docker documentation,
https://docs.docker.com/config/containers/start-containers-automatically/
you can use a restart policy. The available parameters are:
Use a restart policy
To configure the restart policy for a container, use the --restart flag when using the docker run command. The value of the --restart flag can be any of the following:
no: Do not automatically restart the container. (the default)
on-failure: Restart the container if it exits due to an error, which manifests as a non-zero exit code.
always: Always restart the container if it stops. If it is manually stopped, it is restarted only when the Docker daemon restarts or the container itself is manually restarted. (See the second bullet listed in restart policy details.)
unless-stopped: Similar to always, except that when the container is stopped (manually or otherwise), it is not restarted even after the Docker daemon restarts.
If these do not meet your requirements, there is information lower down that page about using a process manager such as upstart, systemd, or supervisor; a minimal systemd unit sketch is shown below.
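For the systemd route specifically, here is a minimal sketch of a unit file, assuming it lives at /etc/systemd/system/myimage.service and that the unit name, container name, config mount target (/config), and ports are placeholders to adapt to the command in the question:

[Unit]
Description=Run myimage container at boot
Requires=docker.service
After=docker.service

[Service]
Restart=always
# Remove any stale container left over from a previous run (the leading "-" ignores failure)
ExecStartPre=-/usr/bin/docker rm -f myimage-container
# Run the image in the foreground; systemd tracks this process
ExecStart=/usr/bin/docker run --rm --name myimage-container -p 8080:80 -v /my/path/to/config:/config myimage:latest
ExecStop=/usr/bin/docker stop myimage-container

[Install]
WantedBy=multi-user.target

Then enable and start it:
sudo systemctl daemon-reload
sudo systemctl enable --now myimage.service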
At the end of your Dockerfile, add
ENTRYPOINT service ssh restart && bash
I've used the helm chart to deploy Spark to Kubernetes in GCE. According to the default configuration in values.yaml, Spark is deployed to the path /opt/spark. I've checked that Spark deployed successfully by running kubectl --namespace=my-namespace get pods -l "release=spark". There is 1 master and 3 workers running.
However, when I tried to check the Spark version by executing spark-submit --version from the Google Cloud console, it returned -bash: spark-submit: command not found.
I've navigated to the /opt directory and the spark folder is missing. What should I do to be able to open a Spark shell and execute Spark commands?
You can verify by checking the services:
kubectl get services -n <namespace>
You can port-forward a particular service and try connecting to it locally to check:
kubectl port-forward svc/<service name> <external port>:<internal port or spark running port>
Locally you can then run a Spark shell; it will connect to the Spark instance running on GCE.
If you check the helm chart documentation, there are also options for the UI; you can do the same thing to access the UI via port-forward.
Access a shell inside the pod
kubectl exec -it <spark pod name> -- /bin/bash
Here you can run Spark commands directly, e.g. spark-submit --version
Access UI
Access the UI via port-forwarding if you have enabled the UI in the helm chart.
kubectl port-forward svc/<spark service name> <external port>:<internal port or spark running port>
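For example (the service name and namespace here are hypothetical; take the real ones from kubectl get services -n <namespace>):

kubectl port-forward -n my-namespace svc/spark-webui 8080:8080
# then open http://localhost:8080 in a browser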
External Load balancer
This particular helm chart also creates an external load balancer; you can get the external IP using:
kubectl get svc -n <namespace>
Access Shell
If you want to connect via the load balancer IP and port:
./bin/spark-shell --conf spark.cassandra.connection.host=<Load balancer IP> --conf spark.cassandra.connection.native.port=<Port>
Creating connection using port-forward
kubectl port-forward svc/<spark service name> <external(local) port>:<internal port or spark running port>
./bin/spark-shell --conf spark.cassandra.connection.host=localhost --conf spark.cassandra.connection.native.port=<local Port>
One way would be to log in to the pod and then run Spark commands.
List the pods:
kubectl --namespace=my-namespace get pods -l "release=spark"
Now, log in to the pod using the following command:
kubectl exec -it <pod-id> -- /bin/bash
Now you should be inside the pod and can run Spark commands:
spark-submit --version
Ref: https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#getting-a-shell-to-a-container
Hope this helps.
This worked for me.
spark-shell --master k8s://localhost:32217
My spark master is a LoadBalancer exposed at localhost:32217
I'm trying to set up a Kubernetes cluster with multiple masters and an external etcd cluster. I followed the steps described on kubernetes.io. I was able to create the static pod manifest files on all 3 hosts in the /etc/kubernetes/manifests folder after executing Step 7.
After that, when I executed 'sudo kubeadm init', the initialization failed because of kubelet errors. I also checked the journalctl logs; the error points to a misconfigured cgroup driver, which is similar to this SO link.
I tried what the above SO link suggests but was not able to resolve it.
Please help me in resolving this issue.
For installation of docker, kubeadm, kubectl and kubelet, I followed kubernetes.io site only.
Environment:
Cloud: AWS
EC2 instance OS: Ubuntu 18.04
Docker version: 18.09.7
Thanks
After searching a few links and doing a few trials, I was able to resolve this issue.
As given in the Container runtime setup, the Docker cgroup driver is systemd, but the default cgroup driver of the kubelet is cgroupfs. Since the kubelet cannot identify the cgroup driver automatically on its own (as noted in the kubernetes.io docs), we have to provide the cgroup driver explicitly when running the kubelet, like below:
cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --cgroup-driver=systemd --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests
Restart=always
EOF
systemctl daemon-reload
systemctl restart kubelet
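To double-check that the flag above matches what Docker is actually using, the cgroup driver Docker reports can be read with:

docker info | grep -i 'cgroup driver'
# expected here: Cgroup Driver: systemd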
Moreover, there is no need to run sudo kubeadm init: since we are providing --pod-manifest-path to the kubelet, it runs etcd as a static Pod.
For debugging, the kubelet logs can be checked using the command below:
journalctl -u kubelet -r
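And to confirm that the static etcd Pod mentioned above actually came up (Docker is the container runtime in this environment), a quick check is:

# the kubelet starts etcd straight from /etc/kubernetes/manifests, so it shows up as a plain container
sudo docker ps --filter name=etcd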
Hope it helps. Thanks.
I have followed the steps at https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html to launch a multi-node Kubernetes cluster using Vagrant and CoreOS.
But I could not find a way to set an insecure docker registry for that environment.
To be more specific, when I run
kubectl run api4docker --image=myhost:5000/api4docker:latest --replicas=2 --port=8080
on this setup, it tries to pull the image as if it were a secure registry. But it is an insecure one.
I appreciate any suggestions.
This is how I solved the issue for now. I will update this later if I can automate it in the Vagrantfile.
cd ./coreos-kubernetes/multi-node/vagrant
vagrant ssh w1 (and repeat these steps for w2, w3, etc.)
cd /etc/systemd/system/docker.service.d
sudo vi 50-insecure-registry.conf
Add the lines below to this file:
[Service]
Environment=DOCKER_OPTS='--insecure-registry="<your-registry-host>/24"'
After adding this file, we need to restart the docker service on this worker:
sudo systemctl stop docker
sudo systemctl daemon-reload
sudo systemctl start docker
sudo systemctl status docker
now, docker pull should work on this worker.
docker pull <your-registry-host>:5000/api4docker
Let's try to deploy our application on the Kubernetes cluster one more time.
Log out from the workers and come back to your host.
$ kubectl run api4docker --image=<your-registry-host>:5000/api4docker:latest --replicas=2 --port=8080 --env="SPRING_PROFILES_ACTIVE=production"
When you get the pods, you should see the status Running.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
api4docker-2839975483-9muv5 1/1 Running 0 8s
api4docker-2839975483-lbiny 1/1 Running 0 8s
I have a setup with Mesos and Aurora. I have dockerized my application, which I need to deploy. Now I have to start the mesos-slave with Docker support, but I'm not able to. I'm trying the following:
sudo service mesos-slave --containerizers=docker,mesos start
this gives me
mesos-slave: unrecognized service
but if I try :
sudo service mesos-slave start
the slave gets activated.
Can anyone let me know how to solve this issue?
You should also inform people about what OS you're using, otherwise it's mostly guesswork.
Normally, your /etc/mesos-slave/containerizers should contain the following to enable Docker support:
docker,mesos
Then, you'd have to restart the service:
sudo service mesos-slave restart
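As a concrete sketch of those two steps (assuming the Mesosphere package layout, where mesos-slave reads one flag per file under /etc/mesos-slave/):

# write the containerizers flag that mesos-slave picks up on start
echo 'docker,mesos' | sudo tee /etc/mesos-slave/containerizers

# restart the slave so the new flag takes effect
sudo service mesos-slave restart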
References:
https://open.mesosphere.com/getting-started/install/#slave-setup
https://mesosphere.github.io/marathon/docs/native-docker.html
https://open.mesosphere.com/advanced-course/deploying-a-web-app-using-docker/