I am trying to set up a simple POC of the Apache OpenWhisk serverless framework running on Kubernetes. I am using macOS with Minikube. Here are the specs:
Kubernetes: v1.20.2
Minikube: v1.17.0
Docker: 20.10.0-rc1, 4.26GB allocated
Here are the setup steps for Minikube:
$ minikube start --cpus 2 --memory 4096 --kubernetes-version=v1.20.2
$ minikube ssh -- sudo ip link set docker0 promisc on
$ kubectl create namespace openwhisk
$ kubectl label nodes --all openwhisk-role=invoker
Install OpenWhisk using Helm:
$ helm install owdev ./helm/openwhisk -n openwhisk --create-namespace -f mycluster.yaml
Configure Whisk CLI:
$ wsk property set --apihost 192.168.49.2:31001
$ wsk property set --auth 23bc46b1-71f6-4ed5-8c54-816aa4f8c502:123zO3xZCLrMN6v2BKK1dXYFpXlPkccOFqm12CdAsMgRU4VrNZ9lyGVCGuMDGIwP
The 192.168.49.2 IP address of Minikube was confirmed by typing:
$ minikube ip
Here is my mycluster.yaml file:
whisk:
  ingress:
    type: NodePort
    apiHostName: 192.168.49.2
    apiHostPort: 31001
nginx:
  httpsNodePort: 31001
I checked the health of my OpenWhisk setup:
$ kubectl get pods -n openwhisk
NAME READY STATUS RESTARTS AGE
owdev-alarmprovider-5b86cb64ff-q86nj 1/1 Running 0 137m
owdev-apigateway-bccbbcd67-7q2r8 1/1 Running 0 137m
owdev-controller-0 1/1 Running 13 137m
owdev-couchdb-584676b956-7pxtc 1/1 Running 0 137m
owdev-gen-certs-7227t 0/1 Completed 0 137m
owdev-init-couchdb-g6vhb 0/1 Completed 0 137m
owdev-install-packages-sg2f4 1/1 Running 0 137m
owdev-invoker-0 1/1 Running 1 137m
owdev-kafka-0 1/1 Running 0 137m
owdev-kafkaprovider-5574d4bf5f-vvdb9 1/1 Running 0 137m
owdev-nginx-86749d59cb-mxxrt 1/1 Running 0 137m
owdev-redis-d65649c5b-vd8d4 1/1 Running 0 137m
owdev-wskadmin 1/1 Running 0 137m
owdev-zookeeper-0 1/1 Running 0 137m
wskowdev-invoker-00-13-prewarm-nodejs10 1/1 Running 0 116m
wskowdev-invoker-00-14-prewarm-nodejs10 1/1 Running 0 116m
wskowdev-invoker-00-15-whisksystem-invokerhealthtestaction0 1/1 Running 0 112m
Finally, I created a simple hello world action following these instructions taken directly from the OpenWhisk documentation. When I try to test the action, I get a network timeout:
$ wsk action create helloJS hello.js
error: Unable to create action 'helloJS': Put "https://192.168.49.2:31001/api/v1/namespaces/_/actions/helloJS?overwrite=false": dial tcp 192.168.49.2:31001: i/o timeout
I tried turning on debug mode with the -d switch, but could not make much of the feedback I was seeing.
My feeling is that there is either a bug at work here, or perhaps Minikube on Mac was never intended to be fully supported by OpenWhisk.
Can anyone suggest what I might try to get this setup and action working?
We stopped maintaining OpenWhisk on Minikube a while ago. With a full-fledged Kubernetes cluster built into Docker Desktop on macOS and Windows, and kind (https://kind.sigs.k8s.io) available on all of our platforms, supporting Minikube was more work than it was worth.
Wait until the pod whose name starts with owdev-install-packages- completes. This may take some time; after that it should work.
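For example (a sketch, assuming the owdev release name and NodePort 31001 from the question, and that the chart creates install-packages as a Job):
$ kubectl -n openwhisk wait job/owdev-install-packages --for=condition=complete --timeout=15m
# once the packages job is done, the API host should answer rather than time out:
$ curl -k https://$(minikube ip):31001/api/v1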
I have tried all sorts of things to get OpenEBS Mayastor clustered storage to work on microk8s without much success. So rather than give up completely I thought I would detail one of my failed attempts and see if anyone could figure out what I am doing wrong. Thanks in advance for any help you can give me :-)
Failed Attempt
Here are the results of following the steps posted at https://microk8s.io/docs/addon-mayastor.
VM Setup:
3 VMs running Ubuntu 22.04 with 16 GB RAM on a vSphere hypervisor. I have used these same VMs to create a 3-node microk8s cluster with good success in the past.
Microk8s removal:
Removed microk8s on all 3 nodes:
microk8s stop
sudo snap remove microk8s --purge
sudo reboot
Microk8s fresh install:
https://microk8s.io/docs/setting-snap-channel
snap info microk8s
latest/stable: v1.26.0 2022-12-17 (4390) 176MB classic
On all 3 nodes:
sudo snap install microk8s --classic --channel=1.26/stable
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
newgrp microk8s
sudo reboot
verify everything is ok
microk8s status
microk8s inspect
**Do what inspect tells you to do:**
WARNING: IPtables FORWARD policy is DROP. Consider enabling traffic forwarding with: sudo iptables -P FORWARD ACCEPT
The change can be made persistent with: sudo apt-get install iptables-persistent
sudo iptables -S
sudo iptables-legacy -S
sudo iptables -P FORWARD ACCEPT
sudo apt-get install iptables-persistent
sudo systemctl is-enabled netfilter-persistent.service
sudo reboot
microk8s inspect
Still get the iptables FORWARD warning on 2 of the 3 nodes; hopefully it is not that important.
Pinged all the IP addresses in the cluster from every node.
Followed the directions at https://microk8s.io/docs/addon-mayastor
step 1:
sudo sysctl vm.nr_hugepages=1024
echo 'vm.nr_hugepages=1024' | sudo tee -a /etc/sysctl.conf
sudo nvim /etc/sysctl.conf
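To double-check that the hugepages setting took effect on each node, something like this should report 1024 (a sketch, not part of the documented steps):
grep HugePages_Total /proc/meminfo
sysctl vm.nr_hugepages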
step 2:
sudo apt install linux-modules-extra-$(uname -r)
sudo modprobe nvme_tcp
echo 'nvme-tcp' | sudo tee -a /etc/modules-load.d/microk8s-mayastor.conf
sudo nvim /etc/modules-load.d/microk8s-mayastor.conf
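A quick way to confirm the module is loaded now and will be loaded again after a reboot (again, just a sketch):
lsmod | grep nvme_tcp
cat /etc/modules-load.d/microk8s-mayastor.conf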
step 3:
microk8s enable dns
microk8s enable helm3
Thought we might need RBAC, so I enabled that also.
microk8s enable rbac
Created the 3-node cluster.
from main node.
sudo microk8s add-node
go to 2nd node.
microk8s join 10.1.0.116:25000/0c902af525c13fbfb5e7c37cff29b29a/acf13be17a96
from main node.
sudo microk8s add-node
go to 3rd node.
microk8s join 10.1.0.116:25000/36134181872079c649bed48d969a006d/acf13be17a96
microk8s status
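To confirm all three nodes actually joined before enabling the add-on, this should list three Ready nodes (sketch only):
microk8s kubectl get nodes -o wide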
enable the mayastor add-on:
from main node.
sudo microk8s enable core/mayastor --default-pool-size 20G
go to 2nd node.
sudo microk8s enable core/mayastor --default-pool-size 20G
Addon core/mayastor is already enabled
go to 3rd node.
sudo microk8s enable core/mayastor --default-pool-size 20G
Addon core/mayastor is already enabled
Wait for the mayastor control plane and data plane pods to come up:
sudo microk8s.kubectl get pod -n mayastor
NAME READY STATUS RESTARTS AGE
mayastor-csi-962jf 0/2 ContainerCreating 0 2m6s
mayastor-csi-l4zxx 0/2 ContainerCreating 0 2m5s
mayastor-8pcc4 0/1 Init:0/3 0 2m6s
msp-operator-74ff9cf5d5-jvxqb 0/1 Init:0/2 0 2m5s
mayastor-lt8qq 0/1 Init:0/3 0 2m5s
etcd-operator-mayastor-65f9967f5-mpkrw 0/1 ContainerCreating 0 2m5s
mayastor-csi-6wb7x 0/2 ContainerCreating 0 2m5s
core-agents-55d76bb877-8nffd 0/1 Init:0/1 0 2m5s
csi-controller-54ccfcfbcc-m94b7 0/3 Init:0/1 0 2m5s
mayastor-9q4gl 0/1 Init:0/3 0 2m5s
rest-77d69fb479-qsvng 0/1 Init:0/2 0 2m5s
# Still waiting
sudo microk8s.kubectl get pod -n mayastor
NAME READY STATUS RESTARTS AGE
mayastor-8pcc4 0/1 Init:0/3 0 32m
msp-operator-74ff9cf5d5-jvxqb 0/1 Init:0/2 0 32m
mayastor-lt8qq 0/1 Init:0/3 0 32m
core-agents-55d76bb877-8nffd 0/1 Init:0/1 0 32m
csi-controller-54ccfcfbcc-m94b7 0/3 Init:0/1 0 32m
mayastor-9q4gl 0/1 Init:0/3 0 32m
rest-77d69fb479-qsvng 0/1 Init:0/2 0 32m
mayastor-csi-962jf 2/2 Running 0 32m
mayastor-csi-l4zxx 2/2 Running 0 32m
etcd-operator-mayastor-65f9967f5-mpkrw 1/1 Running 1 32m
mayastor-csi-6wb7x 2/2 Running 0 32m
etcd-6tjf7zb9dh 0/1 Init:0/1 0 30m
Went to the trouble-shooting section at https://microk8s.io/docs/addon-mayastor
microk8s.kubectl logs -n mayastor daemonset/mayastor
output was:
Found 3 pods, using pod/mayastor-8pcc4
Defaulted container "mayastor" out of: mayastor, registration-probe (init), etcd-probe (init), initialize-pool (init)
Error from server (BadRequest): container "mayastor" in pod "mayastor-8pcc4" is waiting to start: PodInitializing
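Since the mayastor and control-plane pods are stuck in PodInitializing, the init containers seem like the more useful place to look next. A sketch of what I would check (container names taken from the "Defaulted container" message above, pod names from the listing):
microk8s.kubectl logs -n mayastor mayastor-8pcc4 -c registration-probe
microk8s.kubectl logs -n mayastor mayastor-8pcc4 -c etcd-probe
microk8s.kubectl describe pod -n mayastor etcd-6tjf7zb9dh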
I am trying to set up Aerokube Moon on a Linux machine (Ubuntu 16.04).
Steps followed :
Minikube is installed and ingress is enabled.
Moon is installed using https://aerokube.com/moon/latest/#install-helm.
Started Minikube using the Docker driver.
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
kubectl get pods -n moon
NAME READY STATUS RESTARTS AGE
moon-5f6fd5f9fd-7b945 3/3 Running 0 10m
moon-5f6fd5f9fd-fcct6 3/3 Running 0 10m
$ minikube tunnel
Status:
machine: minikube
pid: 148130
route: 10.96.0.0/12 -> xxx.xxx.xx.x
minikube: Running
services: []
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
$ cat /etc/hosts
127.0.0.1 localhost moon.aerokube.local
xxx.xxx.xxx.xxx moon.aerokube.local --> this IP is the output of `minikube ip`
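For reference, an equivalent way to hit the ingress without relying on /etc/hosts would be to pass the Host header explicitly (just a sketch; I have not captured its output here):
curl -H 'Host: moon.aerokube.local' http://$(minikube ip)/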
Issue 1: when I try to access http://moon.aerokube.local/, I get:
Issue 2: how to change the default port for Selenium.
I would like to change the default port for Selenium in Moon, as my 8080 and 4444 ports are already occupied. I would like to use some other port for the Moon UI and /wd/hub.
I assume this will probably be accessible from that Linux machine itself (I can't check directly on that machine), as it is pointed to localhost in /etc/hosts, but I don't know how to make it accessible from other places, like from the laptops of people working on this project (this is issue 2 mentioned in this post).
Please help
I created a k8s cluster on AWS, using kubeadm, with 1 master and 1 worker following the guide available here.
Then, I started 1 ElasticSearch container:
kubectl run elastic --image=elasticsearch:2 --replicas=1
And it was deployed successfully on worker. Then, I try to expose it as a service on cluster:
kubectl expose deploy/elastic --port 9200
And it was exposed successfully:
NAMESPACE NAME READY STATUS RESTARTS AGE
default elastic-664569cb68-flrrz 1/1 Running 0 16m
kube-system etcd-ip-172-31-140-179.ec2.internal 1/1 Running 0 16m
kube-system kube-apiserver-ip-172-31-140-179.ec2.internal 1/1 Running 0 16m
kube-system kube-controller-manager-ip-172-31-140-179.ec2.internal 1/1 Running 0 16m
kube-system kube-dns-86f4d74b45-mc24s 3/3 Running 0 17m
kube-system kube-flannel-ds-fjkkc 1/1 Running 0 16m
kube-system kube-flannel-ds-zw4pq 1/1 Running 0 17m
kube-system kube-proxy-4c8lh 1/1 Running 0 17m
kube-system kube-proxy-zkfwn 1/1 Running 0 16m
kube-system kube-scheduler-ip-172-31-140-179.ec2.internal 1/1 Running 0 16m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default elastic ClusterIP 10.96.141.188 <none> 9200/TCP 16m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 17m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system kube-flannel-ds 2 2 2 2 2 beta.kubernetes.io/arch=amd64 17m
kube-system kube-proxy 2 2 2 2 2 <none> 17m
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
default elastic 1 1 1 1 16m
kube-system kube-dns 1 1 1 1 17m
NAMESPACE NAME DESIRED CURRENT READY AGE
default elastic-664569cb68 1 1 1 16m
kube-system kube-dns-86f4d74b45 1 1 1 17m
But when I try to curl http://10.96.141.188:9200 (from the master node), I get a timeout, and everything indicates that the generated cluster IP is not reachable from the master node. It works only on the worker node.
I tried everything I could find:
Add a bunch of rules to iptables
iptables -P FORWARD ACCEPT
iptables -I FORWARD 1 -i cni0 -j ACCEPT -m comment --comment "flannel subnet"
iptables -I FORWARD 1 -o cni0 -j ACCEPT -m comment --comment "flannel subnet"
iptables -t nat -A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 -j MASQUERADE
Disable firewalld
Enable all ports on ec2 security policy (from everywhere)
Use different docker versions (1.13.1, 17.03, 17.06, 17.12)
Different k8s versions (1.9.0 ~1.9.6)
Different CNIs (flannel and weave)
Add some parameters to kubeadm init command (--node-name with FQDN and --apiserver-advertise-address with public master IP)
But none of this worked. It appears to be an AWS-specific issue, since the tutorial works fine on Linux Academy Cloud Servers.
Is there anything else I could try?
Obs:
Currently, I'm using Docker 1.13 and k8s 1.9.6 (with Flannel 0.9.1) on CentOS 7.
I finally found the problem. According to this page, Flannel needs UDP ports 8285 and 8472 open on both the master and the worker node. It's interesting that this is not mentioned in the official kubeadm documentation.
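If the nodes sit in EC2 security groups, opening those ports looks roughly like this (a sketch only; the group IDs are placeholders for your own values):
aws ec2 authorize-security-group-ingress --group-id sg-XXXXXXXX --protocol udp --port 8285 --source-group sg-XXXXXXXX
aws ec2 authorize-security-group-ingress --group-id sg-XXXXXXXX --protocol udp --port 8472 --source-group sg-XXXXXXXX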
kubectl run elastic --image=elasticsearch:2 --replicas=1
As best I can tell, you did not inform kubernetes that the elasticsearch:2 image listens on any port(s), which it will not infer by itself. You would have experienced the same problem if you had just run that image under docker without similarly specifying the --publish or --publish-all options.
Thus, when the ClusterIP attempts to forward traffic from port 9200 to the Pods matching its selector, those packets fall into /dev/null because the container is not listening for them.
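A minimal way to test that theory, reusing the names from the question (and the old kubectl run syntax the question uses), would be to recreate the deployment with the port declared; --port sets containerPort on the pod template:
kubectl delete deploy/elastic svc/elastic
kubectl run elastic --image=elasticsearch:2 --replicas=1 --port=9200
kubectl expose deploy/elastic --port 9200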
Add a bunch of rules to iptables
Definitely don't do that; if you observed, there are already a ton of iptables rules that are managed by kube-proxy: in fact, its primary job in life is to own the iptables rules on the Node upon which it is running. Your rules only serve to confuse both kube-proxy as well as any person who follows along behind you, trying to work out where those random rules came from. If you haven't already made them permanent, then either undo them or just reboot the machine to flush those tables. Leaving your ad-hoc rules in place will 100% not make your troubleshooting process any easier.
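If you want to see what kube-proxy has already programmed for the service (before or after cleaning up your own rules), a harmless way to look is something like the following; kube-proxy tags its rules with a comment containing the namespace/service name, so this should match them:
sudo iptables-save | grep 'default/elastic'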
The following containers are not starting after installing IBM Cloud Private. I had previously installed ICP without a Management node and was doing a new install after having done an 'uninstall', and I did restart the Docker service on all nodes.
Installed a second time with a Management node defined, Master/Proxy on a single node, and two Worker nodes.
Selecting the menu option Platform / Monitoring gets a 502 Bad Gateway error.
Event messages from deployed containers
Deployment - monitoring-prometheus
TYPE SOURCE COUNT REASON MESSAGE
Warning default-scheduler 2113 FailedScheduling
No nodes are available that match all of the following predicates:: MatchNodeSelector (3), NoVolumeNodeConflict (4).
Deployment - monitoring-grafana
TYPE SOURCE COUNT REASON MESSAGE
Warning default-scheduler 2097 FailedScheduling
No nodes are available that match all of the following predicates:: MatchNodeSelector (3), NoVolumeNodeConflict (4).
Deployment - rootkit-annotator
TYPE SOURCE COUNT REASON MESSAGE
Normal kubelet 169.53.226.142 125 Pulled
Container image "ibmcom/rootkit-annotator:20171011" already present on machine
Normal kubelet 169.53.226.142 125 Created
Created container
Normal kubelet 169.53.226.142 125 Started
Started container
Warning kubelet 169.53.226.142 2770 BackOff
Back-off restarting failed container
Warning kubelet 169.53.226.142 2770 FailedSync
Error syncing pod
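In case it helps narrow down the MatchNodeSelector failures, the following should show which labels the nodes actually carry versus what the monitoring deployment asks for (a sketch; I am assuming the deployment lives in kube-system):
kubectl get nodes --show-labels
kubectl -n kube-system get deploy monitoring-prometheus -o jsonpath='{.spec.template.spec.nodeSelector}'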
The management console sometimes displays a 502 Bad Gateway Error after installation or rebooting the master node. If you recently installed IBM Cloud Private, wait a few minutes and reload the page.
If you rebooted the master node, take the following steps:
Configure the kubectl command line interface. See Accessing your IBM Cloud Private cluster by using the kubectl CLI.
Obtain the IP addresses of the icp-ds pods. Run the following command:
kubectl get pods -o wide -n kube-system | grep "icp-ds"
The output resembles the following text:
icp-ds-0 1/1 Running 0 1d 10.1.231.171 10.10.25.134
In this example, 10.1.231.171 is the IP address of the pod.
In high availability (HA) environments, an icp-ds pod exists for each master node.
From the master node, ping the icp-ds pods. Check the IP address for each icp-ds pod by running the following command for each IP address:
ping 10.1.231.171
If the output resembles the following text, you must delete the pod:
connect: Invalid argument
Delete each pod that you cannot reach:
kubectl delete pods icp-ds-0 -n kube-system
In this example, icp-ds-0 is the name of the unresponsive pod.
In HA installations, you might have to delete the pod for each master node.
Obtain the IP address of the replacement pod or pods. Run the following command:
kubectl get pods -o wide -n kube-system | grep "icp-ds"
The output resembles the following text:
icp-ds-0 1/1 Running 0 1d 10.1.231.172 10.10.2
Ping the pods again. Check the IP address for each icp-ds pod by running the following command for each IP address:
ping 10.1.231.172
If you can reach all icp-ds pods, you can access the IBM Cloud Private management console when that pod enters the available state.
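In an HA environment with several icp-ds pods, the per-pod ping check can be scripted along these lines (a sketch only; column 6 of the wide output is the pod IP):
kubectl get pods -o wide -n kube-system | grep "icp-ds" | awk '{print $6}' | xargs -n1 ping -c 1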
I have followed the steps at https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html to launch a multi-node Kubernetes cluster using Vagrant and CoreOS.
But I could not find a way to set an insecure Docker registry for that environment.
To be more specific, when I run
kubectl run api4docker --image=myhost:5000/api4docker:latest --replicas=2 --port=8080
on this setup, it tries to pull the image assuming it is a secure registry. But it is an insecure one.
I appreciate any suggestions.
This is how I solved the issue for now. I will add to this later if I can automate it in the Vagrantfile.
cd ./coreos-kubernetes/multi-node/vagrant
vagrant ssh w1 (and repeat these steps for w2, w3, etc.)
cd /etc/systemd/system/docker.service.d
sudo vi 50-insecure-registry.conf
Add the lines below to this file:
[Service]
Environment=DOCKER_OPTS='--insecure-registry="<your-registry-host>/24"'
After adding this file, we need to restart the Docker service on this worker:
sudo systemctl stop docker
sudo systemctl daemon-reload
sudo systemctl start docker
sudo systemctl status docker
Now, docker pull should work on this worker:
docker pull <your-registry-host>:5000/api4docker
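To confirm the daemon actually picked up the flag, something like this should list the registry (a sketch; the exact wording of the docker info section varies between Docker versions):
docker info | grep -i -A 3 'insecure'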
Let's try to deploy our application on the Kubernetes cluster one more time. Log out from the workers and come back to your host.
$ kubectl run api4docker --image=<your-registry-host>:5000/api4docker:latest --replicas=2 --port=8080 --env="SPRING_PROFILES_ACTIVE=production"
When you get the pods, you should see the status Running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
api4docker-2839975483-9muv5 1/1 Running 0 8s
api4docker-2839975483-lbiny 1/1 Running 0 8s