Istio: how do I set up HTTPS traffic between services on Minikube?

I'm currently learning Istio and ran into a problem while trying to set up HTTPS traffic between containers on a local Minikube test cluster.
There are some related tasks in the Istio docs. Particularly the https-overlay task. I managed to execute the first subtask (without sidecar on the querying container), but the second one fails with the following error when issuing a request to the nginx deployment from the sleep pod:
$ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- curl https://my-nginx -k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (35) error:1401E410:SSL routines:CONNECT_CR_FINISHED:sslv3 alert handshake failure
command terminated with exit code 35
I assume my setup is at fault but I can't really tell what's wrong (or how to debug). Can someone have a look and suggest a fix?
Setup steps are given below under Setup. Environment specs are shown below.
Environment
OS
macOS 10.13.6
kubectl version
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-27T01:15:02Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
minikube version:
$ minikube version
minikube version: v0.31.0
Setup
$ISTIO_HOME should point to the location of the Istio installation.
1. Run Minikube + Istio Startup Script
Warning: This will stop and delete a running minikube instance.
#! /bin/bash
minikube stop;
minikube delete;
minikube start \
--memory=8192 \
--cpus=4 \
--kubernetes-version=v1.10.0 \
--vm-driver=virtualbox;
kubectl apply -f $ISTIO_HOME/install/kubernetes/helm/istio/templates/crds.yaml
kubectl apply -f $ISTIO_HOME/install/kubernetes/istio-demo.yaml
# Set docker registry to minikube registry
eval $(minikube docker-env);
2. Run HTTPS Test Setup Script
Wait until the Istio pods are either running or completed and then
#! /bin/bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/nginx.key -out /tmp/nginx.crt -subj "/CN=my-nginx/O=my-nginx"
kubectl create secret tls nginxsecret --key /tmp/nginx.key --cert /tmp/nginx.crt
kubectl create configmap nginxconfigmap --from-file=$ISTIO_HOME/samples/https/default.conf
kubectl apply -f <(istioctl kube-inject -f $ISTIO_HOME/samples/https/nginx-app.yaml)
kubectl apply -f <(istioctl kube-inject -f $ISTIO_HOME/samples/sleep/sleep.yaml)
3. Issue Request To Nginx From Sleep Pod
Wait for both pods to start running and then run
kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- curl https://my-nginx -k
According to the subtask description, we should see the nginx landing page, but instead the error shown above (exit code 35, handshake failure) is returned.

It seems that you are hitting this bug, https://github.com/istio/istio/issues/7844, which is still open.

Related

How to enable port forward with microk8s

I'm playing around with microk8s and I simply want to run an Apache server and navigate to its default page on the same machine. I'm on a Mac (ARM M1):
microk8s kubectl run test-pod --image=ubuntu/apache2:2.4-20.04_beta --port=80
~ $ microk8s kubectl get pods 2
NAME READY STATUS RESTARTS AGE
test-pod 1/1 Running 0 8m43s
then I try to enable the forward:
◼ ~ $ microk8s kubectl port-forward test-pod :80
Forwarding from 127.0.0.1:37551 -> 80
but:
◼ ~ $ wget http://localhost:37551
--2022-12-24 18:54:37-- http://localhost:37551/
Resolving localhost (localhost)... 127.0.0.1, ::1
Connecting to localhost (localhost)|127.0.0.1|:8080... failed: Connection refused.
Connecting to localhost (localhost)|::1|:8080... failed: Connection refused.
The logs look ok:
◼ ~ $ microk8s kubectl logs test-pod 130
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.1.254.96. Set the 'ServerName' directive globally to suppress this message
dashboard proxy does work fine and I can navigate to it:
◼ ~ $ microk8s dashboard-proxy
Checking if Dashboard is running.
Dashboard will be available at https://192.168.64.2:10443
Answering myself:
I should use the IP assigned to the Multipass guest machine. This is not Docker :)
For some reason I haven't figured out, as asked here, forwarding from the guest does not work properly on Mac. I should open a guest shell and forward from there. That way, it will work. See the answer on the linked post.
Hope this will save some time for future Mac users.

Worker nodes not available

I have set up and installed IBM Cloud Private CE with two Ubuntu images in VirtualBox. I can ssh into both images and from there ssh into the others. The ICp dashboard shows only one active node; I was expecting two.
I explicitly ran the command (from a root user on master node):
docker run -e LICENSE=accept --net=host \
-v "$(pwd)":/installer/cluster \
ibmcom/cfc-installer install -l \
192.168.27.101
The result of this command seemed to be a successful addition of the worker node:
PLAY RECAP *********************************************************************
192.168.27.101 : ok=45 changed=11 unreachable=0 failed=0
But still the worker node isn't showing in the dashboard.
What should I be checking to ensure the worker node will work for the master node?
If you're using Vagrant to configure IBM Cloud Private, I'd highly recommend trying https://github.com/IBM/deploy-ibm-cloud-private
The project will use a Vagrantfile to configure a master/proxy and then provision 2 workers within the image using LXD. You'll get better density and performance on your laptop with this configuration over running two full Virtual Box images (1 for master/proxy, 1 for the worker).
You can check on your worker node with the following steps:
1. Check the cluster node status:
kubectl get nodes
to check the status of the newly added worker node. If it's NotReady, check the kubelet log to see if there is an error message about why kubelet is not running properly:
ICp 2.1:
systemctl status kubelet
ICp 1.2:
docker ps -a | grep kubelet
to get kubelet_containerid, then
docker logs kubelet_containerid
2. Run this to get kubectl working:
ln -sf /opt/kubernetes/hyperkube /usr/local/bin/kubectl
3. Run the command below on the master node to identify failed pods, if any, in the setup. Run this to get the details of the pods running in the environment:
kubectl -n kube-system get pods -o wide
For restarting any failed pods of ICP:
txt="0/";ns="kube-system";type="pods"; kubectl -n $ns get $type | grep "$txt" | awk '{ print $1 }' | xargs kubectl -n $ns delete $type
4. Now run the kubectl cluster-info:
kubectl get nodes
5. Then check the cluster info command of kubectl. Check whether kubectl version is giving you https://localhost:8080 or https://masternodeip:8001:
kubectl cluster-info
Do you get the output? If not, log in to https://masternodeip:8443 using the admin login, and then copy the "configure client" CLI settings by clicking on admin on the panel, paste them in your master node, and run:
kubectl cluster-info

How to connect to SSHD inside a Docker container from Windows?

I have a Ruby on Rails environment, and I'm converting it to run in Docker. This is largely because the development machine is a Windows laptop and the server is not. I have the Docker container mainly up and running, and now I want to connect the RubyMine debugger. To accomplish this, the recommendation is to set up an SSH server in the container.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/207649545-Use-RubyMine-and-Docker-for-development-run-and-debug-before-deployment-for-testing-
I successfully added SSHD to the container using the Dockerfile lines from https://docs.docker.com/engine/examples/running_ssh_service/#build-an-egsshd-image minus the EXPOSE 22 (because it wasn't working with the port mapping in the docker-compose.yml). But the port is not accessible on the local machine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6652389d248c civilservice_web "bundle exec rails..." 16 minutes ago Up 16 minutes 0.0.0.0:3000->3000/tcp, 0.0.0.0:3022->22/tcp civilservice_web_1
When I try to point PUTTY at localhost and 3022, it says that the server unexpectedly closed the connection.
What am I missing here?
This is my dockerfile
FROM ruby:2.2
RUN apt-get update && apt-get install -y \
build-essential \
libpq-dev \
nodejs \
openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
CMD ["/usr/sbin/sshd", "-D"]
RUN mkdir /MyApp
WORKDIR /MyApp
ADD Gemfile /MyApp/Gemfile
ADD Gemfile.lock /MyApp/Gemfile.lock
RUN bundle install
ADD . /MyApp
and this is my docker-compose.yml
version: '2'
services:
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/CivilService
    ports:
      - "3000:3000"
      - "3022:22"
DOCKER_HOST doesn't appear to be an environment variable.
docker version outputs the following:
Client:
Version: 17.03.0-ce
API version: 1.26
Go version: go1.7.5
Git commit: 60ccb22
Built: Thu Feb 23 10:40:59 2017
OS/Arch: windows/amd64
Server:
Version: 17.03.0-ce
API version: 1.26 (minimum version 1.12)
Go version: go1.7.5
Git commit: 3a232c8
Built: Tue Feb 28 07:52:04 2017
OS/Arch: linux/amd64
Experimental: true
docker run -it --rm --net container:civilservice_web_1 busybox netstat -lnt outputs:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.11:35455 0.0.0.0:* LISTEN
SSHD is now running alongside the Rails app, but the recipe that I was working from for setting up the service is not correct for the flavor of Linux that came with my base image: https://docs.docker.com/engine/examples/running_ssh_service/#build-an-egsshd-image
The image I'm using is based on Debian 8. Could someone point me at where the example breaks down?
Your sshd process isn't running. That's visible in the netstat output:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.11:35455 0.0.0.0:* LISTEN
But as user2105103 points out, I should have realized that if I compared your docker-compose.yml with the Dockerfile. You define the sshd command in the image with a Dockerfile line:
CMD ["/usr/sbin/sshd", "-D"]
But then you override your image setting when running the container with the docker-compose command:
command: bundle exec rails s -p 3000 -b '0.0.0.0'
So, the only thing run, as you can see in the netstat, is the rails app listening on 3000. If you need multiple commands to run, then you can docker exec to kick off the second command (not recommended for a second service like this), use a command that launches sshd in the background and rails in the foreground (fairly ugly), or you can consider something like supervisord.
Personally, I'd skip sshd and just use docker exec -it civilservice_web_1 /bin/bash to get a prompt inside the container when you need it.

Docker container sending empty responses

Hoping someone can tell me what I am missing. This is a Ruby app using WEBrick, and I am trying to containerize it. Running on Mac OS X 10.12.3 Sierra. Here is my Dockerfile:
FROM ruby:2.4.0-alpine
RUN apk add --no-cache gcc musl-dev libstdc++ g++ make
RUN gem install jekyll bundler redcarpet
RUN mkdir -p /usr/app/jekyll
COPY . /usr/app/jekyll
WORKDIR /usr/app/jekyll
EXPOSE 4000
CMD ["jekyll", "serve"]
Here is how the image is built
docker build -t chb0docker/cheat .
If I run the service directly on the host, it runs fine:
Violas-MacBook-Pro:progfun-wiki cbongiorno$ jekyll serve &
[1] 49286
Violas-MacBook-Pro:progfun-wiki cbongiorno$ Configuration file: /Users/cbongiorno/development/progfun-wiki/_config.yml
Configuration file: /Users/cbongiorno/development/progfun-wiki/_config.yml
Source: /Users/cbongiorno/development/progfun-wiki
Destination: /Users/cbongiorno/development/progfun-wiki/_site
Incremental build: disabled. Enable with --incremental
Generating...
done in 0.409 seconds.
Auto-regeneration: enabled for '/Users/cbongiorno/development/progfun-wiki'
Configuration file: /Users/cbongiorno/development/progfun-wiki/_config.yml
Server address: http://127.0.0.1:4000/
Server running... press ctrl-c to stop.
Verify the server is up:
Violas-MacBook-Pro:progfun-wiki cbongiorno$ curl localhost:4000 2> /dev/null | wc -l
40
Now run it in Docker:
Violas-MacBook-Pro:progfun-wiki cbongiorno$ fg
jekyll serve
^C
Violas-MacBook-Pro:progfun-wiki cbongiorno$ docker run -d --name jekyll -p 4000:4000 chb0docker/cheat
e766e4acb007033583885b1a3c52dc3c2dc51c6929c8466f3a4ff958f76ebc5f
Verify the process:
Violas-MacBook-Pro:progfun-wiki cbongiorno$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e766e4acb007 chb0docker/cheat "jekyll serve" 19 minutes ago Up 32 seconds 0.0.0.0:4000->4000/tcp jekyll
Violas-MacBook-Pro:progfun-wiki cbongiorno$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' jekyll
172.17.0.3
Violas-MacBook-Pro:progfun-wiki cbongiorno$ docker logs jekyll
Configuration file: /usr/app/jekyll/_config.yml
Configuration file: /usr/app/jekyll/_config.yml
Source: /usr/app/jekyll
Destination: /usr/app/jekyll/_site
Incremental build: disabled. Enable with --incremental
Generating...
done in 0.329 seconds.
Auto-regeneration: enabled for '/usr/app/jekyll'
Configuration file: /usr/app/jekyll/_config.yml
Server address: http://127.0.0.1:4000/
Server running... press ctrl-c to stop.
Now try to get some data:
Violas-MacBook-Pro:progfun-wiki cbongiorno$ curl localhost:4000 2> /dev/null | wc -l
0
Violas-MacBook-Pro:progfun-wiki cbongiorno$ curl 172.17.0.3:4000 2> /dev/null | wc -l
0
Violas-MacBook-Pro:progfun-wiki cbongiorno$ curl 0.0.0.0:4000 2> /dev/null | wc -l
0
But if we execute the above GET (with wget instead of curl, because curl is not installed in this container), we can see all is well inside the container:
docker exec -it jekyll /usr/bin/wget -q -O - localhost:4000 | wc -l
40
Is this an app issue?
Looks like Jekyll is binding to localhost. Either start the container like this:
docker run -d --name jekyll -p 127.0.0.1:4000:4000
Or have Jekyll bind to 0.0.0.0:
CMD ["jekyll", "serve", "--host", "0.0.0.0"]
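Both threads above are settled by reading a netstat -lnt listing taken inside the container: in the sshd case nothing is listening on port 22 at all, and in the Jekyll case the server is listening only on 127.0.0.1, which a published port (-p) cannot reach. As an added example, not part of either answer, this POSIX shell function classifies such a listing for a given port; the two netstat listings are constructed to mirror the two questions.

```shell
#!/bin/sh
# Constructed netstat -lnt output matching the sshd question above:
# only the Rails app on 3000 (and the local resolver) is listening, nothing on 22.
NETSTAT_SSHD='Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:3000            0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.11:35455        0.0.0.0:*               LISTEN'

# Constructed output for the Jekyll container: the server is up, but bound to
# the container's loopback address, so the published port 4000 has nowhere to go.
NETSTAT_JEKYLL='Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 127.0.0.1:4000          0.0.0.0:*               LISTEN'

# classify_port NETSTAT_OUTPUT PORT
# Prints one word:
#   missing   - no LISTEN socket on PORT (the service is not running at all)
#   loopback  - listening only on 127.x.x.x or ::1 (unreachable through -p)
#   reachable - bound to 0.0.0.0, :: or another non-loopback address
classify_port() {
    # Keep the local address of every LISTEN socket whose port matches.
    addrs=$(printf '%s\n' "$1" | awk -v p="$2" '
        $NF == "LISTEN" { n = split($4, a, ":"); if (a[n] == p) print $4 }')
    if [ -z "$addrs" ]; then
        echo missing
    elif printf '%s\n' "$addrs" | grep -Evq '^(127\.|::1:)'; then
        # At least one listener is on a non-loopback address.
        echo reachable
    else
        echo loopback
    fi
}

# Stand-alone run: the cases from the two questions above.
classify_port "$NETSTAT_SSHD" 22       # missing
classify_port "$NETSTAT_JEKYLL" 4000   # loopback
classify_port "$NETSTAT_SSHD" 3000     # reachable
```

Run it against a live container with docker run --rm --net container:NAME busybox netstat -lnt, as the sshd question does, and pass that output as the first argument.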

Cannot connect to Minikube on macOS

I installed minikube as instructed here https://github.com/kubernetes/minikube/releases
and started it with a simple minikube start command.
But the next step, which is as simple as kubectl get pods --all-namespaces fails with
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
What did I miss?
I ran into the same issue using my Mac, and basically I uninstalled both minikube and kubectl and installed them as follows:
Installed Minikube:
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.8.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
Installed kubectl:
curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
Start a cluster, run the command:
minikube start
Minikube will also create a “minikube” context, and set it to default in kubectl. To switch back to this context later, run this command:
kubectl config use-context minikube
Now to get the list of all pods run the command:
kubectl get pods --all-namespaces
Now you should be able to get the list of pods. Also make sure that you don't have a firewall within your network that blocks the connections.
I faced a similar issue on Win7 when I changed work environment. As you said, it is working fine at home but not at the office, so there is a high chance it is caused by a firewall policy that cannot pass TLS verification.
Instead of wasting time on troubleshooting (sometimes there is nothing to do if you cannot turn off the firewall), if you just want to test a local minikube cluster, I would suggest disabling TLS verification.
This is what I have done:
# How to disable minikube TLS verification
## disable TLS verification
$ VBoxManage controlvm minikube natpf1 k8s-apiserver,tcp,127.0.0.1,8443,,8443
$ VBoxManage controlvm minikube natpf1 k8s-dashboard,tcp,127.0.0.1,30000,,30000
$ kubectl config set-cluster minikube-vpn --server=https://127.0.0.1:8443 --insecure-skip-tls-verify
$ kubectl config set-context minikube-vpn --cluster=minikube-vpn --user=minikube
$ kubectl config use-context minikube-vpn
## test kubectl
$ kubectl get pods
## enable local docker client
$ VBoxManage controlvm minikube natpf1 k8s-docker,tcp,127.0.0.1,2374,,2376
$ eval $(minikube docker-env)
$ unset DOCKER_TLS_VERIFY
$ export DOCKER_HOST="tcp://127.0.0.1:2374"
$ alias docker='docker --tls'
## test local docker client
$ docker ps
## test minikube dashboard
curl http://127.0.0.1:30000
Also, I made a small script for this for your reference.
Hope it is helpful for you.
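The set-cluster step above, with --insecure-skip-tls-verify, leaves a cluster entry in the kubeconfig that never checks the API server's certificate, and it is easy to forget once the firewall problem is gone. As an added example (not part of the answer above), this POSIX shell function scans a kubeconfig in the layout kubectl writes and lists the clusters that have verification disabled; the embedded config is constructed for the demonstration.

```shell
#!/bin/sh
# Constructed kubeconfig in the layout `kubectl config view` writes: one entry
# created with --insecure-skip-tls-verify (as in the answer above) and one
# normal minikube entry that pins a CA file.
SAMPLE_KUBECONFIG='apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://127.0.0.1:8443
  name: minikube-vpn
- cluster:
    certificate-authority: /Users/me/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube-vpn
    user: minikube
  name: minikube-vpn
current-context: minikube-vpn
kind: Config'

# insecure_clusters KUBECONFIG_TEXT
# Prints "name server" for every cluster whose entry disables TLS verification.
# Relies on kubectl's own formatting: a "clusters:" section of "- cluster:"
# items, keys indented by four spaces and the item's "name:" by two.
insecure_clusters() {
    printf '%s\n' "$1" | awk '
        /^clusters:/                              { in_clusters = 1; next }
        /^[A-Za-z]/                               { in_clusters = 0 }
        !in_clusters                              { next }
        /^- cluster:/                             { insecure = 0; server = ""; next }
        /^    insecure-skip-tls-verify:[ ]*true/  { insecure = 1 }
        /^    server:/                            { server = $2 }
        /^  name:/                                { if (insecure) print $2, server }
    '
}

# kubeconfig_is_safe KUBECONFIG_TEXT: succeeds when no cluster skips verification,
# so it can gate a script before it talks to a cluster.
kubeconfig_is_safe() {
    [ -z "$(insecure_clusters "$1")" ]
}

insecure_clusters "$SAMPLE_KUBECONFIG"   # prints: minikube-vpn https://127.0.0.1:8443
```

To check the real file, pass "$(kubectl config view --raw)" or "$(cat ~/.kube/config)" instead of the sample.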
You need to just restart minikube. Sometimes I have this problem when my computer has been off for a while. I don't think you need to reinstall anything.
First verify you are in the correct context
$ kubectl config current-context
minikube
Check Minikube status (status should show "Running", mine below showed "Saved")
$ minikube status
minikube: Saved
cluster:
kubectl:
Restart minikube
$ minikube start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Verify it is running (This is what you should see)
$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
I had this issue when connected to Cisco AnyConnect VPN. Once I disconnected, minikube ran fine. Discussion on github here: https://github.com/kubernetes/minikube/issues/4540
