I am executing the kubectl proxy command via Terraform for my EKS cluster:
resource "null_resource" "start_kube_proxy" {
provisioner "local-exec" {
command = "nohup kubectl proxy --port=8088 &"
}
}
After running this via Terraform, the provisioner never returns and just keeps waiting:
null_resource.start_kube_proxy: Creating...
null_resource.start_kube_proxy: Provisioning with 'local-exec'...
null_resource.start_kube_proxy (local-exec): Executing: ["/bin/sh" "-c" "nohup kubectl proxy --port=8088 &"]
null_resource.start_kube_proxy (local-exec): Starting to serve on 127.0.0.1:8088
null_resource.start_kube_proxy: Still creating... [10s elapsed]
null_resource.start_kube_proxy: Still creating... [20s elapsed]
null_resource.start_kube_proxy: Still creating... [30s elapsed]
null_resource.start_kube_proxy: Still creating... [40s elapsed]
null_resource.start_kube_proxy: Still creating... [50s elapsed]
Any help on how I can execute this kubectl proxy command via Terraform so that it runs in the background without the provisioner hanging?
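For reference, a minimal sketch of the usual fix: local-exec runs the command through /bin/sh -c (as the log above shows) and keeps waiting until the command's stdout/stderr are closed, so a backgrounded kubectl proxy that inherits them keeps the provisioner alive indefinitely. Redirecting the child's output lets the provisioner return immediately; the log path below is just an illustrative choice:

nohup kubectl proxy --port=8088 >/tmp/kubectl-proxy.log 2>&1 &

Use that line as the command value of the local-exec provisioner; the shell exits as soon as the proxy is started, and kubectl proxy keeps serving on port 8088 in the background.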
I'm using GitLab and Docker to get continuous integration for my Spring Boot application, and I'm getting this error:
Cannot connect to the Docker daemon at tcp://xxx.xxx.xx.xxx:2375. Is the docker daemon running?
.development.env:
export SPRING_ACTIVE_PROFILE='development'
export DOCKER_REPO='DOCKER_HUB_ID/app_name:dev'
export APP_NAME='app_name_dev'
export PORT='8080'
export SERVER_IP='xxx.xxx.xx.xxx' #SERVER_IP
export SERVER_SSH_KEY="$DEV_SSH_PRIVATE_KEY"
export DOCKER_HOST='tcp://xxx.xxx.xx.xxx:2375' #SERVER_IP
.gitlab-ci.yml:
services:
  - docker:19.03.7-dind

stages:
  - build and push docker image

docker build:
  image: docker:stable
  stage: build and push docker image
  before_script:
    - source .${CI_COMMIT_REF_NAME}.env #.development.env
  script:
    - docker build --build-arg SPRING_ACTIVE_PROFILE=$SPRING_ACTIVE_PROFILE -t $DOCKER_REPO .
    - docker login -u $DOCKER_USER -p $DOCKER_PASSWORD docker.io
    - docker push $DOCKER_REPO
These are the full logs from GitLab:
Running with gitlab-runner 13.5.0 (ece86343)
on gitlab-server JuhWVkPJ
Preparing the "docker" executor
00:38
Using Docker executor with image docker:stable ...
Starting service docker:19.03.7-dind ...
Pulling docker image docker:19.03.7-dind ...
Using docker image sha256:14af3ba31e635475ec8f7fbe17470424514777621e627a91c41bbbe028dbae16 for docker:19.03.7-dind with digest docker@sha256:2683fcdf7480ea101415833f7793fb058c5f20227890a953b0a70bfc350af5bc ...
Waiting for services to be up and running...
*** WARNING: Service runner-juhwvkpj-project-13-concurrent-0-7c99eb8ace2e2ae6-docker-0 probably didn't start properly.
Health check error:
service "runner-juhwvkpj-project-13-concurrent-0-7c99eb8ace2e2ae6-docker-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2020-12-30T03:14:07.879506461Z Generating RSA private key, 4096 bit long modulus (2 primes)
2020-12-30T03:14:08.459745140Z ..............................................++++
2020-12-30T03:14:08.673203110Z ..................++++
2020-12-30T03:14:08.673231544Z e is 65537 (0x010001)
2020-12-30T03:14:08.713960023Z Generating RSA private key, 4096 bit long modulus (2 primes)
2020-12-30T03:14:08.851463609Z ..............++++
2020-12-30T03:14:09.403244538Z .....................................................++++
2020-12-30T03:14:09.403286293Z e is 65537 (0x010001)
2020-12-30T03:14:09.516423752Z Signature ok
2020-12-30T03:14:09.516463300Z subject=CN = docker:dind server
2020-12-30T03:14:09.516471290Z Getting CA Private Key
2020-12-30T03:14:09.536975767Z /certs/server/cert.pem: OK
2020-12-30T03:14:09.553642146Z Generating RSA private key, 4096 bit long modulus (2 primes)
2020-12-30T03:14:09.927078677Z ...................................................++++
2020-12-30T03:14:10.107451624Z ...................++++
2020-12-30T03:14:10.108457646Z e is 65537 (0x010001)
2020-12-30T03:14:10.156096074Z Signature ok
2020-12-30T03:14:10.156125397Z subject=CN = docker:dind client
2020-12-30T03:14:10.156302268Z Getting CA Private Key
2020-12-30T03:14:10.178703934Z /certs/client/cert.pem: OK
2020-12-30T03:14:10.194290163Z mount: permission denied (are you root?)
2020-12-30T03:14:10.194438175Z Could not mount /sys/kernel/security.
2020-12-30T03:14:10.194456604Z AppArmor detection and --privileged mode might break.
2020-12-30T03:14:10.195933829Z mount: permission denied (are you root?)
*********
Pulling docker image docker:stable ...
Using docker image sha256:b0757c55a1fdbb59c378fd34dde3e12bd25f68094dd69546cf5ca00ddbaa7a33 for docker:stable with digest docker@sha256:fd4d028713fd05a1fb896412805daed82c4a0cc84331d8dad00cb596d7ce3e3a ...
Preparing environment
00:01
Running on runner-juhwvkpj-project-13-concurrent-0 via gitlab-server...
Getting source from Git repository
00:03
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/abdallah/harvis/.git/
Checking out 5568bbc9 as DM_Module...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:36
$ source .${CI_COMMIT_REF_NAME}.env
$ whoami
root
$ docker build --build-arg SPRING_ACTIVE_PROFILE=$SPRING_ACTIVE_PROFILE -t $DOCKER_REPO .
Cannot connect to the Docker daemon at tcp://xxx.xxx.xx.xxx:2375. Is the docker daemon running?
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
I think the error is in DOCKER_HOST; what should I assign to it?
I'd appreciate any help or suggestions.
The Docker daemon can listen for Docker Engine API requests via three different types of socket: unix, tcp, and fd.
Your Docker client is trying to connect via tcp (port 2375), and it seems that socket is not enabled on the daemon.
You need to start the Docker daemon with -H tcp://<ip>:2375,
or put the hosts entry in /etc/docker/daemon.json:
{
  "hosts": ["tcp://<ip>:2375", "unix:///var/run/docker.sock"]
}
EDIT
Binding to 0.0.0.0 is dangerous as David pointed out.
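Whichever host you bind, after editing daemon.json the daemon has to be restarted for the change to take effect. A quick verification, assuming a systemd-based host (note that if your systemd unit already passes -H fd:// on the daemon's command line, a hosts entry in daemon.json conflicts with it and the daemon will refuse to start):

$ sudo systemctl restart docker
$ docker -H tcp://<ip>:2375 version   # prints both Client and Server sections once the tcp socket is up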
I am trying to start a minikube cluster on my macOS, but I always get "Permission denied":
(base) MacBook-Pro-de-..:desktop ..$ minikube start
-bash: /usr/local/bin/minikube: Permission denied
What should I do?
Execute the following commands to fix the file permissions:
$ chmod ugo+rwx ~/.kube/config
$ sudo chown -R $USER ~/.kube
$ chmod +x /usr/local/bin/minikube   # the path from the error message above
Configure the proxy exclusions (both the lowercase and uppercase variables, since different tools read different ones):
export no_proxy=$no_proxy,$(minikube ip)
export NO_PROXY=$NO_PROXY,$(minikube ip)
Then run the minikube command taking the proxy into consideration (the IPs below are just examples):
$ minikube start --alsologtostderr --kubernetes-version v1.13.1 --docker-env HTTP_PROXY=http://10.0.2.2:1087 --docker-env HTTPS_PROXY=http://10.0.2.2:1087 --docker-env NO_PROXY=10.0.2.2,192.168.99.100
$ minikube start --alsologtostderr --kubernetes-version v1.13.2 --docker-env HTTP_PROXY=http://10.0.2.2:3128 --docker-env HTTPS_PROXY=http://10.0.2.2:3128 --docker-env NO_PROXY=10.0.2.2,192.168.99.100
In this case the proxy configuration was:
HTTP_PROXY=http://127.0.0.1:3128
Remember to add your minikube IP to NO_PROXY.
You can find similar problems here: file-permission, kubeconfig.
Now I get this error:
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
E0301 15:19:14.198136 48335 start.go:268] Error setting up kubeconfig: Error reading file "/../.kube/config": open /../.kube/config: not a directory
E0301 15:19:15.128758 48335 util.go:151] Error uploading error message: Error sending error report to https://clouderrorreporting.googleapis.com/v1beta1/projects/k8s-minikube/events:report?key=AIzaSyACUwzG0dEPcl-eOgpDKnyKoUFgHdfoFuA, got response code 400
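The path in the first error, /../.kube/config, suggests that $HOME expanded to an empty string when minikube built the kubeconfig path. A hedged check, with a placeholder home directory:

$ echo "$HOME"                        # should print your home directory, not an empty line
$ export HOME=/Users/<you>            # placeholder; only needed if HOME really was unset
$ export KUBECONFIG=$HOME/.kube/config
$ minikube start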
I'm trying to run a Docker image from a Jenkins pipeline but I am getting this common permission error:
[devops-lab] Running shell script
+ docker run -d --mount source=devops-lab,target=/root/data pmaugeri/alpine-akamai-cli
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.37/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
I observed that this socket is owned by the daemon group:
$ ls -la /var/run/docker.sock
lrwxrwxrwx 1 root daemon 61 May 24 15:50 /var/run/docker.sock -> /Users/pmaugeri/Library/Containers/com.docker.docker/Data/s60
Note that there is no docker group on my system.
So I tried to add the jenkins user to the daemon group, but it didn't solve the issue.
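One hedged aside, since the question doesn't show it: group membership is only evaluated when a process starts, so Jenkins has to be fully restarted after the change, and on macOS you can confirm the membership actually took effect with dseditgroup:

$ dseditgroup -o checkmember -m jenkins daemon   # reports whether jenkins is a member of daemon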
Here is the definition of my pipeline:
node {
    sh 'env > env.txt'
    for (String i : readFile('env.txt').split("\r?\n")) {
        println i
    }
    sh 'whoami'
    docker.image('pmaugeri/alpine-akamai-cli').withRun('--mount source=devops-lab,target=/root/data') {
    }
}
and here is the Jenkins console output:
...
[devops-lab] Running shell script
+ whoami
jenkins
[Pipeline] sh
[devops-lab] Running shell script
+ docker run -d --mount source=devops-lab,target=/root/data pmaugeri/alpine-akamai-cli
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.37/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
Can you please help me make this simple pipeline work?
Thanks in advance,
Pascal
Here is my system configuration:
MacOS: macOS Sierra v10.12.6
Jenkins version: 2.122
Docker version: 18.03.1-ce, build 9ee9f40
I have an EC2 instance created by Terraform, and I can log into it using:
ssh -vvvv -i /home/ec2-user/.ssh/mykey.pub ec2-user@XX.XX.XX.XX
without a password. (XX.XX.XX.XX is the IP of the EC2 instance created by Terraform.)
But when I try to run the Ansible playbook from Terraform once the EC2 instance is created, Ansible cannot connect and the error message is:
aws_instance.dev (local-exec): TASK [Gathering Facts]
*********************************************************
The authenticity of host 'XX.XX.XX.XX (XX.XX.XX.XX)' can't be
established.
...
Are you sure you want to continue connecting (yes/no)?
aws_instance.dev: Still creating... (6m40s elapsed)
Note that the Ansible playbook only starts after I manually force Terraform to sleep for 6 minutes, and by that time the EC2 instance has already started (I can log into it myself, even though Terraform still shows "aws_instance.dev: Still creating..."), i.e.:
resource "aws_instance" "dev" {
...
provisioner "local-exec" {
command = "sleep 6m && ansible-playbook -i hosts myansible.yml"
}
...
}
I run Terraform as ec2-user, and in the Ansible YAML I set:
remote_user: ec2-user
become_user: ec2-user
What is the reason Ansible cannot SSH to the EC2 instance?
There is a message for you:
The authenticity of host 'XX.XX.XX.XX (XX.XX.XX.XX)' can't be
established.
...
Are you sure you want to continue connecting (yes/no)?
Either execute ssh-keyscan XX.XX.XX.XX before running ansible-playbook, or disable host key checking in Ansible; both options are sketched below.
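A minimal sketch of both options, reusing the playbook and inventory names from the question:

# Option 1: pre-seed the host key so SSH never prompts
$ ssh-keyscan XX.XX.XX.XX >> ~/.ssh/known_hosts

# Option 2: disable host key checking for this run only
$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i hosts myansible.yml

Host key checking can also be disabled permanently via host_key_checking = False in the [defaults] section of ansible.cfg, at the cost of losing protection against man-in-the-middle attacks.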
I installed minikube as instructed here: https://github.com/kubernetes/minikube/releases
and started it with a simple minikube start command.
But the next step, which is as simple as kubectl get pods --all-namespaces fails with
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
What did I miss?
I ran into the same issue on my Mac. Basically, I uninstalled both minikube and kubectl and reinstalled them as follows:
Installed minikube:
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.8.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
Installed kubectl:
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
To start a cluster, run the command:
minikube start
Minikube will also create a “minikube” context, and set it to default in kubectl. To switch back to this context later, run this command:
kubectl config use-context minikube
Now, to get the list of all pods, run the command:
kubectl get pods --all-namespaces
Now you should be able to get the list of pods. Also make sure that you don't have a firewall within your network that blocks the connections.
I faced a similar issue on Windows 7 when I changed work environments. As you said, it works fine at home but not at the office; there is a high chance this is caused by firewall policy, which blocks the TLS verification traffic.
Instead of wasting time on troubleshooting (sometimes there is nothing you can do if you cannot turn off the firewall), if you just want to test a local minikube cluster, I would suggest disabling TLS verification.
This is what I have done:
# How to disable minikube TLS verification
## disable TLS verification
$ VBoxManage controlvm minikube natpf1 k8s-apiserver,tcp,127.0.0.1,8443,,8443
$ VBoxManage controlvm minikube natpf1 k8s-dashboard,tcp,127.0.0.1,30000,,30000
$ kubectl config set-cluster minikube-vpn --server=https://127.0.0.1:8443 --insecure-skip-tls-verify
$ kubectl config set-context minikube-vpn --cluster=minikube-vpn --user=minikube
$ kubectl config use-context minikube-vpn
## test kubectl
$ kubectl get pods
## enable local docker client
$ VBoxManage controlvm minikube natpf1 k8s-docker,tcp,127.0.0.1,2374,,2376
$ eval $(minikube docker-env)
$ unset DOCKER_TLS_VERIFY
$ export DOCKER_HOST="tcp://127.0.0.1:2374"
$ alias docker='docker --tls'
## test local docker client
$ docker ps
## test minikube dashboard
$ curl http://127.0.0.1:30000
I also made a small script for this, for your reference.
Hope it is helpful for you.
You just need to restart minikube. Sometimes I have this problem when my computer has been off for a while. I don't think you need to reinstall anything.
First, verify you are in the correct context:
$ kubectl config current-context
minikube
Check minikube status (it should show "Running"; mine below showed "Saved"):
$ minikube status
minikube: Saved
cluster:
kubectl:
Restart minikube:
$ minikube start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Verify it is running (this is what you should see):
$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
I had this issue when connected to Cisco AnyConnect VPN. Once I disconnected, minikube ran fine. There is a discussion on GitHub here: https://github.com/kubernetes/minikube/issues/4540