We have configured two Kubernetes clouds in our Jenkins:
kubernetes: this one runs all the build jobs related to setting up infra, building Spring Boot apps and their Docker images, etc.; it's our DevOps automation cluster.
apps-dev: this is where all our apps are deployed, so it's our runtime (non-production) cluster.
I'm trying to run Cucumber tests via Maven, but since we do not expose services via an ALB in the apps-dev cluster, I wanted to run the Maven job as a pod on the apps-dev cluster.
We've added a pod label to the template, but the Jenkins pipeline starts with just the jnlp container.
pipeline {
    agent {
        kubernetes {
            cloud 'apps-dev'
            yamlFile '''
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins/label: jenkins-apps-dev-agent
  namespace: jenkins-jobs
containers:
- name: mavenjdk11
  image: maven:3.8.2-adoptopenjdk-11
  resources:
    limits:
      memory: "512Mi"
      cpu: "500m"
    requests:
      memory: "256Mi"
      cpu: "100m"
  command:
  - sleep
  args:
  - 99d
volumes:
- name: sharedvolume
  emptyDir: {}
'''
        }
    }
...
and the logs:
Created Pod: foo jenkins-jobs/foo-foo-integration-foo-feature-tests-17-m-4hhpq
[Normal][jenkins-jobs/foo-foo-integration-foo-feature-tests-17-m-4hhpq][LoggingEnabled] Successfully enabled logging for pod
Still waiting to schedule task
‘foo-foo-integration-foo-feature-tests-17-m-4hhpq’ is offline
[Normal][jenkins-jobs/foo-foo-integration-foo-feature-tests-17-m-4hhpq][Scheduled] Successfully assigned jenkins-jobs/foo-foo-integration-foo-feature-tests-17-m-4hhpq to fargate-ip-176-24-3-48.ec2.internal
[Normal][jenkins-jobs/foo-foo-integration-foo-feature-tests-17-m-4hhpq][Pulling] Pulling image "jenkins/inbound-agent:4.3-4"
[Normal][jenkins-jobs/foo-foo-integration-foo-feature-tests-17-m-4hhpq][TaintManagerEviction] Cancelling deletion of Pod jenkins-jobs/foo-foo-integration-foo-feature-tests-17-m-4hhpq
[Normal][jenkins-jobs/foo-foo-integration-foo-feature-tests-17-m-4hhpq][Pulled] Successfully pulled image "jenkins/inbound-agent:4.3-4" in 15.29095263s
[Normal][jenkins-jobs/foo-foo-integration-foo-feature-tests-17-m-4hhpq][Created] Created container jnlp
[Normal][jenkins-jobs/foo-foo-integration-foo-feature-tests-17-m-4hhpq][Started] Started container jnlp
Agent foo-foo-integration-foo-feature-tests-17-m-4hhpq is provisioned from template foo_foo-integration_foo-feature-tests_17-mnscq-lz8wj
---
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations:
    buildUrl: "https://jenkins.cloud.bar.com/job/foo/job/foo-integration/job/foo-feature-tests/17/"
    runUrl: "job/foo/job/foo-integration/job/foo-feature-tests/17/"
  labels:
    jenkins/label-digest: "baf69ff51703705e7e8252b6f5a54fc663dae21d"
    jenkins/label: "foo_foo-integration_foo-feature-tests_17-mnscq"
  name: "foo-foo-integration-foo-feature-tests-17-m-4hhpq"
spec:
  containers:
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_AGENT_NAME"
      value: "foo-foo-integration-foo-feature-tests-17-m-4hhpq"
    - name: "JENKINS_WEB_SOCKET"
      value: "true"
    - name: "JENKINS_NAME"
      value: "foo-foo-integration-foo-feature-tests-17-m-4hhpq"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "https://jenkins.cloud.bar.com/"
    image: "jenkins/inbound-agent:4.3-4"
    name: "jnlp"
    resources:
      limits: {}
      requests:
        memory: "256Mi"
        cpu: "100m"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  nodeSelector:
    kubernetes.io/os: "linux"
  restartPolicy: "Never"
  volumes:
  - emptyDir:
      medium: ""
    name: "workspace-volume"
There's an error in my YAML; it should have been:
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins/label: jenkins-apps-dev-agent
  namespace: jenkins-jobs
spec:
  containers:
  - name: mavenjdk11
    image: maven:3.8.2-adoptopenjdk-11
    resources:
      limits:
        memory: "512Mi"
        cpu: "500m"
      requests:
        memory: "256Mi"
        cpu: "100m"
    command:
    - sleep
    args:
    - 99d
  volumes:
  - name: sharedvolume
    emptyDir: {}
The spec: before containers: was missing.
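With spec: in place the pod gets the mavenjdk11 container alongside jnlp, but pipeline steps still run in jnlp unless you select the other container. A minimal sketch of the stages section, assuming a hypothetical stage name and Maven goal (neither is shown in the original pipeline):
    stages {
        stage('cucumber-tests') {          // hypothetical stage name
            steps {
                container('mavenjdk11') {  // run the steps inside the Maven container instead of jnlp
                    sh 'mvn verify'        // placeholder goal for the cucumber run
                }
            }
        }
    }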
Related
I'm trying to set up an Elasticsearch StatefulSet. I realise there are some similar questions that have been asked, but none of them help in my circumstance.
The first version of the Elasticsearch StatefulSet worked fine with the following config:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-volume
  labels:
    type: local
spec:
  storageClassName: do-block-storage
  capacity:
    storage: 100M
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/elasticsearch"
---
apiVersion: v1
kind: PersistentVolumeClaim # Create PVC
metadata:
  name: elasticsearch-volume-claim # Sets PVC's name
  labels:
    app: elasticsearch # Defines app to create PVC for
spec:
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100M # Sets PVC's size
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: elasticsearch
  ports:
    - port: 9200 # To get at the elasticsearch container, just hit the service on 9200
      targetPort: 9200 # routes to the exposed port on elasticsearch
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch # name of the StatefulSet
  namespace: default
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch # should match service > spec.selector.app
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      volumes:
        - name: elasticsearch-pvc
          persistentVolumeClaim:
            claimName: elasticsearch-volume-claim
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:8.2.3
          resources:
            limits:
              cpu: 100m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: elasticsearch-pvc
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: search
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.type
              value: single-node
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
            - name: xpack.security.enabled
              value: "false"
      initContainers:
        - name: fix-permissions
          image: busybox
          command:
            ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: elasticsearch-pvc
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
I then tried to implement a version of this with multiple replicas:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: elasticsearch
  ports:
    - port: 9200 # To get at the elasticsearch container, just hit the service on 9200
      targetPort: 9200 # routes to the exposed port on elasticsearch
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster # name of the StatefulSet
spec:
  serviceName: elasticsearch
  replicas: 2
  selector:
    matchLabels:
      app: elasticsearch # should match service > spec.selector.app
  volumeClaimTemplates:
    - metadata:
        name: elasticsearch-pvc
        labels:
          app: elasticsearch
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 100M
        storageClassName: do-block-storage
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      # volumes:
      #   - name: elasticsearch-pvc
      #     persistentVolumeClaim:
      #       claimName: elasticsearch-volume-claim
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:8.2.3
          resources:
            limits:
              cpu: 100m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: elasticsearch-pvc
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: search
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.seed_hosts
              value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
            - name: cluster.initial_master_nodes
              value: "es-cluster-0,es-cluster-1,es-cluster-2"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
      initContainers:
        - name: fix-permissions
          image: busybox
          command:
            ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: elasticsearch-pvc
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
However I ran into the error: 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
I subsequently reduced the replicas to just 1 and manually created the PV, in case DO was having an issue creating the PVC without a PV (even though DO should dynamically create the PVC and PV, because it works with the multi-replica Postgres StatefulSet which I set up in exactly the same way):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-volume-1
spec:
  capacity:
    storage: 100M
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: do-block-storage
  hostPath:
    path: "/data/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - es-cluster-0
This again yielded the error: 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
After spending a while debugging I gave up and decided to revert to my single-replica Elasticsearch StatefulSet using the method I had originally used.
But once again I got the error "0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims."!
I don't have a clue what's going on here. Why am I getting this error even though I'm only trying to create a single replica, and I have manually defined the PV and PVC, which worked fine before?
Turns out the issue was indeed DigitalOcean specific. In the second attempt, when I tried to create multiple replicas, I had to use dynamic volume provisioning via volumeClaimTemplates and set the storage class to do-block-storage, which as it turns out has a minimum volume size of 1Gi!
Once I updated the storage request to 1Gi it all started working.
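For reference, here is a sketch of the claim template that worked, taken from the manifest above with only the storage request raised to DigitalOcean's 1Gi minimum:
  volumeClaimTemplates:
    - metadata:
        name: elasticsearch-pvc
        labels:
          app: elasticsearch
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi # do-block-storage volumes cannot be smaller than 1Gi
        storageClassName: do-block-storage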
Trying to set up an Elasticsearch cluster on Kubernetes; the problem I am having is that the pods aren't able to talk to each other by their respective hostnames, but the IP addresses work.
So for example I'm currently trying to set up 3 master nodes, es-master-0, es-master-1 and es-master-2. If I log into one of the containers and ping another by its pod IP it's fine, but if I try to ping, say, es-master-1 from es-master-0 by hostname it can't find it.
Clearly I'm missing something here. Currently launching this config to try to get it working:
apiVersion: v1
kind: Service
metadata:
  name: ed
  labels:
    component: elasticsearch
    role: master
spec:
  selector:
    component: elasticsearch
    role: master
  ports:
    - name: transport1
      port: 9300
      protocol: TCP
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-master
  labels:
    component: elasticsearch
    role: master
spec:
  selector:
    matchLabels:
      component: elasticsearch
      role: master
  serviceName: ed
  replicas: 3
  template:
    metadata:
      labels:
        component: elasticsearch
        role: master
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - { key: es-master, operator: In, values: [ "true" ] }
      initContainers:
        - name: init-sysctl
          image: busybox:1.27.2
          command:
            - sysctl
            - -w
            - vm.max_map_count=262144
          securityContext:
            privileged: true
      dnsPolicy: "None"
      dnsConfig:
        options:
          - name: ndots
            value: "6"
        nameservers:
          - 10.85.0.10
        searches:
          - ed.es.svc.cluster.local
          - es.svc.cluster.local
          - svc.cluster.local
          - cluster.local
          - home
          - node1
      containers:
        - name: es-master
          image: docker.elastic.co/elasticsearch/elasticsearch:7.17.5
          imagePullPolicy: Always
          securityContext:
            privileged: true
          env:
            - name: ES_JAVA_OPTS
              value: -Xms2048m -Xmx2048m
          resources:
            requests:
              cpu: "0.25"
            limits:
              cpu: "2"
          ports:
            - containerPort: 9300
              name: transport1
          livenessProbe:
            tcpSocket:
              port: transport1
            initialDelaySeconds: 60
            periodSeconds: 10
          volumeMounts:
            - name: storage
              mountPath: /data
            - name: config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml
      volumes:
        - name: config
          configMap:
            name: es-master-config
  volumeClaimTemplates:
    - metadata:
        name: storage
      spec:
        storageClassName: "local-path"
        accessModes: [ ReadWriteOnce ]
        resources:
          requests:
            storage: 2Gi
It's clearly somehow not resolving the hostnames.
For pod-to-pod communication you can use the headless Kubernetes Service you have already defined. Each pod of a StatefulSet gets a stable DNS record of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local through its headless service, so from es-master-0 you can reach es-master-1 as es-master-1.ed (or fully qualified as es-master-1.ed.<namespace>.svc.cluster.local) rather than by the bare pod hostname.
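In practice that means the Elasticsearch discovery settings should use those service-backed names rather than bare pod hostnames. A sketch of what the es-master-config ConfigMap could contain; its actual contents are not shown in the question, so the values below are assumptions:
apiVersion: v1
kind: ConfigMap
metadata:
  name: es-master-config
data:
  elasticsearch.yml: |
    cluster.name: es-cluster                 # hypothetical cluster name
    network.host: 0.0.0.0
    discovery.seed_hosts:                    # headless-service DNS names, not bare pod hostnames
      - es-master-0.ed
      - es-master-1.ed
      - es-master-2.ed
    cluster.initial_master_nodes:
      - es-master-0
      - es-master-1
      - es-master-2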
I am facing a problem with my SonarQube setup. I've been trying to set it up, but I get this error from kubectl logs sonar-574d99bfb5-dr8nx -n sonar: container "sonarqube" in pod "sonar-574d99bfb5-dr8nx" is waiting to start: CreateContainerConfigError.
And when I describe the pod with kubectl describe pod sonar-574d99bfb5-dr8nx -n sonar
I get this:
Name:         sonar-574d99bfb5-dr8nx
Namespace:    sonar
Priority:     0
Node:         master01/192.168.137.136
Start Time:   Tue, 22 Mar 2022 20:30:16 +0000
Labels:       app=sonar
              pod-template-hash=574d99bfb5
Annotations:  cni.projectcalico.org/containerID: 734ba33acb9e2c007861112ffe7c1fce84fa3a434494a0df6951a7b4b6b8dacb
              cni.projectcalico.org/podIP: 10.42.241.105/32
              cni.projectcalico.org/podIPs: 10.42.241.105/32
Status:       Pending
IP:           10.42.241.105
IPs:
  IP:  10.42.241.105
Controlled By:  ReplicaSet/sonar-574d99bfb5
Containers:
  sonarqube:
    Container ID:
    Image:          sonarqube:latest
    Image ID:
    Port:           9000/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CreateContainerConfigError
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  2Gi
    Requests:
      memory:  1Gi
    Environment Variables from:
      sonar-config  ConfigMap  Optional: false
    Environment:    <none>
    Mounts:
      /opt/sonarqube/data/ from app-pvc (rw,path="data")
      /opt/sonarqube/extensions/ from app-pvc (rw,path="extensions")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q22lb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  app-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  sonar-pvc
    ReadOnly:   false
  kube-api-access-q22lb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  12m                   default-scheduler  Successfully assigned sonar/sonar-574d99bfb5-dr8nx to master01
  Warning  Failed     10m (x12 over 12m)    kubelet            Error: stat /home/mtst/data-sonar-pvc: no such file or directory
  Normal   Pulled     2m24s (x50 over 12m)  kubelet            Container image "sonarqube:latest" already present on machine
Here's my PV/PVC YAML:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sonar-pv
  namespace: sonar
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/mtst/data-sonar-pvc"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonar-pvc
  namespace: sonar
  labels:
    type: local
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
If there's anything that can help me resolve my issue I would appreciate it.
Thank you.
I had the same issue with the AWX Postgres deployment.
Credit goes to "Kubernetes 1.17.2 Rancher 2.3.5 CreateContainerConfigError: stat no such file or directory but the directory IS there" for resolving the issue for me.
Running a Rancher Kubernetes cluster and wanting a custom PV, I needed to add the type field below to my PersistentVolume definition:
hostPath:
  path: "/home/mtst/data-sonar-pvc"
  type: "DirectoryOrCreate"
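Applied to the PV from the question above, the full definition would look roughly like this (only the type field is new):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sonar-pv
  namespace: sonar
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/mtst/data-sonar-pvc"
    type: "DirectoryOrCreate" # create the host directory if it does not already exist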
I am trying to deploy an Elasticsearch StatefulSet and have the storage provisioned by a rook-ceph StorageClass.
The pod is stuck in Pending because of:
Warning FailedScheduling 87s default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
The StatefulSet looks like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: rcc
  labels:
    app: elasticsearch
    tier: backend
    type: db
spec:
  serviceName: es
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
      tier: backend
      type: db
  template:
    metadata:
      labels:
        app: elasticsearch
        tier: backend
        type: db
    spec:
      terminationGracePeriodSeconds: 300
      initContainers:
        - name: fix-the-volume-permission
          image: busybox
          command:
            - sh
            - -c
            - chown -R 1000:1000 /usr/share/elasticsearch/data
          securityContext:
            privileged: true
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
        - name: increase-the-vm-max-map-count
          image: busybox
          command:
            - sysctl
            - -w
            - vm.max_map_count=262144
          securityContext:
            privileged: true
        - name: increase-the-ulimit
          image: busybox
          command:
            - sh
            - -c
            - ulimit -n 65536
          securityContext:
            privileged: true
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.1
          ports:
            - containerPort: 9200
              name: http
            - containerPort: 9300
              name: tcp
          resources:
            requests:
              memory: 4Gi
            limits:
              memory: 6Gi
          env:
            - name: cluster.name
              value: elasticsearch
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.zen.ping.unicast.hosts
              value: "elasticsearch-0.es.default.svc.cluster.local,elasticsearch-1.es.default.svc.cluster.local,elasticsearch-2.es.default.svc.cluster.local"
            - name: ES_JAVA_OPTS
              value: -Xms4g -Xmx4g
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: rook-cephfs
        resources:
          requests:
            storage: 5Gi
and the reason the PVC is not getting created is:
Normal FailedBinding 47s (x62 over 15m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Any idea what I'm doing wrong?
After adding the rook-ceph-block StorageClass and changing the storageClassName in the volumeClaimTemplates to it, everything worked without any issues.
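For reference, a minimal sketch of the changed claim template, assuming the rest of the StatefulSet stays exactly as in the question:
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: rook-ceph-block # block-mode StorageClass created alongside the rook-ceph cluster
        resources:
          requests:
            storage: 5Gi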
Hello I'm currently setting up a rook-cephfs test environment using minikube running on Windows 10.
So far I've run crds.yaml, common.yaml, operator.yaml and cluster-test.yaml. I'm following the guide at https://github.com/kubernetes/kubernetes/tree/release-1.9/cluster/addons/registry to set up the storage.
From this guide, I've created the ReplicationController and the Service. The issue I'm having is that when I run kubectl get svc, I don't see the service. Any idea why it's not showing up? Thanks
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-registry
  namespace: kube-system
  labels:
    k8s-app: kube-registry-upstream
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeRegistry"
spec:
  selector:
    k8s-app: kube-registry-upstream
  ports:
    - name: registry
      port: 5000
      protocol: TCP
Docker registry
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-registry-v0
  namespace: kube-system
  labels:
    k8s-app: kube-registry-upstream
    version: v0
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-registry-upstream
    version: v0
  template:
    metadata:
      labels:
        k8s-app: kube-registry-upstream
        version: v0
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
        - name: registry
          image: registry:2
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
          env:
            - name: REGISTRY_HTTP_ADDR
              value: :5000
            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
              value: /var/lib/registry
          volumeMounts:
            - name: image-store
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
              protocol: TCP
      volumes:
        - name: image-store
          emptyDir: {}
Based on the Service YAML you shared, the Service is being created in the kube-system namespace, while kubectl get svc without a namespace flag only lists services in your current namespace (usually default).
You can view the Service by passing the -n option to specify the namespace:
kubectl get svc kube-registry -n kube-system