Kubernetes Plugin - Declarative Pipeline - ERROR: Node is not a Kubernetes node

A declarative pipeline that defines the pod YAML inside the kubernetes section of the agent block is not working. I am using the Jenkins 2.176.x LTS version and I get the following error in the console: "ERROR: Node is not a Kubernetes node:"
I have tried all the existing solutions available on Stack Overflow.
Here is the pipeline code:
pipeline {
    agent {
        kubernetes {
            //cloud 'kubernetes'
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
    command: ['cat']
    tty: true
"""
        }
    }
    stages {
        stage('Run maven') {
            steps {
                container('maven') {
                    sh 'mvn -version'
                }
            }
        }
    }
}
I expect it to deploy the pod and run the command.

You must supply a label to the kubernetes block:
kubernetes {
    label 'mylabel'
    yaml """
    ....
    """
}
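For completeness, a minimal sketch of the full declarative block with a label added (the label value is arbitrary; everything else mirrors the pipeline from the question):
pipeline {
    agent {
        kubernetes {
            // any unique label works; 'mylabel' is just a placeholder
            label 'mylabel'
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
    command: ['cat']
    tty: true
"""
        }
    }
    stages {
        stage('Run maven') {
            steps {
                container('maven') {
                    sh 'mvn -version'
                }
            }
        }
    }
}
As far as I can tell, newer releases of the Kubernetes plugin auto-generate a label when it is omitted, but on the plugin versions contemporary with Jenkins 2.176.x a missing label produces exactly this "Node is not a Kubernetes node" error.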

Related

Authenticate to a private Google Cloud Artifact Registry from a Tekton Pipeline

I have a local setup of Tekton Pipelines with tasks to clone and build an application. The code base is Java, so the pipeline clones the code from Bitbucket and does a Gradle build.
build.gradle
repositories {
    mavenCentral()
    maven {
        url "artifactregistry://${LOCATION}-maven.pkg.dev/${PROJECT}/${REPOSITORY}"
    }
}
gradle.properties
LOCATION=australia-southeast2
PROJECT=fetebird-350310
REPOSITORY=common
I have set up a service account and passed it to the PipelineRun.
apiVersion: v1
kind: Secret
metadata:
  name: gcp-secret
  namespace: tekton-pipelines
type: kubernetes.io/opaque
stringData:
  gcs-config: |
    {
      "type": "service_account",
      "project_id": "xxxxx-350310",
      "private_key_id": "28e8xxxxx2642a8a0cd9cd5c2696",
      "private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC3zOuPTogiZ2kU\nEYsGMCl4lUO48GSLOjOH1lwkQ76zxL\n0F6cfpV/8iwao/9IOqsmKoRPUZcQjqXFMEuCYJNhoScsn4TAYMNBeATVq+2JJ/5T\n2e7YfbbPVue9R36MfTwqDeI=\n-----END PRIVATE KEY-----\n",
      "client_email": "xxxxxx#xxxx-xxxx.iam.gserviceaccount.com",
      "client_id": "xxxxxxxxxxxxxxxxx",
      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
      "token_uri": "https://oauth2.googleapis.com/token",
      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
      "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/fetebird%40fetebird-350310.iam.gserviceaccount.com"
    }
service-account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-account
secrets:
  - name: git-ssh-auth
  - name: gcp-secret
During the Gradle build, I get this exception:
> Could not resolve fete.bird:common:1.0.1.
2022-06-22T12:43:41.599990295Z > Could not get resource 'https://australia-southeast2-maven.pkg.dev/fetebird-350310/common/fete/bird/common/1.0.1/common-1.0.1.pom'.
2022-06-22T12:43:41.606630129Z > Could not GET 'https://australia-southeast2-maven.pkg.dev/fetebird-350310/common/fete/bird/common/1.0.1/common-1.0.1.pom'. Received status code 403 from server: Forbidden
implementation("fete.bird:common:1.0.1") is published on GCP artifact registry
In local development, the Gradle build is working file, because the service key is exported as environment variable export GOOGLE_APPLICATION_CREDENTIALS="file-location.json"
Pipeline Run
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: run-pipeline
  namespace: tekton-pipelines
spec:
  serviceAccountNames:
    - taskName: clone-repository
      serviceAccountName: git-service-account
    - taskName: build
      serviceAccountName: gcp-service-account
  pipelineRef:
    name: fetebird-discount
  workspaces:
    - name: shared-workspace
      persistentVolumeClaim:
        claimName: fetebird-discount-pvc
  params:
    - name: repo-url
      value: git#bitbucket.org:anandjaisy/discount.git
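For context, attaching the secret to a ServiceAccount does not by itself expose the key to Gradle inside the build step; the step still needs to see it, typically through GOOGLE_APPLICATION_CREDENTIALS as in local development. A minimal sketch of how the build Task could mount the secret and set that variable (the image, mount path, and step layout below are assumptions, not taken from the original setup):
# Hypothetical excerpt of the build Task: mount the key file from gcp-secret and point
# GOOGLE_APPLICATION_CREDENTIALS at it so the Artifact Registry integration can authenticate.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build
spec:
  steps:
    - name: gradle-build
      image: gradle:7.4-jdk17               # image is an assumption
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/gcp/gcs-config # path is an assumption; file name matches the secret key
      volumeMounts:
        - name: gcp-key
          mountPath: /var/secrets/gcp
          readOnly: true
      script: |
        gradle build
  volumes:
    - name: gcp-key
      secret:
        secretName: gcp-secret
With something along these lines, the Gradle build inside the Task should be able to pick up the credentials the same way it does locally.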

How to configure Elasticsearch Index Lifecycle Management (ILM) during installation in a YAML file

I would like to configure a default Index Lifecycle Management (ILM) policy and an index template during the Elasticsearch installation in a Kubernetes cluster, in the YAML installation file, instead of calling the ES API after installation. How can I do that?
I have Elasticsearch installed in a Kubernetes cluster from a YAML file.
The following queries work:
PUT _ilm/policy/logstash_policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
PUT _template/logstash_template
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1,
    "index.lifecycle.name": "logstash_policy"
  }
}
I would like to have the above setup in place right after installation, without making any curl queries.
I'll try to answer both of your questions.
index template
You can pass the index template with this configuration in your elasticsearch yaml. For instance:
setup.template:
  name: "<chosen template name>-%{[agent.version]}"
  pattern: "<chosen pattern name>-%{[agent.version]}-*"
Check out the ES documentation to see exactly where this setup.template belongs and you're good to go.
ilm policy
The way to make this work is to get the ilm-policy.json file that holds your ILM configuration into the pod's /usr/share/filebeat/ directory. In your YAML installation file, you can then use these lines in your config to make it work (I've added my whole ILM config):
setup.ilm:
  enabled: true
  policy_name: "<policy name>"
  rollover_alias: "<rollover alias name>"
  policy_file: "ilm-policy.json"
  pattern: "{now/d}-000001"
So, how to get the file there? The ingredients are a ConfigMap containing your ilm-policy.json, plus a volume and volumeMount in your DaemonSet configuration to mount the ConfigMap's contents into the pod.
Note: I used Helm to deploy Filebeat to an AKS cluster (v1.15), which connects to Elastic Cloud. In your case, the path to store your JSON will probably be /usr/share/elasticsearch/ilm-policy.json.
Below, you'll see a line like {{ .Files.Get <...> }}, which is a Helm templating function that reads the contents of a file. Alternatively, you can copy the file contents directly into the ConfigMap YAML, but keeping the file separate makes it more manageable in my opinion.
The ConfigMap
Make sure your ilm-policy.json is somewhere reachable by your deployment. This is how the ConfigMap can look:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ilmpolicy-config
  namespace: logging
  labels:
    k8s-app: filebeat
data:
  ilm-policy.json: |-
{{ .Files.Get "ilm-policy.json" | indent 4 }}
The DaemonSet
At the DaemonSet's volumeMounts section, append this:
- name: ilm-configmap-volume
  mountPath: /usr/share/filebeat/ilm-policy.json
  subPath: ilm-policy.json
  readOnly: true
and at the volumes section, append this:
- name: ilm-configmap-volume
  configMap:
    name: ilmpolicy-config
I'm not exactly sure the spacing is correct in the browser, but this should give a pretty good idea.
I hope this works for your setup! good luck.
I've used the answer above to get a custom policy in place for Packetbeat running with ECK.
The ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: packetbeat-ilmpolicy
  labels:
    k8s-app: packetbeat
data:
  ilm-policy.json: |-
    {
      "policy": {
        "phases": {
          "hot": {
            "min_age": "0ms",
            "actions": {
              "rollover": {
                "max_age": "1d"
              }
            }
          },
          "delete": {
            "min_age": "1d",
            "actions": {
              "delete": {}
            }
          }
        }
      }
    }
The Beat config:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: packetbeat
spec:
  type: packetbeat
  elasticsearchRef:
    name: demo
  kibanaRef:
    name: demo
  config:
    pipeline: geoip-info
    packetbeat.interfaces.device: any
    packetbeat.protocols:
      - type: dns
        ports: [53]
        include_authorities: true
        include_additionals: true
      - type: http
        ports: [80, 8000, 8080, 9200, 9300]
      - type: tls
        ports: [443, 993, 995, 5223, 8443, 8883, 9243]
    packetbeat.flows:
      timeout: 30s
      period: 30s
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata: {}
    setup.ilm:
      enabled: true
      overwrite: true
      policy_name: "packetbeat"
      policy_file: /usr/share/packetbeat/ilm-policy.json
      pattern: "{now/d}-000001"
  daemonSet:
    podTemplate:
      spec:
        terminationGracePeriodSeconds: 30
        hostNetwork: true
        automountServiceAccountToken: true # some older Beat versions depend on this setting's presence in the k8s context
        dnsPolicy: ClusterFirstWithHostNet
        tolerations:
          - operator: Exists
        containers:
          - name: packetbeat
            securityContext:
              runAsUser: 0
              capabilities:
                add:
                  - NET_ADMIN
            volumeMounts:
              - name: ilmpolicy-config
                mountPath: /usr/share/packetbeat/ilm-policy.json
                subPath: ilm-policy.json
                readOnly: true
        volumes:
          - name: ilmpolicy-config
            configMap:
              name: packetbeat-ilmpolicy
The important part in the Beat config is the volume mount, where we mount the ConfigMap into the container.
After that we can reference the file in the config with setup.ilm.policy_file.

Jenkins JNLP slave is stuck on the progress bar when it needs to run a Maven job

I have a problem with my Jenkins, which runs on K8s.
My pipeline's agent is built with two containers - JNLP alpine (the default for k8s) and Maven (3.6.0, based on the java-8:jdk-8u191-slim image).
From time to time, after starting a new build, it gets stuck with no progress in the build.
Entering the pod:
JNLP - seems to be functioning as expected.
Maven - no job is running (checked with ps -ef).
I'd appreciate your help.
I tried to pause/resume - it did not solve it.
The only way out is to abort and re-initiate the build.
Jenkins version - 2.164.1
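For reference, checks along these lines can be run with kubectl to see what the agent pod is actually doing when a build hangs (the namespace, label selector, and pod names below are placeholders):
# list the agent pods spawned by the Kubernetes plugin (label selector is a placeholder)
kubectl -n <namespace> get pods -l <agent-label-selector>
# confirm whether anything is running inside the maven container
kubectl -n <namespace> exec -it <agent-pod> -c maven -- ps -ef
# check the jnlp container logs for connection problems with the Jenkins master
kubectl -n <namespace> logs <agent-pod> -c jnlp --tail=100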
My pipeline is:
properties([[$class: 'RebuildSettings', autoRebuild: false, rebuildDisabled: false],
            parameters([string(defaultValue: 'master', description: '', name: 'branch', trim: false),
                        string(description: 'enter your namespace name', name: 'namespace', trim: false)])])

def label = "jenkins-slave-${UUID.randomUUID().toString()}"

podTemplate(label: label, namespace: "${params.namespace}", yaml: """
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    group: maven-ig
  containers:
  - name: maven
    image: accontid.dkr.ecr.us-west-2.amazonaws.com/base_images:maven-base
    command: ['cat']
    resources:
      limits:
        memory: "16Gi"
        cpu: "16"
      requests:
        memory: "10Gi"
        cpu: "10"
    tty: true
    env:
    - name: ENVIRONMENT
      value: dev
    - name: config.env
      value: k8s
    volumeMounts:
    - mountPath: "/local"
      name: nvme
  volumes:
  - name: docker
    hostPath:
      path: /var/run/docker.sock
  - name: nvme
    hostPath:
      path: /local
"""
) {
    node(label) {
        wrap([$class: 'TimestamperBuildWrapper']) {
            checkout([$class: 'GitSCM', branches: [[name: "*/${params.branch}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'credetials', url: 'https://github.com/example/example.git']]])
            wrap([$class: 'BuildUser']) {
                user = env.BUILD_USER_ID
            }
            currentBuild.description = 'Branch: ' + "${branch}" + ' | Namespace: ' + "${user}"
            stage('Stable tests') {
                container('maven') {
                    try {
                        sh "find . -type f -name '*k8s.*ml' | xargs sed -i -e 's|//mysql|//mysql.${user}.svc.cluster.local|g'"
                        sh "mvn -f pom.xml -Dconfig.env=k8s -Dwith_stripe_stub=true -Dpolicylifecycle.integ.test.url=http://testlifecycleservice-${user}.testme.io -Dmaven.test.failure.ignore=false -Dskip.surefire.tests=true -Dmaven.test.skip=false -Dskip.package.for.deployment=true -T 1C clean verify -fn -P stg-stable-it"
                    }
                    finally {
                        archive "**/target/failsafe-reports/*.xml"
                        junit '**/target/failsafe-reports/*.xml'
                    }
                }
            }
            // stage ('Delete namespace') {
            //     build job: 'delete-namspace', parameters: [
            //         [$class: 'StringParameterValue', name: 'namespace', value: "${user}"]], wait: false
            // }
        }
    }
}

Multiline string annotations for terraform kubernetes provider

I would like to set up Ambassador as an API gateway for Kubernetes using Terraform. There are several ways to configure Ambassador. The recommended way, according to the documentation, is to use Kubernetes annotations on each service that is routed and exposed outside the cluster. This is done easily with Kubernetes YAML configuration:
kind: Service
apiVersion: v1
metadata:
  name: my-service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: my_service_mapping
      prefix: /my-service/
      service: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
The getambassador.io/config field's value starting with | suggests it is a multiline string. How do I achieve the same thing using Terraform HCL?
The Terraform documentation has a section on multiline strings (heredocs) using <<EOF your multiline string EOF:
resource "kubernetes_service" "my-service" {
"metadata" {
name = "my-service"
annotations {
"getambassador.io/config" = <<EOF
apiVersion: ambassador/v0
kind: Mapping
name: my_service_mapping
prefix: /my-service/
service: my-service
EOF
}
}
"spec" {
selector {
app = "MyApp"
}
port {
protocol = "TCP"
port = 80
target_port = "9376"
}
}
}
Make sure there is no triple dash (---) from the YAML configuration; Terraform parses it incorrectly.
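As a side note (not part of the original answer), on Terraform 0.12 and later the same annotation can be generated with the built-in yamlencode() function, which sidesteps heredoc indentation and the leading --- issue entirely. A sketch using the newer provider syntax (the resource layout here is an assumption and may differ from your provider version):
resource "kubernetes_service" "my_service" {
  metadata {
    name = "my-service"
    annotations = {
      # yamlencode() renders the map as a YAML document for the annotation value
      "getambassador.io/config" = yamlencode({
        apiVersion = "ambassador/v0"
        kind       = "Mapping"
        name       = "my_service_mapping"
        prefix     = "/my-service/"
        service    = "my-service"
      })
    }
  }
  spec {
    selector = {
      app = "MyApp"
    }
    port {
      protocol    = "TCP"
      port        = 80
      target_port = 9376
    }
  }
}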

Cannot access Kibana dashboard

I am trying to deploy Kibana in my Kubernetes cluster, which is on AWS. To access the Kibana dashboard I have created an Ingress mapped to xyz.com. Here is my Kibana Deployment file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: kibana
  labels:
    component: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kibana
  template:
    metadata:
      labels:
        component: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.3.2
        env:
        - name: CLUSTER_NAME
          value: myesdb
        - name: SERVER_BASEPATH
          value: /
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 5601
          name: http
        readinessProbe:
          httpGet:
            path: /api/status
            port: http
          initialDelaySeconds: 20
          timeoutSeconds: 5
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: kibana-config
Whenever I deploy it, it gives me the following error. What should my SERVER_BASEPATH be in order for it to work? I know it defaults to /app/kibana.
FATAL { ValidationError: child "server" fails because [child "basePath" fails because ["basePath" with value "/" fails to match the start with a slash, don't end with one pattern]]
at Object.exports.process (/usr/share/kibana/node_modules/joi/lib/errors.js:181:19)
at internals.Object._validateWithOptions (/usr/share/kibana/node_modules/joi/lib/any.js:651:31)
at module.exports.internals.Any.root.validate (/usr/share/kibana/node_modules/joi/lib/index.js:121:23)
at Config._commit (/usr/share/kibana/src/server/config/config.js:119:35)
at Config.set (/usr/share/kibana/src/server/config/config.js:89:10)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:62:10)
at _lodash2.default.each.child (/usr/share/kibana/src/server/config/config.js:51:14)
at arrayEach (/usr/share/kibana/node_modules/lodash/index.js:1289:13)
at Function.<anonymous> (/usr/share/kibana/node_modules/lodash/index.js:3345:13)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:50:31)
at new Config (/usr/share/kibana/src/server/config/config.js:41:10)
at Function.withDefaultSchema (/usr/share/kibana/src/server/config/config.js:34:12)
at KbnServer.exports.default (/usr/share/kibana/src/server/config/setup.js:9:37)
at KbnServer.mixin (/usr/share/kibana/src/server/kbn_server.js:136:16)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
isJoi: true,
name: 'ValidationError',
details:
[ { message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern',
path: 'server.basePath',
type: 'string.regex.name',
context: [Object] } ],
_object:
{ pkg:
{ version: '6.3.2',
branch: '6.3',
buildNum: 17307,
buildSha: '53d0c6758ac3fb38a3a1df198c1d4c87765e63f7' },
dev: { basePathProxyTarget: 5603 },
pid: { exclusive: false },
cpu: { cgroup: [Object] },
cpuacct: { cgroup: [Object] },
server: { name: 'kibana', host: '0', basePath: '/' } },
annotate: [Function] }
I followed this guide https://github.com/pires/kubernetes-elasticsearch-cluster
Any idea what might be the issue?
I believe the example config in the official Kibana repository gives a hint about the cause of this problem; here's the server.basePath setting:
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
The fact that server.basePath cannot end in a slash could mean that Kibana interprets your value of "/" as ending in a slash. I haven't dug deeper into this, though.
This error message is interesting:
message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern'
So this error message complements the documentation: the value must start with a slash but must not end with one, which a bare "/" cannot satisfy.
I reproduced this in minikube using your Deployment manifest, but I removed the volume mount parts at the end. Changing SERVER_BASEPATH to /<SOMETHING> works fine, so basically I think you just need to set a proper base path.
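For illustration, a minimal sketch of the env entry with a valid value (the /kibana path is an assumption; it has to match whatever path prefix your Ingress routes to Kibana):
env:
- name: CLUSTER_NAME
  value: myesdb
# the base path must start with a slash and must not end with one
- name: SERVER_BASEPATH
  value: /kibana   # assumed value; align with your Ingress path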
