Ansible handler for restarting a Docker Swarm service

I need to restart containers of a Docker Swarm service with Ansible.
The basic definition looks like this:
# tasks/main.yml
- name: 'Create the service container'
  docker_swarm_service:
    name: 'service'
    image: 'service'
    networks:
      - name: 'internet'
      - name: 'reverse-proxy'
    publish:
      - { target_port: '80', published_port: '80', mode: 'ingress' }
      - { target_port: '443', published_port: '443', mode: 'ingress' }
      - { target_port: '8080', published_port: '8080', mode: 'ingress' }
    mounts:
      - { source: '{{ shared_dir }}', target: '/shared' }
    replicas: 1
    placement:
      constraints:
        - node.role == manager
    restart_config:
      condition: 'on-failure'
    user: null
    force_update: yes
So I thought that
# handlers/main.yml
- name: 'Restart Service'
  docker_swarm_service:
    name: some-service
    image: 'some-image'
    force_update: yes
should work as a handler, but it seems the module does not pick up all of the options from the original task when only a subset is given.
So, any advice on how to properly restart the containers of a Docker Swarm service?
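One workaround I am considering is a handler that repeats the full service definition from the task above, so that force_update: yes triggers a rolling restart of the tasks without changing anything else; this is only a sketch based on my existing options, not a confirmed solution:
# handlers/main.yml  (sketch: full definition copied from tasks/main.yml, plus force_update)
- name: 'Restart Service'
  docker_swarm_service:
    name: 'service'
    image: 'service'
    networks:
      - name: 'internet'
      - name: 'reverse-proxy'
    publish:
      - { target_port: '80', published_port: '80', mode: 'ingress' }
      - { target_port: '443', published_port: '443', mode: 'ingress' }
      - { target_port: '8080', published_port: '8080', mode: 'ingress' }
    mounts:
      - { source: '{{ shared_dir }}', target: '/shared' }
    replicas: 1
    placement:
      constraints:
        - node.role == manager
    restart_config:
      condition: 'on-failure'
    user: null
    force_update: yes  # forces new tasks even when nothing else changed
Duplicating the whole definition is ugly, so an alternative sketch would be a handler that simply shells out to the CLI equivalent, e.g. command: docker service update --force service.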

Related

How to get status of Kaniko build

After running a Kaniko build on OpenShift, I notice that Jenkins displays the status as Success as long as nothing is wrong with the code in the Jenkinsfile itself.
However, after the Success message the build is still running in OpenShift, and it can still fail depending on the Dockerfile, the build context, etc.
How can I have Jenkins wait for the status of the build in OpenShift and display its result in the Jenkins console?
Here is my code:
#!/usr/bin/env groovy
def projectProperties = [
    [$class: 'BuildDiscarderProperty', strategy: [$class: 'LogRotator', numToKeepStr: '5']]
]
node {
    withVault(configuration: [timeout: 60, vaultCredentialId: 'vault-approle-prd', vaultUrl: 'https://vault.example.redacted.com'], vaultSecrets: [[path: 'secrets/kaniko', secretValues: [[vaultKey: 'ghp_key']]]])
    {
        sh 'echo $ghp_key > output.txt'
    }
}
node {
    GHP_KEY = readFile('output.txt').trim()
}
pipeline {
    agent {
        kubernetes {
            cloud 'openshift'
            idleMinutes 15
            activeDeadlineSeconds 1800
            yaml """
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  volumes:
    - name: build-context
      emptyDir: {}
    - name: kaniko-secret
      secret:
        secretName: regcred-${NAMESPACE}
        items:
          - key: .dockerconfigjson
            path: config.json
  securityContext:
    runAsUser: 0
  serviceAccount: kaniko
  initContainers:
    - name: kaniko-init
      image: ubuntu
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args: ["--context=git://${GHP_KEY}#github.com/Redacted/dockerfiles.git#refs/heads/${BRANCH}",
             "--destination=image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG}",
             "--dockerfile=/jenkins-slave-ansible/Dockerfile",
             "--skip-tls-verify",
             "--verbosity=debug"]
      resources:
        limits:
          cpu: 1
          memory: 5Gi
        requests:
          cpu: 100m
          memory: 256Mi
      volumeMounts:
        - name: build-context
          mountPath: /kaniko/build-context
        - name: kaniko-secret
          mountPath: /kaniko/.docker
  restartPolicy: Never
"""
        }
    }
    parameters {
        choice(name: 'NAMESPACE', choices: ['cloud', 'ce-jenkins-testing'])
        string(defaultValue: 'feature/', description: 'Please enter your branch name', name: 'BRANCH')
        string(defaultValue: 'jenkins-slave-ansible', description: 'Please enter your image name (e.g.: jenkins-slave-ansible)', name: 'IMAGE_NAME')
        string(defaultValue: 'latest', description: 'Please add your tag (e.g.: 1.72.29)', name: 'IMAGE_TAG')
    }
    options {
        timestamps()
    }
    stages {
        stage('Image Test') {
            steps {
                echo "Image name: ${IMAGE_NAME}"
                echo "Image tag: ${IMAGE_TAG}"
            }
        }
    }
}

ECK Filebeat Daemonset Forwarding To Remote Cluster

I wish to forward logs from remote EKS clusters to a centralised EKS cluster hosting ECK.
Versions in use:
EKS v1.20.7
Elasticsearch v7.7.0
Kibana v7.7.0
Filebeat v7.10.0
The setup uses an AWS NLB to forward requests to an Nginx ingress, using host-based routing.
When the DNS lookup for Elasticsearch is tested from Filebeat (filebeat test output), the request is validated.
But the logs for Filebeat are telling a different story.
2021-10-05T10:39:00.202Z ERROR [publisher_pipeline_output]
pipeline/output.go:154 Failed to connect to backoff(elasticsearch(https://elasticsearch.dev.example.com:9200)):
Get "https://elasticsearch.dev.example.com:9200": Bad Request
The Filebeat agents can connect to the remote Elasticsearch via the NLB when using a curl request.
The config is below. NB: dev.example.com is the remote cluster hosting ECK.
app:
  name: "filebeat"
configmap:
  enabled: true
  filebeatConfig:
    filebeat.yml: |-
      filebeat.autodiscover:
        providers:
          - type: kubernetes
            node: ${NODE_NAME}
            hints.enabled: true
            templates:
              - config:
                  - type: container
                    paths:
                      - /var/lib/docker/containers/*/${data.kubernetes.container.id}-json.log
                    exclude_lines: ["^\\s+[\\-`('.|_]"]
                    processors:
                      - drop_event.when.not.or:
                          - contains.kubernetes.namespace: "apps-"
                          - equals.kubernetes.namespace: "cicd"
                      - decode_json_fields:
                          fields: ["message"]
                          target: ""
                          process_array: true
                          overwrite_keys: true
                      - add_fields:
                          fields:
                            kubernetes.cluster.name: dev-eks-cluster
                          target: ""
      processors:
        - add_cloud_metadata: ~
        - add_host_metadata: ~
      cloud:
        id: '${ELASTIC_CLOUD_ID}'
      cloud:
        auth: '${ELASTIC_CLOUD_AUTH}'
      output:
        elasticsearch:
          enabled: true
          hosts: "elasticsearch.dev.example.com"
          username: '${ELASTICSEARCH_USERNAME}'
          password: '${ELASTICSEARCH_PASSWORD}'
          protocol: https
          ssl:
            verification_mode: "none"
          headers:
            Host: "elasticsearch.dev.example.com"
          proxy_url: "https://example.elb.eu-west-2.amazonaws.com"
          proxy_disable: false
daemonset:
  enabled: true
  version: 7.10.0
  image:
    repository: "docker.elastic.co/beats/filebeat"
    tag: "7.10.0"
    pullPolicy: Always
  extraenvs:
    - name: ELASTICSEARCH_HOST
      value: "https://elasticsearch.dev.example.com"
    - name: ELASTICSEARCH_PORT
      value: "9200"
    - name: ELASTICSEARCH_USERNAME
      value: "elastic"
    - name: ELASTICSEARCH_PASSWORD
      value: "remote-cluster-elasticsearch-es-elastic-user-password"
  resources:
    limits:
      memory: 200Mi
    requests:
      cpu: 100m
      memory: 100Mi
clusterrolebinding:
  enabled: true
  namespace: monitoring
clusterrole:
  enabled: true
serviceaccount:
  enabled: true
  namespace: monitoring
deployment:
  enabled: false
  configmap:
    enabled: false
Any tips or suggestions on how to enable Filebeat forwarding would be much appreciated :-)
#1 Missing ports:
Even with the ports added in as suggested, Filebeat is still erroring with:
2021-10-06T08:34:41.355Z ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to backoff(elasticsearch(https://elasticsearch.dev.example.com:9200)): Get "https://elasticsearch.dev.example.com:9200": Bad Request
...using a AWS NLB to forward requests to Nginx ingress, using host based routing
How about unsetting proxy_url and proxy_disable, and instead setting hosts: ["<nlb url>:<nlb listener port>"]?
The final working config:
app:
  name: "filebeat"
configmap:
  enabled: true
  filebeatConfig:
    filebeat.yml: |-
      filebeat.autodiscover:
        providers:
          - type: kubernetes
            node: ${NODE_NAME}
            hints.enabled: true
            templates:
              - config:
                  - type: container
                    paths:
                      - /var/lib/docker/containers/*/${data.kubernetes.container.id}-json.log
                    exclude_lines: ["^\\s+[\\-`('.|_]"]
                    processors:
                      - drop_event.when.not.or:
                          - contains.kubernetes.namespace: "apps-"
                          - equals.kubernetes.namespace: "cicd"
                      - decode_json_fields:
                          fields: ["message"]
                          target: ""
                          process_array: true
                          overwrite_keys: true
                      - add_fields:
                          fields:
                            kubernetes.cluster.name: qa-eks-cluster
                          target: ""
      processors:
        - add_cloud_metadata: ~
        - add_host_metadata: ~
      cloud:
        id: '${ELASTIC_CLOUD_ID}'
      cloud:
        auth: '${ELASTIC_CLOUD_AUTH}'
      output:
        elasticsearch:
          enabled: true
          hosts: ["elasticsearch.dev.example.com:9200"]
          username: '${ELASTICSEARCH_USERNAME}'
          password: '${ELASTICSEARCH_PASSWORD}'
          protocol: https
          ssl:
            verification_mode: "none"
daemonset:
  enabled: true
  version: 7.10.0
  image:
    repository: "docker.elastic.co/beats/filebeat"
    tag: "7.10.0"
    pullPolicy: Always
  extraenvs:
    - name: ELASTICSEARCH_HOST
      value: "https://elasticsearch.dev.example.com"
    - name: ELASTICSEARCH_PORT
      value: "9200"
    - name: ELASTICSEARCH_USERNAME
      value: "elastic"
    - name: ELASTICSEARCH_PASSWORD
      value: "remote-cluster-elasticsearch-es-elastic-user-password"
  resources:
    limits:
      memory: 200Mi
    requests:
      cpu: 100m
      memory: 100Mi
clusterrolebinding:
  enabled: true
  namespace: monitoring
clusterrole:
  enabled: true
serviceaccount:
  enabled: true
  namespace: monitoring
deployment:
  enabled: false
  configmap:
    enabled: false
In addition, the following changes were needed (see the Ingress sketch after this list):
NLB: add a listener on port 9200 forwarding to the Ingress Controller for HTTPS.
SG: open up port 9200 on the EKS worker nodes.
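For reference, a rough sketch of what the host-based routing rule on the central ECK cluster can look like in this kind of setup; the Service name and port follow the usual ECK convention (<cluster-name>-es-http on 9200) and the backend-protocol annotation assumes ECK's default TLS, so treat these names as assumptions rather than an exact copy of the running config:
# Hypothetical Ingress for host-based routing to the ECK-managed Elasticsearch
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: elasticsearch
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"  # ECK serves TLS by default
spec:
  rules:
    - host: elasticsearch.dev.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: elasticsearch-es-http   # assumed: ECK cluster named "elasticsearch"
                port:
                  number: 9200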

How can I edit path.data and path.log for elasticsearch on Kubernetes?

I made an es-deploy.yml file and set the path.data and path.logs values.
After creating the pod, I checked that directory and there was nothing there.
The setting did not work!
How can I edit path.data and path.logs for Elasticsearch on Kubernetes?
I also tried using PATH_DATA
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: es
  labels:
    component: elasticsearch
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
    spec:
      serviceAccount: elasticsearch
      initContainers:
        - name: init-sysctl
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
      containers:
        - name: es
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: "CLUSTER_NAME"
              value: "myesdb"
            - name: "DISCOVERY_SERVICE"
              value: "elasticsearch"
            - name: NODE_MASTER
              value: "true"
            - name: NODE_DATA
              value: "true"
            - name: HTTP_ENABLE
              value: "true"
            - name: ES_JAVA_OPTS
              value: "-Xms256m -Xmx256m"
            - name: "path.data"
              value: "/data/elk/data"
            - name: "path.logs"
              value: "/data/elk/log"
          ports:
            - containerPort: 9200
              name: http
              protocol: TCP
            - containerPort: 9300
              name: transport
              protocol: TCP
          volumeMounts:
            - mountPath: /data/elk/
Those values, path.data and path.logs, are not environment variables; they are config options.
The default path.data for the official Elasticsearch image is /usr/share/elasticsearch/data, based on the default value of ES_HOME=/usr/share/elasticsearch/. If you don't want to use that path, you have to override it in the elasticsearch.yml config.
You will have to create a ConfigMap containing your elasticsearch.yml, with something like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-config
  namespace: es
data:
  elasticsearch.yml: |
    cluster:
      name: ${CLUSTER_NAME:elasticsearch-default}
    node:
      master: ${NODE_MASTER:true}
      data: ${NODE_DATA:true}
      name: ${NODE_NAME}
      ingest: ${NODE_INGEST:true}
      max_local_storage_nodes: ${MAX_LOCAL_STORAGE_NODES:1}
    processors: ${PROCESSORS:1}
    network.host: ${NETWORK_HOST:_site_}
    path:
      data: ${DATA_PATH:"/data/elk"}
      repo: ${REPO_LOCATIONS:[]}
    bootstrap:
      memory_lock: ${MEMORY_LOCK:false}
    http:
      enabled: ${HTTP_ENABLE:true}
      compression: true
      cors:
        enabled: true
        allow-origin: "*"
    discovery:
      zen:
        ping.unicast.hosts: ${DISCOVERY_SERVICE:elasticsearch-discovery}
        minimum_master_nodes: ${NUMBER_OF_MASTERS:1}
    xpack:
      license.self_generated.type: basic
(Note that the above ConfigMap will also allow you to use the DATA_PATH environment variable)
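With that ConfigMap in place, the data path can be driven from the Deployment through the DATA_PATH environment variable instead of the path.data entries shown earlier; a minimal sketch (the directory is just an example value):
env:
  - name: DATA_PATH
    value: "/data/elk"   # example value; must match the mountPath of your data volume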
Then mount your volumes in your Pod with something like this:
volumeMounts:
  - name: storage
    mountPath: /data/elk
  - name: config-volume
    mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
    subPath: elasticsearch.yml
volumes:
  - name: config-volume
    configMap:
      name: elasticsearch-config
  - name: storage
    <add-whatever-volume-you-are-using-for-data>

Spring Boot (JHipster) pod cannot connect to Cloud SQL

I have tried to connect from a pod (JHipster) to Google Cloud SQL, but I have not been successful.
My pod is left in CrashLoopBackOff because it cannot connect to Cloud SQL. The error:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:280)
    at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
    ...
ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [cl/databin/invoicing/folio/config/LiquibaseConfiguration.class]: Invocation of init method failed; nested exception is liquibase.exception.DatabaseException: org.postgresql.util.PSQLException: Connection to localhost:5432 refused.
My folio-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: folio
  namespace: jhipster
spec:
  replicas: 2
  selector:
    matchLabels:
      app: folio
      version: "v1"
  template:
    metadata:
      labels:
        app: folio
        version: "v1"
    spec:
      containers:
        - name: folio-app
          image: skilledboy/folio:v1
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: JHIPSTER_SECURITY_AUTHENTICATION_JWT_BASE64_SECRET
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: secret
            - name: SPRING_DATASOURCE_URL
              value: jdbc:postgresql://localhost:5432/folio
            - name: POSTGRES_DB_USER
              value: user
            - name: POSTGRES_DB_PASSWORD
              value: password1
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=invo-project-233618:us-central1:folios=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          securityContext:
            runAsUser: 2 # non-root user
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: cloudsql-oauth-credential
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: SPRING_SLEUTH_PROPAGATION_KEYS
              value: "x-request-id,x-ot-span-context"
            - name: JAVA_OPTS
              value: " -Xmx256m -Xms256m"
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1"
          ports:
            - name: http
              containerPort: 8081
          readinessProbe:
            httpGet:
              path: /folio/management/health
              port: http
            initialDelaySeconds: 20
            periodSeconds: 15
            failureThreshold: 6
          livenessProbe:
            httpGet:
              path: /folio/management/health
              port: http
            initialDelaySeconds: 120
      volumes:
        - name: cloudsql-oauth-credential
          secret:
            secretName: cloudsql-oauth-credential
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
And in the configuration of my application-prod.yml:
datasource:
  type: com.zaxxer.hikari.HikariDataSource
  url: jdbc:postgresql://127.0.0.1:5432/folio
  username: ${POSTGRES_DB_USER}
  password: ${POSTGRES_DB_PASSWORD}
What have I got wrong? Could someone give me an idea of what might be wrong? Thanks.
Your problem is that you are telling the Cloud SQL proxy to run with -credential_file=/secrets/cloudsql/credentials.json, but you haven't actually provided a file at /secrets/cloudsql/ for it to use. (The volume in your config is at /etc/ssl/certs).
It's also worth pointing out that the -credential_file flag is for using a service account key, while the -token flag is used for an OAuth token (it's unclear which you are trying to use).
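To make that concrete, here is a rough sketch of the sidecar with the credential secret mounted at the path -credential_file points to; the container, secret and flag values are copied from the question, and it assumes the secret actually contains a credentials.json key holding a service account key:
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=invo-project-233618:us-central1:folios=tcp:5432",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  volumeMounts:
    - name: cloudsql-oauth-credential   # secret must contain a key named credentials.json (assumption)
      mountPath: /secrets/cloudsql
      readOnly: true
    - name: ssl-certs
      mountPath: /etc/ssl/certs
With that in place, the app container can keep pointing at 127.0.0.1:5432, since the proxy listens on localhost inside the pod.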

Cannot access Kibana dashboard

I am trying to deploy Kibana in my Kubernetes cluster, which is running on AWS. To access the Kibana dashboard I have created an ingress that is mapped to xyz.com. Here is my Kibana deployment file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: kibana
  labels:
    component: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kibana
  template:
    metadata:
      labels:
        component: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana-oss:6.3.2
          env:
            - name: CLUSTER_NAME
              value: myesdb
            - name: SERVER_BASEPATH
              value: /
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 5601
              name: http
          readinessProbe:
            httpGet:
              path: /api/status
              port: http
            initialDelaySeconds: 20
            timeoutSeconds: 5
          volumeMounts:
            - name: config
              mountPath: /usr/share/kibana/config
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: kibana-config
Whenever I deploy it, it gives me the following error. What should my SERVER_BASEPATH be in order for it to work? I know it defaults to /app/kibana.
FATAL { ValidationError: child "server" fails because [child "basePath" fails because ["basePath" with value "/" fails to match the start with a slash, don't end with one pattern]]
at Object.exports.process (/usr/share/kibana/node_modules/joi/lib/errors.js:181:19)
at internals.Object._validateWithOptions (/usr/share/kibana/node_modules/joi/lib/any.js:651:31)
at module.exports.internals.Any.root.validate (/usr/share/kibana/node_modules/joi/lib/index.js:121:23)
at Config._commit (/usr/share/kibana/src/server/config/config.js:119:35)
at Config.set (/usr/share/kibana/src/server/config/config.js:89:10)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:62:10)
at _lodash2.default.each.child (/usr/share/kibana/src/server/config/config.js:51:14)
at arrayEach (/usr/share/kibana/node_modules/lodash/index.js:1289:13)
at Function.<anonymous> (/usr/share/kibana/node_modules/lodash/index.js:3345:13)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:50:31)
at new Config (/usr/share/kibana/src/server/config/config.js:41:10)
at Function.withDefaultSchema (/usr/share/kibana/src/server/config/config.js:34:12)
at KbnServer.exports.default (/usr/share/kibana/src/server/config/setup.js:9:37)
at KbnServer.mixin (/usr/share/kibana/src/server/kbn_server.js:136:16)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
isJoi: true,
name: 'ValidationError',
details:
[ { message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern',
path: 'server.basePath',
type: 'string.regex.name',
context: [Object] } ],
_object:
{ pkg:
{ version: '6.3.2',
branch: '6.3',
buildNum: 17307,
buildSha: '53d0c6758ac3fb38a3a1df198c1d4c87765e63f7' },
dev: { basePathProxyTarget: 5603 },
pid: { exclusive: false },
cpu: { cgroup: [Object] },
cpuacct: { cgroup: [Object] },
server: { name: 'kibana', host: '0', basePath: '/' } },
annotate: [Function] }
I followed this guide https://github.com/pires/kubernetes-elasticsearch-cluster
Any idea what might be the issue?
I believe that the example config in the official Kibana repository gives a hint about the cause of this problem; here's the server.basePath setting:
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
The fact that server.basePath cannot end in a slash could mean that Kibana interprets your setting of "/" as ending in a slash. I've not dug deeper into this, though.
This error message is interesting:
message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern'
So this error message complements the documentation: the basePath must start with a slash but must not end with one, and a bare "/" cannot satisfy both. Something like that.
I reproduced this in minikube using your Deployment manifest, but I removed the volume mount parts at the end. Changing SERVER_BASEPATH to /<SOMETHING> works fine, so basically I think you just need to set a proper basePath.
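For example, a minimal sketch of the env section with a basePath that satisfies the pattern, assuming the Ingress exposes Kibana under a /kibana prefix (the exact value is up to you; it only has to start with a slash and not end with one):
env:
  - name: CLUSTER_NAME
    value: myesdb
  - name: SERVER_BASEPATH
    value: /kibana   # hypothetical prefix; match whatever path your Ingress uses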
