YACE exporter for CloudWatch Metrics does not work - aws-lambda

I want to export CloudWatch metrics for AWS Lambda functions, so I configured the YACE exporter following this link.
I configured it as a CronJob as shown below, but I end up with an error about a missing region even though I have specified one.
Here's the CronJob:
apiVersion: v1
kind: Namespace
metadata:
  name: tools
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cloudwatch-metrics-exporter
  labels:
    app: cloudwatch-exporter
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: cloudwatch-exporter
        spec:
          volumes:
            - configMap:
                defaultMode: 420
                name: yace-lambda-config
              name: yace-lambda-config
            - secret:
                defaultMode: 420
                secretName: cloudwatch-metrics-exporter-secrets
              name: cloudwatch-credentials
          containers:
            - name: yace
              image: quay.io/invisionag/yet-another-cloudwatch-exporter:v0.16.0-alpha
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 5000
              volumeMounts:
                - name: yace-lambda-config
                  mountPath: /tmp/config.yml
                  subPath: config.yml
              resources:
                limits:
                  memory: "128Mi"
                  cpu: "500m"
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              securityContext:
                capabilities:
                  drop:
                    - ALL
                privileged: false
                runAsUser: 1000
                runAsNonRoot: true
                readOnlyRootFilesystem: false
                allowPrivilegeEscalation: false
          terminationGracePeriodSeconds: 30
          dnsPolicy: ClusterFirst
          automountServiceAccountToken: true
          shareProcessNamespace: false
          securityContext:
            runAsNonRoot: false
            seccompProfile:
              type: RuntimeDefault
          restartPolicy: OnFailure
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: yace-lambda-config
  namespace: tools
data:
  config.yml: |
    discovery:
      jobs:
        - type: lambda
          regions:
            - eu-central-1
          enableMetricData: true
          metrics:
            - name: Duration
              statistics: [ Sum, Maximum, Minimum, Average ]
              period: 300
              length: 3600
            - name: Invocations
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: Errors
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: Throttles
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: DeadLetterErrors
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: DestinationDeliveryFailures
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: ProvisionedConcurrencyInvocations
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: ProvisionedConcurrencySpilloverInvocations
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: IteratorAge
              statistics: [ Average, Maximum ]
              period: 300
              length: 3600
            - name: ConcurrentExecutions
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: ProvisionedConcurrentExecutions
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: ProvisionedConcurrencyUtilization
              statistics:
                - Maximum
              period: 300
              length: 3600
            - name: UnreservedConcurrentExecutions
              statistics: [ Sum ]
              period: 300
              length: 3600
Error:
2022/12/16 08:54:28 ERROR: unable to resolve endpoint for service "tagging", region "", err: UnknownEndpointError: could not resolve endpoint
partition: "aws", service: "tagging", region: "", known: [us-east-2 us-west-1 ap-northeast-1 ap-southeast-1 eu-west-1 eu-west-2 ap-southeast-2 eu-west-3 us-west-2 sa-east-1 us-east-1 ap-east-1 ap-northeast-2 ca-central-1 me-south-1 ap-south-1 eu-central-1 eu-north-1]
What am I doing wrong?
Update: it turned out to be an indentation problem in my YAML; the working nesting is shown below. I resolved that and now have a different issue.
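For reference, the fix is to nest regions and metrics under the job entry itself (trimmed to a single metric, same values as in the ConfigMap above):
discovery:
  jobs:
    - type: lambda
      regions:
        - eu-central-1
      enableMetricData: true
      metrics:
        - name: Duration
          statistics: [ Sum, Maximum, Minimum, Average ]
          period: 300
          length: 3600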
Here's the error:
{"level":"warning","msg":"NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors","time":"2022-12-16T16:35:47Z"}
{"level":"info","msg":"Couldn't describe resources for region eu-central-1: NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors\n","time":"2022-12-16T16:35:47Z"}
I have provided AWS credentials via environment variables by exporting the necessary AWS credentials in my shell and then running kubectl apply -f config.yaml.
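Note that variables exported in the shell that runs kubectl apply only affect that shell; they are never passed into the pod, which is why the SDK inside the container still finds no credential provider. Also, the cloudwatch-credentials volume is declared in the CronJob but never mounted into the container, so nothing inside the pod currently sees those credentials. One way to get them there is to source environment variables from the secret that is already defined. A minimal sketch of the container's env block, assuming the secret cloudwatch-metrics-exporter-secrets holds keys named aws_access_key_id and aws_secret_access_key (the key names are assumptions; an IAM role such as IRSA would also work):
containers:
  - name: yace
    # ... image, ports, volumeMounts as above ...
    env:
      - name: AWS_REGION
        value: eu-central-1                 # optional; the job also sets regions in config.yml
      - name: AWS_ACCESS_KEY_ID
        valueFrom:
          secretKeyRef:
            name: cloudwatch-metrics-exporter-secrets
            key: aws_access_key_id          # assumption: adjust to the real key name
      - name: AWS_SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: cloudwatch-metrics-exporter-secrets
            key: aws_secret_access_key      # assumption: adjust to the real key name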

Related

Getting "event.agent_id_status auth_metadata_missing" error while sending logs from standalone elasticsearch agent

I am trying to send the logs from my AKS cluster into Elasticsearch. The log I am getting in Kibana is "event.agent_id_status auth_metadata_missing", even though all the volume mounts are done correctly.
Here's my ConfigMap for the standalone Elastic Agent:
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-node-datastreams
  namespace: elastic
  labels:
    k8s-app: elastic-agent-standalone
data:
  agent.yml: |-
    outputs:
      default:
        type: elasticsearch
        protocol: https
        ssl.verification_mode: 'none'
        allow_older_versions: true
        hosts:
          - >-
            ${ES_HOST}
        username: ${ES_USERNAME}
        password: ${ES_PASSWORD}
        indices:
          - index: "journalbeat-alias"
            when:
              and:
                - has_fields: ['fields.k8s.component']
                - equals:
                    fields.k8s.component: "journal"
          - index: "test-audit"
            when:
              and:
                - has_fields: ['fields.k8s.component']
                - equals:
                    fields.k8s.component: "audit"
          - index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"
            when.not:
              has_fields: ['kubernetes.namespace']
    agent:
      monitoring:
        enabled: true
        use_output: default
        logs: true
        metrics: false
    providers.kubernetes:
      node: ${NODE_NAME}
      scope: node
      #Uncomment to enable hints' support
      #hints.enabled: true
    inputs:
      - name: system-logs
        type: logfile
        use_output: default
        meta:
          package:
            name: system
            version: 0.10.7
        data_stream:
          namespace: filebeat
        streams:
          - data_stream:
              dataset: audit
            type: logfile
            paths:
              - /var/log/*.log
            exclude_files:
              - .gz$
            multiline:
              pattern: ^\s
              match: after
            processors:
              - add_fields:
                  target: ''
                  fields:
                    ecs.version: 1.12.0
          - data_stream:
              dataset: journald
            type: logfile
            paths:
              - /var/lib/host/log/journal
            exclude_files:
              - .gz$
            multiline:
              pattern: ^\s
              match: after
            processors:
              - add_fields:
                  target: ''
                  fields:
                    ecs.version: 1.12.0
          - data_stream:
              dataset: container
            type: logfile
            paths:
              - /var/log/containers/*.log
            exclude_files:
              - .gz$
            multiline:
              pattern: ^\s
              match: after
            processors:
              - add_fields:
                  target: ''
                  fields:
                    ecs.version: 1.12.0
And here's my DaemonSet file:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: elastic-agent-standalone
  namespace: elastic
  labels:
    app: elastic-agent-standalone
spec:
  selector:
    matchLabels:
      app: elastic-agent-standalone
  template:
    metadata:
      labels:
        app: elastic-agent-standalone
    spec:
      # Tolerations are needed to run Elastic Agent on Kubernetes control-plane nodes.
      # Agents running on control-plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: elastic-agent-standalone
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: elastic-agent-standalone
          image: docker.elastic.co/beats/elastic-agent:8.4.3
          args: [
            "-c", "/etc/elastic-agent/agent.yml",
            "-e",
          ]
          env:
            # The basic authentication username used to connect to Elasticsearch
            # This user needs the privileges required to publish events to Elasticsearch.
            - name: FLEET_ENROLL_INSECURE
              value: "1"
            - name: ES_USERNAME
              value: <CORRECT USER>
            # The basic authentication password used to connect to Elasticsearch
            - name: ES_PASSWORD
              value: <MY CORRECT PASSWORD>
            # The Elasticsearch host to communicate with
            - name: ES_HOST
              value: <CORRECT HOST>
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: STATE_PATH
              value: "/etc/elastic-agent"
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 700Mi
            requests:
              cpu: 100m
              memory: 400Mi
          volumeMounts:
            - name: datastreams
              mountPath: /etc/elastic-agent/agent.yml
              readOnly: true
              subPath: agent.yml
            - name: proc
              mountPath: /hostfs/proc
              readOnly: true
            - name: cgroup
              mountPath: /hostfs/sys/fs/cgroup
              readOnly: true
            - name: varlogcontainers
              mountPath: /var/log/containers
              readOnly: true
            - name: varlogpods
              mountPath: /var/log/pods
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: kubenodevarlogs
              mountPath: /var/lib/host/log
              readOnly: true
            # - name: varlog
            #   mountPath: /var/log
            #   readOnly: true
            - name: etc-full
              mountPath: /hostfs/etc
              readOnly: true
            - name: var-lib
              mountPath: /hostfs/var/lib
              readOnly: true
      volumes:
        - name: datastreams
          configMap:
            defaultMode: 0640
            name: agent-node-datastreams
        - name: proc
          hostPath:
            path: /proc
        - name: cgroup
          hostPath:
            path: /sys/fs/cgroup
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: kubenodevarlogs
          hostPath:
            path: /var/log
        # The following volumes are needed for Cloud Security Posture integration (cloudbeat)
        # If you are not using this integration, then these volumes and the corresponding
        # mounts can be removed.
        - name: etc-full
          hostPath:
            path: /etc
        - name: var-lib
          hostPath:
            path: /var/lib
And this is the log that I get in Kibana for the path /var/log/containers; it's the same for all the other inputs, and no data streams are being generated for any of the paths.

Filebeat not harvesting logs with autodiscover

I'm having an issue with Filebeat on an environment that suddenly stopped sending logs to Elasticsearch. We have the same setup on both environments, but on this one it just stopped. Filebeat, Elasticsearch and Kibana are all version 7.15.0, all Helm deployments.
/var/lib/docker/containers/ is empty in the Filebeat container, but it is also empty in the other, working environment.
Filebeat logs:
2022-07-02T16:56:12.731Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:12.731Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "31e0e6d8-e599-453a-a8d0-69afdf5b52d6"}
2022-07-02T16:56:12.731Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "31e0e6d8-e599-453a-a8d0-69afdf5b52d6"}
2022-07-02T16:56:12.976Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:12.976Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "89b55ab8-8fb3-49c4-9d9e-2372c956cf49"}
2022-07-02T16:56:12.977Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "89b55ab8-8fb3-49c4-9d9e-2372c956cf49"}
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "ac5b2c6d-189a-420a-bb00-f9d9e6d5aef7"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "be885467-72ea-44c1-bdce-cdd91fb03e79"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "1fa30d44-77e8-42ec-8d22-55abd4f8f60b"}
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "ac5b2c6d-189a-420a-bb00-f9d9e6d5aef7"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "1fa30d44-77e8-42ec-8d22-55abd4f8f60b"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "be885467-72ea-44c1-bdce-cdd91fb03e79"}
Inside the filebeat container:
ls data/registry/filebeat
log.json
meta.json
cat logs/filebeat
2022-07-02T17:37:30.639Z INFO instance/beat.go:665 Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2022-07-02T17:37:30.640Z DEBUG [beat] instance/beat.go:723 Beat metadata path: /usr/share/filebeat/data/meta.json
2022-07-02T17:37:30.640Z INFO instance/beat.go:673 Beat ID: b0e19db9-df61-4eec-9a95-1cd5ef653718
2022-07-02T17:37:30.640Z INFO [index-management] idxmgmt/std.go:184 Set output.elasticsearch.index to 'filebeat-7.15.0' as ILM is enabled.
2022-07-02T17:37:30.641Z INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://elasticsearch.logging:9200
2022-07-02T17:37:30.740Z DEBUG [esclientleg] eslegclient/connection.go:249 ES Ping(url=http://elasticsearch.logging:9200)
2022-07-02T17:37:30.742Z DEBUG [esclientleg] transport/logging.go:41 Completed dialing successfully {"network": "tcp", "address": "elasticsearch.logging:9200"}
2022-07-02T17:37:30.743Z DEBUG [esclientleg] eslegclient/connection.go:272 Ping status code: 200
2022-07-02T17:37:30.743Z INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2022-07-02T17:37:30.743Z DEBUG [esclientleg] eslegclient/connection.go:328 GET http://elasticsearch.logging:9200/_license?human=false <nil>
cat data/meta.json
{"uuid":"b0e19db9-df61-4eec-9a95-1cd5ef653718","first_start":"2022-05-29T00:10:26.137238912Z"}
ls data/registry/filebeat
log.json
meta.json
cat data/registry/filebeat/log.json
cat data/registry/filebeat/meta.json
{"version":"1"}
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/containerID: 1e66a1c066aa10de73834586c605c7adf71b2c652498b0de7a9d94b44633f919
cni.projectcalico.org/podIP: 10.0.4.120/32
cni.projectcalico.org/podIPs: 10.0.4.120/32
co.elastic.logs/enabled: "false"
configChecksum: 9e8011c4cd9f9bf36cafe98af8e7862345164b1c11f062f4ab9a67492248076
kubectl.kubernetes.io/restartedAt: "2022-04-14T16:22:07+03:00"
creationTimestamp: "2022-07-01T13:53:29Z"
generateName: filebeat-filebeat-
labels:
app: filebeat-filebeat
chart: filebeat-7.15.0
controller-revision-hash: 79bdd78b56
heritage: Helm
pod-template-generation: "21"
release: filebeat
name: filebeat-filebeat-95l2d
namespace: logging
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: DaemonSet
name: filebeat-filebeat
uid: 343f6f76-ffde-11e9-bf3f-42010a9c01ac
resourceVersion: "582889515"
uid: 916d7dc9-f4b2-498a-9963-91213f568560
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchFields:
- key: metadata.name
operator: In
values:
- ..mynode
containers:
- args:
- -e
- -E
- http.enabled=true
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: ELASTICSEARCH_HOSTS
value: elasticsearch.logging:9200
image: docker.elastic.co/beats/filebeat:7.15.0
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
curl --fail 127.0.0.1:5066
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: filebeat
readinessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
filebeat test output
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 50m
memory: 50Mi
securityContext:
privileged: false
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /usr/share/filebeat/filebeat.yml
name: filebeat-config
readOnly: true
subPath: filebeat.yml
- mountPath: /usr/share/filebeat/my_ilm_policy.json
name: filebeat-config
readOnly: true
subPath: my_ilm_policy.json
- mountPath: /usr/share/filebeat/data
name: data
- mountPath: /var/lib/docker/containers
name: varlibdockercontainers
readOnly: true
- mountPath: /var/log
name: varlog
readOnly: true
- mountPath: /var/run/docker.sock
name: varrundockersock
readOnly: true
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-2gvbn
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ..mynode
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: filebeat-filebeat
serviceAccountName: filebeat-filebeat
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/disk-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/memory-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/pid-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/unschedulable
operator: Exists
volumes:
- configMap:
defaultMode: 384
name: filebeat-filebeat-daemonset-config
name: filebeat-config
- hostPath:
path: /var/lib/filebeat-filebeat-logging-data
type: DirectoryOrCreate
name: data
- hostPath:
path: /var/lib/docker/containers
type: ""
name: varlibdockercontainers
- hostPath:
path: /var/log
type: ""
name: varlog
- hostPath:
path: /var/run/docker.sock
type: ""
name: varrundockersock
- name: kube-api-access-3axln
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
It actually worked with another configuration, posted on the elastic.co website:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*-${data.container.id}.log # CRI path
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html
I'm still not sure why this happened suddenly, but the reason might be a container runtime change for Kubernetes on the node; I don't have access to check that.
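That would fit the symptoms: under Docker, container logs live in /var/lib/docker/containers, while under containerd/CRI-O they are written to /var/log/pods and symlinked from /var/log/containers, which is the path the working config follows. For contrast, a docker-era autodiscover default looked roughly like the sketch below (an assumption for illustration, not this environment's actual previous config); its docker input reads the Docker JSON log files, which simply are not there after a runtime change, so nothing gets harvested:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      hints.default_config:
        # Deprecated docker input: reads /var/lib/docker/containers/<id>/*.log,
        # which stays empty on containerd/CRI-O nodes.
        type: docker
        containers.ids:
          - "${data.kubernetes.container.id}"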

How to add log.path and data.path to elastic.yaml for Kubernetes to deploy with operator

Good day! I am trying to deploy Elasticsearch with the operator in Kubernetes using this code:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elastic-cluster
  namespace: elastic-system
spec:
  version: 7.16.3
  nodeSets:
    - name: data
      count: 1
      config:
        node.master: false
        node.data: true
        node.ingest: false
        xpack.ml.enabled: false
        node.store.allow_mmap: false
      podTemplate:
        spec:
          containers:
            - name: elasticsearch
              env:
                # - name: ES_JAVA_OPTS
                #   value: -Xms1g -Xmx1g
                - name: READINESS_PROBE_TIMEOUT
                  value: "30"
              # resources:
              #   requests:
              #     memory: 1Gi
              #     cpu: 1
              #   limits:
              #     memory: 1Gi
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 2G
But I don't know where in this file to mount the log.path and data.path, or how to do it. Any help?
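For reference, ECK already mounts the PVC created from the elasticsearch-data claim template at /usr/share/elasticsearch/data, which is the default path.data, so the data path usually needs no extra wiring. path.data and path.logs are ordinary Elasticsearch settings that can be placed in the nodeSet's config block, with a matching volume and volumeMount added through podTemplate if they should point somewhere else. A rough sketch under those assumptions; the custom log directory and the es-logs volume name are invented for illustration, not taken from the question:
spec:
  version: 7.16.3
  nodeSets:
    - name: data
      count: 1
      config:
        node.store.allow_mmap: false
        # path.data stays at the default /usr/share/elasticsearch/data,
        # which ECK backs with the elasticsearch-data claim below.
        path.logs: /usr/share/elasticsearch/custom-logs   # assumption: custom log location
      podTemplate:
        spec:
          containers:
            - name: elasticsearch
              volumeMounts:
                - name: es-logs                           # assumption: extra volume for logs
                  mountPath: /usr/share/elasticsearch/custom-logs
          volumes:
            - name: es-logs
              emptyDir: {}
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 2G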

Elasticsearch pods failing at readiness probe

Elasticsearch pod is not becoming active.
logging-es-data-master-ilmz5zyt-3-deploy 1/1 Running 0 5m
logging-es-data-master-ilmz5zyt-3-qxkml 1/2 Running 0 5m
And the events are:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m default-scheduler Successfully assigned logging-es-data-master-ilmz5zyt-3-qxkml to digi-srv-pp-01
Normal Pulled 5m kubelet, digi-srv-pp-01 Container image "docker.io/openshift/origin-logging-elasticsearch:v3.10" already present on machine
Normal Created 5m kubelet, digi-srv-pp-01 Created container
Normal Started 5m kubelet, digi-srv-pp-01 Started container
Normal Pulled 5m kubelet, digi-srv-pp-01 Container image "docker.io/openshift/oauth-proxy:v1.0.0" already present on machine
Normal Created 5m kubelet, digi-srv-pp-01 Created container
Normal Started 5m kubelet, digi-srv-pp-01 Started container
Warning Unhealthy 13s (x55 over 4m) kubelet, digi-srv-pp-01 Readiness probe failed: Elasticsearch node is not ready to accept HTTP requests yet [response code: 000]
The deployment config is:
# oc export dc/logging-es-data-master-ilmz5zyt -o yaml
Command "export" is deprecated, use the oc get --export
apiVersion: v1
kind: DeploymentConfig
metadata:
  creationTimestamp: null
  generation: 5
  labels:
    component: es
    deployment: logging-es-data-master-ilmz5zyt
    logging-infra: elasticsearch
    provider: openshift
  name: logging-es-data-master-ilmz5zyt
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    component: es
    deployment: logging-es-data-master-ilmz5zyt
    logging-infra: elasticsearch
    provider: openshift
  strategy:
    activeDeadlineSeconds: 21600
    recreateParams:
      timeoutSeconds: 600
    resources: {}
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        component: es
        deployment: logging-es-data-master-ilmz5zyt
        logging-infra: elasticsearch
        provider: openshift
      name: logging-es-data-master-ilmz5zyt
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: logging-infra
                      operator: In
                      values:
                        - elasticsearch
                topologyKey: kubernetes.io/hostname
              weight: 100
      containers:
        - env:
            - name: DC_NAME
              value: logging-es-data-master-ilmz5zyt
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: KUBERNETES_TRUST_CERTIFICATES
              value: "true"
            - name: SERVICE_DNS
              value: logging-es-cluster
            - name: CLUSTER_NAME
              value: logging-es
            - name: INSTANCE_RAM
              value: 12Gi
            - name: HEAP_DUMP_LOCATION
              value: /elasticsearch/persistent/heapdump.hprof
            - name: NODE_QUORUM
              value: "1"
            - name: RECOVER_EXPECTED_NODES
              value: "1"
            - name: RECOVER_AFTER_TIME
              value: 5m
            - name: READINESS_PROBE_TIMEOUT
              value: "30"
            - name: POD_LABEL
              value: component=es
            - name: IS_MASTER
              value: "true"
            - name: HAS_DATA
              value: "true"
            - name: PROMETHEUS_USER
              value: system:serviceaccount:openshift-metrics:prometheus
          image: docker.io/openshift/origin-logging-elasticsearch:v3.10
          imagePullPolicy: IfNotPresent
          name: elasticsearch
          ports:
            - containerPort: 9200
              name: restapi
              protocol: TCP
            - containerPort: 9300
              name: cluster
              protocol: TCP
          readinessProbe:
            exec:
              command:
                - /usr/share/java/elasticsearch/probe/readiness.sh
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 120
          resources:
            limits:
              memory: 12Gi
            requests:
              cpu: "1"
              memory: 12Gi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /etc/elasticsearch/secret
              name: elasticsearch
              readOnly: true
            - mountPath: /usr/share/java/elasticsearch/config
              name: elasticsearch-config
              readOnly: true
            - mountPath: /elasticsearch/persistent
              name: elasticsearch-storage
        - args:
            - --upstream-ca=/etc/elasticsearch/secret/admin-ca
            - --https-address=:4443
            - -provider=openshift
            - -client-id=system:serviceaccount:openshift-logging:aggregated-logging-elasticsearch
            - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
            - -cookie-secret=endzaVczSWMzb0NoNlVtVw==
            - -basic-auth-password=NXd9xTjg4npjIM0E
            - -upstream=https://localhost:9200
            - '-openshift-sar={"namespace": "openshift-logging", "verb": "view", "resource":
              "prometheus", "group": "metrics.openshift.io"}'
            - '-openshift-delegate-urls={"/": {"resource": "prometheus", "verb": "view",
              "group": "metrics.openshift.io", "namespace": "openshift-logging"}}'
            - --tls-cert=/etc/tls/private/tls.crt
            - --tls-key=/etc/tls/private/tls.key
            - -pass-access-token
            - -pass-user-headers
          image: docker.io/openshift/oauth-proxy:v1.0.0
          imagePullPolicy: IfNotPresent
          name: proxy
          ports:
            - containerPort: 4443
              name: proxy
              protocol: TCP
          resources:
            limits:
              memory: 64Mi
            requests:
              cpu: 100m
              memory: 64Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /etc/tls/private
              name: proxy-tls
              readOnly: true
            - mountPath: /etc/elasticsearch/secret
              name: elasticsearch
              readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        node-role.kubernetes.io/compute: "true"
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        supplementalGroups:
          - 65534
      serviceAccount: aggregated-logging-elasticsearch
      serviceAccountName: aggregated-logging-elasticsearch
      terminationGracePeriodSeconds: 30
      volumes:
        - name: proxy-tls
          secret:
            defaultMode: 420
            secretName: prometheus-tls
        - name: elasticsearch
          secret:
            defaultMode: 420
            secretName: logging-elasticsearch
        - configMap:
            defaultMode: 420
            name: logging-elasticsearch
          name: elasticsearch-config
        - name: elasticsearch-storage
          persistentVolumeClaim:
            claimName: logging-es-0
  test: false
  triggers: []
status:
  availableReplicas: 0
  latestVersion: 0
  observedGeneration: 0
  replicas: 0
  unavailableReplicas: 0
  updatedReplicas: 0

ECK - how to delete nodeset?

Here is my Elasticsearch yaml:
---
# Source: elastic/templates/elastic.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: ichat-els-deployment
spec:
  # updateStrategy:
  #   changeBudget:
  #     maxSurge: -1
  #     maxUnavailable: -1
  version: 7.11.1
  auth:
    roles:
      - secretName: elastic-roles-secret
    fileRealm:
      - secretName: elastic-filerealm-secret
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            storageClassName: ""
            accessModes:
              - ReadWriteMany
            resources:
              requests:
                storage: 10Gi
            volumeName: elasticsearch-azure-pv
      podTemplate:
        spec:
          initContainers:
            - name: install-plugins
              command:
                - sh
                - -c
                - |
                  bin/elasticsearch-plugin install --batch ingest-attachment
    - name: default2
      count: 0
      config:
        node.store.allow_mmap: false
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
After creating this, I have the 2 nodesets running, kubectl get pods:
NAME READY STATUS RESTARTS AGE
elastic-operator-0 1/1 Running 8 7d23h
ichat-els-deployment-es-default-0 1/1 Running 0 24m
ichat-els-deployment-es-default2-0 1/1 Running 0 26m
Everything is working fine, but now I want to delete the default2 nodeset, how can I do that?
I tried removing the nodeset from the manifest and reapplying it, but nothing happened:
---
# Source: elastic/templates/elastic.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: ichat-els-deployment
spec:
  # updateStrategy:
  #   changeBudget:
  #     maxSurge: -1
  #     maxUnavailable: -1
  version: 7.11.1
  auth:
    roles:
      - secretName: elastic-roles-secret
    fileRealm:
      - secretName: elastic-filerealm-secret
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            storageClassName: ""
            accessModes:
              - ReadWriteMany
            resources:
              requests:
                storage: 10Gi
            volumeName: elasticsearch-azure-pv
      podTemplate:
        spec:
          initContainers:
            - name: install-plugins
              command:
                - sh
                - -c
                - |
                  bin/elasticsearch-plugin install --batch ingest-attachment
The pods and shards are still running and there are no errors in the Elastic operator. What is the correct way to remove a nodeset? Thanks.
I solved that issue by deleting the Elasticsearch kind object:
kubectl delete elasticsearch <elasticsearch_object_name>
As a result, all objects related to the Elasticsearch CRD object (StatefulSets -> Pods, Secrets, PVCs -> PVs, etc.) were deleted, but I didn't care about data loss here.
