Installing Logstash on Kubernetes and sending logs to AWS Elasticsearch
I am trying to set up a monitoring solution for apps running on Kubernetes using AWS Elasticsearch.
To ship logs I am using Filebeat --> Logstash --> AWS Elasticsearch, and it's proving to be a nightmare so far :(
To ship logs from Logstash, I need to use the amazon_es output plugin, but I am getting different errors.
Below is the manifest file for Logstash that I am using:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: kube-system
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    # all input will come from filebeat, no local logs
    input {
      beats {
        port => 5044
      }
    }
    filter {
      if [message] =~ /^\{.*\}$/ {
        json {
          source => "message"
        }
      }
      if [ClientHost] {
        geoip {
          source => "ClientHost"
        }
      }
    }
    output {
      amazon_es {
        hosts => [ "https://vpc-eks***********.es.amazonaws.com:443" ]
        region => "eu-west-1"
        index => "devtest-logs-%{+YYYY.MM.dd}"
      }
    }
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.1.0
        ports:
        - containerPort: 5044
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/logstash/config
        - name: logstash-pipeline-volume
          mountPath: /usr/share/logstash/pipeline
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
          - key: logstash.yml
            path: logstash.yml
      - name: logstash-pipeline-volume
        configMap:
          name: logstash-configmap
          items:
          - key: logstash.conf
            path: logstash.conf
---
kind: Service
apiVersion: v1
metadata:
  name: logstash-service
  namespace: kube-system
spec:
  selector:
    app: logstash
  ports:
  - protocol: TCP
    port: 5044
    targetPort: 5044
  type: ClusterIP
The Logstash pod then fails with the following errors:
[INFO ] 2019-08-27 13:20:50.114 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"fb22e6cb-d7bb-4735-a05a-da4a2e4dabde", :path=>"/usr/share/logstash/data/uuid"}
[ERROR] 2019-08-27 13:20:55.290 [Converge PipelineAction::Create<main>] registry - Tried to load a plugin's code, but failed. {:exception=>#<LoadError: no such file to load -- logstash/outputs/amazon_es>, :path=>"logstash/outputs/amazon_es", :type=>"output", :name=>"amazon_es"}
[ERROR] 2019-08-27 13:20:55.295 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::PluginLoadingError", :message=>"Couldn't find any output plugin named 'amazon_es'. Are you sure this is correct? Trying to load the amazon_es output plugin resulted in this error: no such file to load -- logstash/outputs/amazon_es", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/plugins/registry.rb:211:in `lookup_pipeline_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/plugin.rb:137:in `lookup'", "org/logstash/plugins/PluginFactoryExt.java:200:in `plugin'", "org/logstash/plugins/PluginFactoryExt.java:137:in `buildOutput'", "org/logstash/execution/JavaBasePipelineExt.java:50:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:23:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
So far I haven't been able to find anything helpful, and the container goes into a crash loop. Any help would be great.
You need to install the amazon_es output plugin to fix this issue.
Include this in your Dockerfile:
RUN logstash-plugin install logstash-output-amazon_es
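For example, a minimal custom image could be built roughly like this (the base image tag mirrors the Deployment above; the registry/image name below is a placeholder, not something from the question):
# Dockerfile: stock Logstash image plus the amazon_es output plugin
FROM docker.elastic.co/logstash/logstash:7.1.0
RUN logstash-plugin install logstash-output-amazon_es
Then build and push it, e.g.:
docker build -t <your-registry>/logstash-amazon-es:7.1.0 .
docker push <your-registry>/logstash-amazon-es:7.1.0
and point the Deployment's image: field at that custom image instead of the stock docker.elastic.co/logstash/logstash:7.1.0, so the amazon_es output plugin is available when the pipeline loads.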
Related
Minikube: Issue running Kibana
I am currently learning Kubernetes and I am using Minikube on MacOS using Docker Desktop, I am facing issues with running Kibana which seems to be failing to start and to also enable it through my nginx ingress controller. Regarding Kibana, it doesn't move to ready stage, it seems to be stuck and restarts several times. Everything lives inside the default namespace, except for fluentd that I use a persistent volume and persistent volume claim to access the shared /data/logs folder. I have added my fluentd, kibana, es and ingress yaml configuration. And also kibana logs below. Fluentd --- apiVersion: v1 kind: ServiceAccount metadata: name: fluentd namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: fluentd rules: - apiGroups: - "" resources: - pods - namespaces verbs: - get - list - watch --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: fluentd roleRef: kind: ClusterRole name: fluentd apiGroup: rbac.authorization.k8s.io subjects: - kind: ServiceAccount name: fluentd namespace: kube-system --- apiVersion: apps/v1 kind: DaemonSet metadata: name: fluentd namespace: kube-system labels: k8s-app: fluentd-logging version: v1 spec: selector: matchLabels: k8s-app: fluentd-logging version: v1 template: metadata: labels: k8s-app: fluentd-logging version: v1 spec: containers: - name: fluentd image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch env: - name: FLUENT_ELASTICSEARCH_HOST value: "elasticsearch" - name: FLUENT_ELASTICSEARCH_PORT value: "9200" - name: FLUENT_ELASTICSEARCH_SCHEME value: "http" - name: FLUENT_UID value: "0" resources: limits: memory: 200Mi requests: cpu: 100m memory: 200Mi volumeMounts: - mountPath: /var/logs name: logs terminationGracePeriodSeconds: 30 volumes: - name: logs persistentVolumeClaim: claimName: chi-kube-pvc Kibana.yaml apiVersion: apps/v1 kind: Deployment metadata: name: kibana spec: selector: matchLabels: run: kibana template: metadata: labels: run: kibana spec: containers: - name: kibana image: docker.elastic.co/kibana/kibana:7.14.2 readinessProbe: httpGet: path: /kibana port: 5601 initialDelaySeconds: 5 periodSeconds: 10 livenessProbe: httpGet: path: /kibana port: 5601 initialDelaySeconds: 15 periodSeconds: 20 env: - name: XPACK_SECURITY_ENABLED value: "true" - name: SERVER_BASEPATH value: "/kibana" ports: - containerPort: 5601 volumeMounts: - mountPath: /var/logs name: logs volumes: - name: logs persistentVolumeClaim: claimName: chi-pvc --- apiVersion: v1 kind: Service metadata: name: kibana labels: service: kibana spec: type: NodePort selector: run: kibana ports: - port: 5601 targetPort: 5601 Kibana logs: {"type":"log","#timestamp":"2021-09-22T09:54:47+00:00","tags":["info","plugins-service"],"pid":1216,"message":"Plugin \"metricsEntities\" is disabled."} {"type":"log","#timestamp":"2021-09-22T09:54:47+00:00","tags":["warning","config","deprecation"],"pid":1216,"message":"You should set server.basePath along with server.rewriteBasePath. Starting in 7.0, Kibana will expect that all requests start with server.basePath rather than expecting you to rewrite the requests in your reverse proxy. Set server.rewriteBasePath to false to preserve the current behavior and silence this warning."} {"type":"log","#timestamp":"2021-09-22T09:54:47+00:00","tags":["warning","config","deprecation"],"pid":1216,"message":"Support for setting server.host to \"0\" in kibana.yml is deprecated and will be removed in Kibana version 8.0.0. 
Instead use \"0.0.0.0\" to bind to all interfaces."} {"type":"log","#timestamp":"2021-09-22T09:54:47+00:00","tags":["warning","config","deprecation"],"pid":1216,"message":"plugins.scanDirs is deprecated and is no longer used"} {"type":"log","#timestamp":"2021-09-22T09:54:47+00:00","tags":["warning","config","deprecation"],"pid":1216,"message":"Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.\""} {"type":"log","#timestamp":"2021-09-22T09:54:47+00:00","tags":["warning","config","deprecation"],"pid":1216,"message":"\"xpack.reporting.roles\" is deprecated. Granting reporting privilege through a \"reporting_user\" role will not be supported starting in 8.0. Please set \"xpack.reporting.roles.enabled\" to \"false\" and grant reporting privileges to users using Kibana application privileges **Management > Security > Roles**."} {"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["info","http","server","NotReady"],"pid":1216,"message":"http server running at http://0.0.0.0:5601"} {"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["info","plugins-system"],"pid":1216,"message":"Setting up [106] plugins: [translations,taskManager,licensing,globalSearch,globalSearchProviders,banners,licenseApiGuard,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,screenshotMode,telemetry,newsfeed,mapsEms,mapsLegacy,legacyExport,kibanaLegacy,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,savedObjects,visualizations,visTypeXy,visTypeVislib,visTypeTimelion,features,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,presentationUtil,timelion,home,searchprofiler,painlessLab,grokdebugger,graph,visTypeVega,management,watcher,licenseManagement,indexPatternManagement,advancedSettings,discover,discoverEnhanced,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,savedObjectsManagement,spaces,security,transform,savedObjectsTagging,lens,reporting,canvas,lists,ingestPipelines,fileUpload,maps,dataVisualizer,encryptedSavedObjects,dataEnhanced,timelines,dashboardMode,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,eventLog,actions,alerting,triggersActionsUi,stackAlerts,ruleRegistry,osquery,ml,cases,securitySolution,observability,uptime,infra,monitoring,logstash,console,apmOss,apm]"} {"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["info","plugins","taskManager"],"pid":1216,"message":"TaskManager is identified by the Kibana UUID: 4f523c36-da1f-46e2-a071-84ee400bb9e7"} {"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","security","config"],"pid":1216,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."} {"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","security","config"],"pid":1216,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."} {"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","reporting","config"],"pid":1216,"message":"Generating a random key for xpack.reporting.encryptionKey. 
To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."} {"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","reporting","config"],"pid":1216,"message":"Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 8.4.2105\n OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'."} {"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","encryptedSavedObjects"],"pid":1216,"message":"Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."} {"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","actions","actions"],"pid":1216,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."} {"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","alerting","plugins","alerting"],"pid":1216,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."} {"type":"log","#timestamp":"2021-09-22T09:54:48+00:00","tags":["info","plugins","ruleRegistry"],"pid":1216,"message":"Write is disabled, not installing assets"} {"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."} {"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"Starting saved objects migrations"} {"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] INIT -> OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT. took: 226ms."} {"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT -> OUTDATED_DOCUMENTS_SEARCH_READ. took: 192ms."} {"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] OUTDATED_DOCUMENTS_SEARCH_READ -> OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT. took: 118ms."} {"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] INIT -> OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT. took: 536ms."} {"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT -> UPDATE_TARGET_MAPPINGS. took: 86ms."} {"type":"log","#timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT -> OUTDATED_DOCUMENTS_SEARCH_READ. took: 86ms."} {"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH_READ -> OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT. 
took: 64ms."} {"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK. took: 112ms."} {"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT -> UPDATE_TARGET_MAPPINGS. took: 49ms."} {"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK. took: 29ms."} {"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> DONE. took: 106ms."} {"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] Migration completed after 840ms"} {"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> DONE. took: 104ms."} {"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] Migration completed after 869ms"} {"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","plugins-system"],"pid":1216,"message":"Starting [106] plugins: [translations,taskManager,licensing,globalSearch,globalSearchProviders,banners,licenseApiGuard,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,screenshotMode,telemetry,newsfeed,mapsEms,mapsLegacy,legacyExport,kibanaLegacy,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,savedObjects,visualizations,visTypeXy,visTypeVislib,visTypeTimelion,features,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,presentationUtil,timelion,home,searchprofiler,painlessLab,grokdebugger,graph,visTypeVega,management,watcher,licenseManagement,indexPatternManagement,advancedSettings,discover,discoverEnhanced,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,savedObjectsManagement,spaces,security,transform,savedObjectsTagging,lens,reporting,canvas,lists,ingestPipelines,fileUpload,maps,dataVisualizer,encryptedSavedObjects,dataEnhanced,timelines,dashboardMode,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,eventLog,actions,alerting,triggersActionsUi,stackAlerts,ruleRegistry,osquery,ml,cases,securitySolution,observability,uptime,infra,monitoring,logstash,console,apmOss,apm]"} {"type":"log","#timestamp":"2021-09-22T09:54:50+00:00","tags":["info","plugins","monitoring","monitoring"],"pid":1216,"message":"config sourced from: production cluster"} {"type":"log","#timestamp":"2021-09-22T09:54:51+00:00","tags":["info","http","server","Kibana"],"pid":1216,"message":"http server running at http://0.0.0.0:5601"} {"type":"log","#timestamp":"2021-09-22T09:54:52+00:00","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":1216,"message":"Starting monitoring stats collection"} {"type":"log","#timestamp":"2021-09-22T09:54:52+00:00","tags":["info","plugins","securitySolution"],"pid":1216,"message":"Dependent plugin setup complete - Starting ManifestTask"} 
{"type":"log","#timestamp":"2021-09-22T09:54:52+00:00","tags":["info","status"],"pid":1216,"message":"Kibana is now degraded"} {"type":"log","#timestamp":"2021-09-22T09:54:52+00:00","tags":["info","plugins","reporting"],"pid":1216,"message":"Browser executable: /usr/share/kibana/x-pack/plugins/reporting/chromium/headless_shell-linux_x64/headless_shell"} {"type":"log","#timestamp":"2021-09-22T09:54:52+00:00","tags":["warning","plugins","reporting"],"pid":1216,"message":"Enabling the Chromium sandbox provides an additional layer of protection."} {"type":"log","#timestamp":"2021-09-22T09:54:55+00:00","tags":["info","status"],"pid":1216,"message":"Kibana is now available (was degraded)"} {"type":"response","#timestamp":"2021-09-22T09:54:58+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":36,"contentLength":60},"message":"GET /kibana 404 36ms - 60.0B"} {"type":"response","#timestamp":"2021-09-22T09:55:08+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":25,"contentLength":60},"message":"GET /kibana 404 25ms - 60.0B"} {"type":"response","#timestamp":"2021-09-22T09:55:08+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":22,"contentLength":60},"message":"GET /kibana 404 22ms - 60.0B"} {"type":"response","#timestamp":"2021-09-22T09:55:18+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":26,"contentLength":60},"message":"GET /kibana 404 26ms - 60.0B"} {"type":"response","#timestamp":"2021-09-22T09:55:28+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":27,"contentLength":60},"message":"GET /kibana 404 27ms - 60.0B"} {"type":"response","#timestamp":"2021-09-22T09:55:28+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":22,"contentLength":60},"message":"GET /kibana 404 22ms - 60.0B"} 
{"type":"response","#timestamp":"2021-09-22T09:55:38+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":25,"contentLength":60},"message":"GET /kibana 404 25ms - 60.0B"} Elasticsearch.yaml apiVersion: apps/v1 kind: Deployment metadata: name: elasticsearch spec: selector: matchLabels: component: elasticsearch template: metadata: labels: component: elasticsearch spec: containers: - name: elasticsearch image: docker.elastic.co/elasticsearch/elasticsearch:7.14.2 env: - name: discovery.type value: single-node ports: - containerPort: 9200 protocol: TCP resources: limits: cpu: 2 memory: 4Gi requests: cpu: 500m memory: 4Gi volumeMounts: - mountPath: /var/logs name: logs volumes: - name: logs persistentVolumeClaim: claimName: chi-pvc --- apiVersion: v1 kind: Service metadata: name: elasticsearch labels: service: elasticsearch spec: type: NodePort selector: component: elasticsearch ports: - port: 9200 targetPort: 9200 Ingress-resource.yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: chi-ingress annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: /healthz pathType: Prefix backend: service: name: chi-svc port: number: 3000 - path: /kibana pathType: Prefix backend: service: name: kibana port: number: 5601 - path: /elasticsearch pathType: Prefix backend: service: name: elasticsearch port: number: 9200
I ended up sorting out the issue by having different ingresses and removing the Nginx rewrite-target annotation, I went one step ahead and created a special namespace for the logging infrastructure. Namespace.yaml apiVersion: v1 kind: Namespace metadata: name: kube-logging Persistent-volume.yaml apiVersion: v1 kind: PersistentVolume metadata: name: chi-pv labels: type: local spec: storageClassName: manual capacity: storage: 5Gi accessModes: - ReadWriteMany hostPath: path: /data/logs/ type: DirectoryOrCreate --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: chi-pvc spec: storageClassName: manual accessModes: - ReadWriteMany resources: requests: storage: 2Gi selector: matchLabels: type: local Elastic-search.yaml apiVersion: apps/v1 kind: Deployment metadata: name: elasticsearch namespace: kube-logging labels: k8s-app: elasticsearch version: v1 spec: selector: matchLabels: k8s-app: elasticsearch version: v1 template: metadata: labels: k8s-app: elasticsearch version: v1 spec: containers: - name: elasticsearch image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0 env: - name: discovery.type value: single-node - name: ES_JAVA_OPTS value: "-Xms512m -Xmx512m" ports: - containerPort: 9200 resources: limits: cpu: 500m memory: 4Gi requests: cpu: 500m memory: 4Gi --- apiVersion: v1 kind: Service metadata: name: elasticsearch namespace: kube-logging labels: k8s-app: elasticsearch version: v1 spec: type: NodePort selector: k8s-app: elasticsearch ports: - port: 9200 Fluentd.yaml --- apiVersion: v1 kind: ServiceAccount metadata: name: fluentd namespace: kube-logging --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: fluentd rules: - apiGroups: - "" resources: - pods - namespaces verbs: - get - list - watch --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: fluentd roleRef: kind: ClusterRole name: fluentd apiGroup: rbac.authorization.k8s.io subjects: - kind: ServiceAccount name: fluentd namespace: kube-logging --- apiVersion: apps/v1 kind: DaemonSet metadata: name: fluentd namespace: kube-logging labels: k8s-app: fluentd-logging version: v1 spec: selector: matchLabels: k8s-app: fluentd-logging version: v1 template: metadata: labels: k8s-app: fluentd-logging version: v1 spec: serviceAccount: fluentd serviceAccountName: fluentd # Don't need this for Minikubes # tolerations: # - key: node-role.kubernetes.io/master # effect: NoSchedule containers: - name: fluentd image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch env: - name: FLUENT_ELASTICSEARCH_HOST # <hostname>.<namespace>.svc.cluster.local value: "elasticsearch.kube-logging.svc.cluster.local" - name: FLUENT_ELASTICSEARCH_PORT value: "9200" - name: FLUENT_ELASTICSEARCH_SCHEME value: "http" - name: FLUENTD_SYSTEMD_CONF value: 'disable' - name: FLUENT_LOGSTASH_FORMAT value: "true" # # X-Pack Authentication # # ===================== # - name: FLUENT_ELASTICSEARCH_USER # value: "elastic" # - name: FLUENT_ELASTICSEARCH_PASSWORD # value: "changeme" resources: limits: memory: 200Mi requests: cpu: 100m memory: 200Mi volumeMounts: - name: varlog mountPath: /var/log # When actual pod logs in /var/lib/docker/containers, the following lines should be used. - name: dockercontainerlogdirectory mountPath: /var/lib/docker/containers readOnly: true # When actual pod logs in /var/log/pods, the following lines should be used. 
# - name: dockercontainerlogdirectory # mountPath: /var/log/pods # readOnly: true terminationGracePeriodSeconds: 30 volumes: - name: varlog hostPath: path: /var/log # When actual pod logs in /var/lib/docker/containers, the following lines should be used. - name: dockercontainerlogdirectory hostPath: path: /var/lib/docker/containers # When actual pod logs in /var/log/pods, the following lines should be used. # - name: dockercontainerlogdirectory # hostPath: # path: /var/log/pods Kibana.yaml apiVersion: apps/v1 kind: Deployment metadata: name: kibana namespace: kube-logging labels: k8s-app: kibana version: v1 spec: selector: matchLabels: k8s-app: kibana version: v1 template: metadata: labels: k8s-app: kibana version: v1 spec: containers: - name: kibana image: docker.elastic.co/kibana/kibana:7.15.0 env: - name: SERVER_NAME value: kibana - name: SERVER_BASEPATH value: /kibana - name: SERVER_REWRITEBASEPATH value: "true" # - name: XPACK_SECURITY_ENABLED # value: "true" readinessProbe: httpGet: path: /kibana/api/status port: 5601 initialDelaySeconds: 5 periodSeconds: 10 livenessProbe: httpGet: path: /kibana/api/status port: 5601 initialDelaySeconds: 15 periodSeconds: 20 ports: - containerPort: 5601 --- apiVersion: v1 kind: Service metadata: name: kibana namespace: kube-logging labels: k8s-app: kibana version: v1 spec: type: NodePort selector: k8s-app: kibana ports: - port: 5601 targetPort: 5601 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: kibana-ingress namespace: kube-logging spec: rules: - host: logging.com http: paths: - path: / pathType: Prefix backend: service: name: kibana port: number: 5601
Logstash not able to connect to Elasticsearch deployed on Kubernetes cluster
I have deployed Logstash and Elasticsearch pods on an EKS cluster. When I check the logs of the Logstash pod, they show the Elasticsearch server as unreachable, even though Elasticsearch is up and running. Please find the YAML files and the log error below.
configMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "logstash-configmap-development"
  namespace: "development"
  labels:
    app: "logstash-development"
data:
  logstash.conf: |-
    input {
      http { }
    }
    filter {
      json {
        source => "message"
      }
    }
    output {
      elasticsearch {
        hosts => ["https://my-server.com/elasticsearch-development/"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      }
      stdout { codec => rubydebug }
    }
deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "logstash-development"
  namespace: "development"
spec:
  selector:
    matchLabels:
      app: "logstash-development"
  replicas: 1
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: "logstash-development"
    spec:
      containers:
      - name: "logstash-development"
        image: "logstash:7.10.2"
        imagePullPolicy: "Always"
        env:
        - name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS"
          value: "https://my-server.com/elasticsearch-development/"
        - name: "XPACK_MONITORING_ELASTICSEARCH_URL"
          value: "https://my-server.com/elasticsearch-development/"
        - name: "SERVER_BASEPATH"
          value: "logstash-development"
        securityContext:
          privileged: true
        ports:
        - containerPort: 8080
          protocol: TCP
        volumeMounts:
        - name: "logstash-conf-volume"
          mountPath: "/usr/share/logstash/pipeline/"
      volumes:
      - name: "logstash-conf-volume"
        configMap:
          name: "logstash-configmap-development"
          items:
          - key: "logstash.conf"
            path: "logstash.conf"
      imagePullSecrets:
      - name: "logstash"
service.yaml
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "logstash-development"
  namespace: "development"
  labels:
    app: "logstash-development"
spec:
  ports:
  - port: 55770
    targetPort: 8080
  selector:
    app: "logstash-development"
Logstash pod log error:
[2021-06-09T08:22:38,708][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://my-server.com/elasticsearch-development/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://my-server.com/elasticsearch-development/][Manticore::ConnectTimeout] connect timed out"}
Note: Elasticsearch is up and running, and when I hit the Logstash URL it reports status OK. When I use the Elasticsearch cluster IP, Logstash can connect to Elasticsearch, but when I give the ingress path URL it cannot. Also, from the logs I noticed it is using an incorrect URL for Elasticsearch. My Elasticsearch URL is something like https://my-server.com/elasticsearch, but Logstash is instead looking for https://my-server.com:9200/elasticsearch. With that URL (https://my-server.com:9200/elasticsearch) Elasticsearch is not accessible, and as a result it gives a connection timeout. Can someone tell me why it is using https://my-server.com:9200/elasticsearch and not https://my-server.com/elasticsearch?
I am now able to connect Logstash with Elasticsearch. If you point Logstash at Elasticsearch using a DNS name without a port, Logstash will default to port 9200, so in my case it was building the Elasticsearch URL as https://my-server.com:9200/elasticsearch-development/. Elasticsearch was not accessible at that URL; it was only accessible at https://my-server.com/elasticsearch-development/. So I needed to add the HTTPS port (443) to the Elasticsearch URL, which let Logstash connect to Elasticsearch at https://my-server.com:443/elasticsearch-development/. Long story short: in deployment.yaml, the env variables XPACK_MONITORING_ELASTICSEARCH_HOSTS and XPACK_MONITORING_ELASTICSEARCH_URL were given the value https://my-server.com:443/elasticsearch-development/, and the same value was used in logstash.conf.
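In concrete terms (reusing the placeholder host and path from the question), the change amounts to something like this in deployment.yaml:
        env:
        - name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS"
          value: "https://my-server.com:443/elasticsearch-development/"
        - name: "XPACK_MONITORING_ELASTICSEARCH_URL"
          value: "https://my-server.com:443/elasticsearch-development/"
and this in logstash.conf:
    output {
      elasticsearch {
        hosts => ["https://my-server.com:443/elasticsearch-development/"]
      }
    }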
How to deploy filebeat to fetch nginx logs with logstash in kubernetes?
I deplyed a nginx pod as deployment kind in k8s. Now I want to deploy filebeat and logstash in the same cluster to get nginx logs. Here are my manifest files. nginx.yaml --- apiVersion: v1 kind: Namespace metadata: name: logs --- apiVersion: apps/v1 kind: Deployment metadata: namespace: logs name: nginx spec: replicas: 1 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: namespace: logs name: nginx labels: app: nginx spec: type: LoadBalancer ports: - port: 80 protocol: TCP targetPort: http selector: app: nginx filebeat.yaml --- apiVersion: v1 kind: Namespace metadata: name: logs --- apiVersion: v1 kind: ConfigMap metadata: name: filebeat-config namespace: logs labels: k8s-app: filebeat data: filebeat.yml: |- filebeat.autodiscover: providers: - type: kubernetes host: ${NODE_NAME} hints.enabled: true templates: - condition.contains: kubernetes.namespace: logs config: - module: nginx access: enabled: true var.paths: ["/var/log/nginx/access.log*"] subPath: access.log tags: ["access"] error: enabled: true var.paths: ["/var/log/nginx/error.log*"] subPath: error.log tags: ["error"] processors: - add_cloud_metadata: - add_host_metadata: cloud.id: ${ELASTIC_CLOUD_ID} cloud.auth: ${ELASTIC_CLOUD_AUTH} output.logstash: hosts: ["logstash:5044"] loadbalance: true index: filebeat --- apiVersion: apps/v1 kind: DaemonSet metadata: name: filebeat namespace: logs labels: k8s-app: filebeat spec: selector: matchLabels: k8s-app: filebeat template: metadata: labels: k8s-app: filebeat spec: serviceAccountName: filebeat terminationGracePeriodSeconds: 30 hostNetwork: true dnsPolicy: ClusterFirstWithHostNet containers: - name: filebeat image: docker.elastic.co/beats/filebeat:7.10.0 args: [ "-c", "/usr/share/filebeat/filebeat.yml", "-e", ] env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName securityContext: runAsUser: 0 resources: limits: memory: 200Mi requests: cpu: 100m memory: 100Mi volumeMounts: - name: config mountPath: /etc/filebeat.yml subPath: filebeat.yml readOnly: true - name: data mountPath: /usr/share/filebeat/data - name: varlibdockercontainers mountPath: /var/lib/docker/containers readOnly: true - name: varlog mountPath: /var/log readOnly: true volumes: - name: config configMap: defaultMode: 0600 name: filebeat-config - name: varlibdockercontainers hostPath: path: /var/lib/docker/containers - name: varlog hostPath: path: /var/log - name: data hostPath: path: /var/lib/filebeat-data type: DirectoryOrCreate logstash.yaml --- apiVersion: v1 kind: Namespace metadata: name: logs --- apiVersion: v1 kind: Service metadata: namespace: logs labels: app: logstash name: logstash spec: ports: - name: "25826" port: 25826 targetPort: 25826 - name: "5044" port: 5044 targetPort: 5044 selector: app: logstash status: loadBalancer: {} --- apiVersion: v1 kind: ConfigMap metadata: namespace: logs name: logstash-configmap data: logstash.yml: | http.host: "0.0.0.0" path.config: /usr/share/logstash/pipeline logstash.conf: | input { beats { port => 5044 host => "0.0.0.0" } } filter { if [fileset][module] == "nginx" { if [fileset][name] == "access" { grok { match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} 
%{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] } remove_field => "message" } mutate { add_field => { "read_timestamp" => "%{#timestamp}" } } date { match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ] remove_field => "[nginx][access][time]" } useragent { source => "[nginx][access][agent]" target => "[nginx][access][user_agent]" remove_field => "[nginx][access][agent]" } geoip { source => "[nginx][access][remote_ip]" target => "[nginx][access][geoip]" } } else if [fileset][name] == "error" { grok { match => { "message" => ["%{DATA:[nginx][error][time]} \[%{DATA:[nginx][error][level]}\] %{NUMBER:[nginx][error][pid]}#%{NUMBER:[nginx][error][tid]}: (\*%{NUMBER:[nginx][error][connection_id]} )?%{GREEDYDATA:[nginx][error][message]}"] } remove_field => "message" } mutate { rename => { "#timestamp" => "read_timestamp" } } date { match => [ "[nginx][error][time]", "YYYY/MM/dd H:m:s" ] remove_field => "[nginx][error][time]" } } } } output { stdout { codec => rubydebug } } --- apiVersion: apps/v1 kind: StatefulSet metadata: name: logstash-nginx-to-gcs namespace: logs spec: serviceName: "logstash" selector: matchLabels: app: logstash updateStrategy: type: RollingUpdate template: metadata: labels: app: logstash spec: terminationGracePeriodSeconds: 10 volumes: - name: logstash-service-account-credentials secret: secretName: logstash-credentials containers: - name: logstash image: docker.elastic.co/logstash/logstash:7.10.0 volumeMounts: - name: logstash-service-account-credentials mountPath: /secrets/logstash readOnly: true resources: limits: memory: 2Gi volumeClaimTemplates: - metadata: name: logstash-nginx-to-gcs-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Ki I deployed them. But I'm not sure how filebeat can fetch the nginx log in a pod. It's a DaemonSet kind. When I check logstash's logs kubectl logs -f logstash-nginx-to-gcs-0 -n logs Using bundled JDK: /usr/share/logstash/jdk OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby18310714590719622705jopenssl.jar) to field java.security.MessageDigest.provider WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties [2020-12-04T09:37:17,563][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.10.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"} [2020-12-04T09:37:17,659][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"} [2020-12-04T09:37:17,712][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"} [2020-12-04T09:37:18,748][INFO ][logstash.agent ] No persistent UUID file found. 
Generating new UUID {:uuid=>"8b873949-cf90-491a-b76a-e3e7caa7f593", :path=>"/usr/share/logstash/data/uuid"} [2020-12-04T09:37:20,050][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml [2020-12-04T09:37:20,060][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version. Please configure Metricbeat to monitor Logstash. Documentation can be found at: https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html [2020-12-04T09:37:21,056][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode. [2020-12-04T09:37:22,082][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}} [2020-12-04T09:37:22,572][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"} [2020-12-04T09:37:22,705][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"} [2020-12-04T09:37:22,791][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"} [2020-12-04T09:37:22,921][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster. 
[2020-12-04T09:37:25,467][INFO ][org.reflections.Reflections] Reflections took 262 ms to scan 1 urls, producing 23 keys and 47 values [2020-12-04T09:37:26,613][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x726c3bee run>"} [2020-12-04T09:37:28,365][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.74} [2020-12-04T09:37:28,464][INFO ][logstash.inputs.beats ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"} [2020-12-04T09:37:28,525][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"} [2020-12-04T09:37:28,819][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]} [2020-12-04T09:37:29,036][INFO ][org.logstash.beats.Server][main][0710cad67e8f47667bc7612580d5b91f691dd8262a4187d9eca8cf87229d04aa] Starting server on port: 5044 [2020-12-04T09:37:29,732][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600} [2020-12-04T09:37:52,828][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"} [2020-12-04T09:37:53,109][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"} But I don't want to connect to elasticsearch now. Just test get data.
How to get password for Kibana (ECK) APM operator in kubernetes?
I've followed this guide https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html, then applied this manifest:
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 7.5.1
  nodeSets:
  - name: default
    count: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 7.5.1
  count: 1
  elasticsearchRef:
    name: elasticsearch
---
apiVersion: apm.k8s.elastic.co/v1
kind: ApmServer
metadata:
  name: apm-server
spec:
  version: 7.5.1
  count: 1
  elasticsearchRef:
    name: "elasticsearch"
  config:
    apm-server:
      rum.enabled: true
      ilm.enabled: true
      rum.event_rate.limit: 300
      rum.event_rate.lru_size: 1000
      rum.allow_origins: ['']
      rum.library_pattern: "node_modules|bower_components|~"
      rum.exclude_from_grouping: "^/webpack"
      rum.source_mapping.enabled: true
      rum.source_mapping.cache.expiration: 5m
      rum.source_mapping.index_pattern: "apm--sourcemap*"
  http:
    service:
      spec:
        type: LoadBalancer
    tls:
      selfSignedCertificate:
        disabled: true
Then, with port-forward:
kubectl port-forward pod/kibana-kb-5bb5bf69c9-5m5r5 5601
I'm trying to log in to Kibana, but I cannot find any password for Elasticsearch or Kibana to check whether APM is working correctly. So, how do I get the password to access it? Which secret is it?
Did kubectl get secret $ELASTICSEARCH_NAME-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo, as described in the ECK docs, not work for you? Does that secret exist?
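With the manifest above, the Elasticsearch resource is named elasticsearch, so (assuming it was applied to the default namespace) the concrete command would be:
kubectl get secret elasticsearch-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
The username to pair with that password when logging in to Kibana is elastic.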
Elasticsearch high level rest client, connection reset error in Kubernetes
I am using a single node elasticsearch server and a Java application based on elasticsearch high level rest client. Both are running in a Kubernetes cluster. #Bean(destroyMethod = "close") public RestHighLevelClient client(){ RestHighLevelClient client = null; Logger.getLogger(getClass().getName()).info("Connecting to elasticsearch on host : " + host); client = new RestHighLevelClient(RestClient.builder(new HttpHost(host, port, "http"))); return client; } This is working fine until service kept idle for about 10 minutes. When trying to query elasticsearch server an exception is thrown form java service java.io.IOException: Connection reset at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:948) ~[elasticsearch-rest-client-6.4.3.jar!/:7.2.0] at org.elasticsearch.client.RestClient.performRequest(RestClient.java:227) ~[elasticsearch-rest-client-6.4.3.jar!/:7.2.0] at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1448) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0] at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1418) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0] at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1388) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0] at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:930) ~[elasticsearch-rest-high-level-client-7.2.0.jar!/:7.2.0] When I send the requests three time to the service it will again works. But after about 10 minutes of idle time service will give the same exception. I have a docker-compose setup with same images but there is no issue like this. My elasticsearch deployment apiVersion: v1 kind: Service metadata: name: elasticsearch spec: type: NodePort ports: - name: client port: 9200 targetPort: 9200 - name: nodes port: 9300 targetPort: 9300 selector: app: elasticsearch --- apiVersion: apps/v1 kind: StatefulSet metadata: name: elasticsearch spec: serviceName: elasticsearch selector: matchLabels: app: elasticsearch template: metadata: labels: app: elasticsearch spec: nodeSelector: beta.kubernetes.io/os: linux containers: - image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0 name: elasticsearch env: - name: cluster.name value: "docker-cluster" - name: 'ES_JAVA_OPTS' value: "-Xms512m -Xmx512m" - name: discovery.type value: "single-node" ports: - containerPort: 9200 - containerPort: 9300 name: mysql volumeMounts: - name: elasticsearch-persistent-storage mountPath: /usr/share/elasticsearch/data volumes: - name: elasticsearch-persistent-storage persistentVolumeClaim: claimName: elasticsearch-claim initContainers: - image: alpine:3.6 command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"] name: elasticsearch-init securityContext: privileged: true My Java Service apiVersion: v1 kind: Service metadata: name: search spec: ports: - port: 9099 targetPort: 9099 selector: app: search --- apiVersion: apps/v1 kind: Deployment metadata: name: search spec: selector: matchLabels: app: search strategy: type: Recreate replicas: 1 template: metadata: labels: app: search spec: nodeSelector: beta.kubernetes.io/os: linux containers: - image: search-service:0.0.1-SNAPSHOT name: search env: - name: ELASTIC_SEARCH_HOST value: elasticsearch - name: ELASTIC_SEARCH_PORT value: "9200" - name: ELASTIC_SEARCH_CLUSTER value: docker-cluster ports: - containerPort: 9099