Micrometer Elasticsearch: convert JSON name to key

I have Micrometer + Spring Actuator in my application, and the metrics show up in Kibana like this:
"name": "jvm_threads_daemon",
"type": "gauge",
"cluster": "cluster_app",
"kubernetes_namespace": "test",
"kubernetes_pod_name": "fargate-ip-10-121-31-148.ec2.internal",
"value": 18
But I would like to convert to this:
"jvm_threads_daemon": "18",
"type": "gauge",
"cluster": "cluster_app",
"kubernetes_namespace": "test",
"kubernetes_pod_name": "fargate-ip-10-121-31-148.ec2.internal",
"value": 18
So note that I need the metric name as the field key, not inside "name". Is this possible in Micrometer?
See my application.yml:
management:
  endpoints:
    web:
      exposure:
        include: "*"
      base-path: /actuator
      path-mapping.health: health
  endpoint:
    health:
      probes:
        enabled: true
      show-details: always
  metrics:
    export:
      elastic:
        host: "https://elastico.url:443"
        index: "metricbeat-k8s-apps"
        user-name: "elastic"
        password: "xxxx"
        step: 30s
        connect-timeout: 10s
    tags:
      cluster: "cluster_app"
      kubernetes.pod.name: ${NODE_NAME}
      kubernetes.namespace: ${POD_NAMESPACE}

You can extend/implement your own ElasticMeterRegistry and define your own format.
One thing to consider though: if you make the change you want, how will you be able to query/filter/aggregate metrics in Kibana?
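As far as I know, the stock registry's document format is not configurable to that extent, and the overridable internals of ElasticMeterRegistry differ between Micrometer versions, so the sketch below is built on the public StepMeterRegistry API instead of the library's documented extension points. It is an assumption-laden illustration of the shape of such a registry, emitting the metric name as the JSON key; the index name, HTTP client and bulk request to Elasticsearch are deliberately left out:

// Sketch only: a custom registry that writes each meter as
// {"<metric name>": <value>, "type": "...", "<tag>": "<value>", ...}
// instead of the stock {"name": "...", "value": ...} layout.
// It extends Micrometer's public StepMeterRegistry (not ElasticMeterRegistry internals)
// and prints the document instead of shipping it to Elasticsearch.
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.step.StepMeterRegistry;
import io.micrometer.core.instrument.step.StepRegistryConfig;

import java.util.concurrent.TimeUnit;
import java.util.stream.StreamSupport;

public class FlatNameElasticRegistry extends StepMeterRegistry {

    public FlatNameElasticRegistry(StepRegistryConfig config, Clock clock) {
        super(config, clock);
    }

    @Override
    protected void publish() {
        for (Meter meter : getMeters()) {
            String name = getConventionName(meter.getId());
            // take the first measurement only; enough for gauges and counters,
            // a real implementation would handle multi-measurement meters (timers, summaries)
            double value = StreamSupport.stream(meter.measure().spliterator(), false)
                    .mapToDouble(m -> m.getValue())
                    .findFirst()
                    .orElse(Double.NaN);

            StringBuilder doc = new StringBuilder("{");
            doc.append('"').append(name).append("\":").append(value); // metric name as the key
            doc.append(",\"type\":\"").append(meter.getId().getType().name().toLowerCase()).append('"');
            for (Tag tag : getConventionTags(meter.getId())) {
                doc.append(",\"").append(tag.getKey()).append("\":\"").append(tag.getValue()).append('"');
            }
            doc.append('}');

            System.out.println(doc); // replace with a bulk-index request to Elasticsearch
        }
    }

    @Override
    protected TimeUnit getBaseTimeUnit() {
        return TimeUnit.MILLISECONDS;
    }
}

You would register something like this as a @Bean in place of the auto-configured elastic export (for example by setting management.metrics.export.elastic.enabled: false) and fill in the bulk request; the ElasticMeterRegistry source is a useful reference for that part. The caution above still applies: with a per-metric field name, every metric becomes its own field in the index mapping, which makes generic Kibana queries and aggregations harder.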

Related

Creating a dynamic index from kafka-filebeat

Software versions: ES-OSS-7.4.2, Filebeat-OSS-7.4.2.
Below are my filebeat.yml and grok pipeline:
filebeat.inputs:
- type: kafka
  hosts:
    - test-bigdata-kafka0003:9092
    - test-bigdata-kafka0002:9092
    - test-bigdata-kafka0001:9092
  topics: ["bigdata-k8s-test-serverlog"]
  group_id: "filebeat-kafka-test"

setup.template.settings:
  index.number_of_shards: 1
  _source.enabled: true
setup.template.name: "test"
setup.template.pattern: "test-*"
setup.template.overwrite: true
setup.template.enabled: true
setup.ilm.enable: true
setup.ilm.rollover_alias: "test"

setup.kibana:
  host: "https://xxx:8080"
  username: "superuser"
  password: "123456"
  ssl.verification_mode: none

output.elasticsearch:
  index: "test-%{[jiserver]}-%{+yyyy.MM.dd}"
  pipeline: "test-pipeline"
  hosts: ["xxx:8200"]
  username: "superuser"
  password: "123456"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
pipeline.json
{
  "description": "Test pipeline",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{CUSTOMTIME:timestamp} (?:%{NOTSPACE:jiserver}|-) (?:%{NOTSPACE:hostname}|-) (?:%{LOGLEVEL:level}|-) (?:%{NOTSPACE:thread}|-) (?:%{NOTSPACE:class}|-) (?:%{NOTSPACE:method}|-) (?:%{NOTSPACE:line}|-) (?:%{CUSTOMDATA:message}|-)"],
        "pattern_definitions": {
          "CUSTOMTIME": "%{YEAR}[- ]%{MONTHNUM}[- ]%{MONTHDAY}[- ]%{TIME}",
          "CUSTOMDATA": "((%{GREEDYDATA})[[:space:]]?)+"
        }
      }
    }
  ],
  "on_failure": [
    {
      "set": {
        "field": "error_information",
        "value": "Processor {{ _ingest.on_failure_processor_type }} with tag {{ _ingest.on_failure_processor_tag }} in pipeline {{ _ingest.on_failure_pipeline }} failed with message {{ _ingest.on_failure_message }}"
      }
    }
  ]
}
I use grok to split the message into different fields, one of which is jiserver. I want the index to be named dynamically with jiserver. How can I do that? The settings above do not work, and I receive this error:
[elasticsearch] elasticsearch/client.go:541 Bulk item insert failed (i=0, status=500): {"type":"string_index_out_of_bounds_exception","reason":"String index out of range: 0"}
I found a solution: add a script processor to filebeat.yml:
processors:
  - script:
      lang: javascript
      id: my_filter
      source: >
        function process(event) {
            var message = event.Get("message");
            // jiserver is the third space-separated token (the date and time take the first two)
            var name = message.split(" ");
            event.Put("jiserver", name[2]);
        }

Beats can’t reach Elastic Service

I've been running my ECK (Elastic Cloud on Kubernetes) cluster for a couple of weeks with no issues. However, 3 days ago filebeat stopped being able to connect to my ES service. All pods are up and running (Elastic, Beats and Kibana).
Also, shelling into the Filebeat pods and connecting to the Elasticsearch service works just fine:
curl -k -u "user:$PASSWORD" https://quickstart-es-http.quickstart.svc:9200
{
  "name" : "aegis-es-default-4",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "",
  "version" : {
    "number" : "7.14.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "",
    "build_date" : "",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
Yet the filebeats pod logs are producing the below error:
ERROR
[publisher_pipeline_output] pipeline/output.go:154
Failed to connect to backoff(elasticsearch(https://quickstart-es-http.quickstart.svc:9200)):
Connection marked as failed because the onConnect callback failed: could not connect to a compatible version of Elasticsearch:
503 Service Unavailable:
{
  "error": {
    "root_cause": [
      { "type": "master_not_discovered_exception", "reason": null }
    ],
    "type": "master_not_discovered_exception",
    "reason": null
  },
  "status": 503
}
I haven't made any changes, so I think it's a case of authentication or SSL certificates needing to be updated?
My filebeats config looks like this:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: quickstart
  namespace: quickstart
spec:
  type: filebeat
  version: 7.14.0
  elasticsearchRef:
    name: quickstart
  config:
    filebeat:
      modules:
        - module: gcp
          audit:
            enabled: true
            var.project_id: project_id
            var.topic: topic_name
            var.subcription: sub_name
            var.credentials_file: /usr/certs/credentials_file
            var.keep_original_message: false
          vpcflow:
            enabled: true
            var.project_id: project_id
            var.topic: topic_name
            var.subscription_name: sub_name
            var.credentials_file: /usr/certs/credentials_file
          firewall:
            enabled: true
            var.project_id: project_id
            var.topic: topic_name
            var.subscription_name: sub_name
            var.credentials_file: /usr/certs/credentials_file
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
          - name: filebeat
            volumeMounts:
              - name: varlogcontainers
                mountPath: /var/log/containers
              - name: varlogpods
                mountPath: /var/log/pods
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
              - name: credentials
                mountPath: /usr/certs
                readOnly: true
        volumes:
          - name: varlogcontainers
            hostPath:
              path: /var/log/containers
          - name: varlogpods
            hostPath:
              path: /var/log/pods
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers
          - name: credentials
            secret:
              defaultMode: 420
              items:
              secretName: elastic-service-account
And it was working just fine; I haven't made any changes to this config that would make it lose access.
I did a little more digging and found that there weren't enough resources available to assign a master node.
Running GET /_cat/master returned the same 503 "master not discovered" error. I added a new node pool and the cluster started running normally.

YAML lists and variables

I am trying to deregister EC2 instances from target groups using an Automation document in SSM, which I am attempting to write in YAML, but I am having major issues getting my head around YAML lists and arrays.
Here are the relevant parts of the code:
parameters:
  DeregisterInstanceId:
    type: StringList
    description: (Required) Identifies EC2 instances for patching
    default: ["i-xxx","i-yyy"]
Further down I am trying to read DeregisterInstanceId as a list, but it's not working; I get various errors about expecting one type of variable but receiving another.
name: RemoveLiveInstancesFromTG
action: aws:executeAwsApi
inputs:
  Service: elbv2
  Api: DeregisterTargets
  TargetGroupArn: "{{ TargetGroup }}"
  Targets: "{{ DeregisterInstanceId }}"
isEnd: true
What the Targets input really needs to look like is this:
Targets:
  - Id: "i-xxx"
  - Id: "i-yyy"
...but I am not sure how to pass my StringList to create the above.
I tried:
Targets:
  - Id: "{{ DeregisterInstanceId }}"
and
Targets:
  Id: "{{ DeregisterInstanceId }}"
But no go.
I had the exact same problem, although I created the document in JSON.
Please check out the following working script to deregister an instance from a load balancer target group.
Automation document v. 74
{
  "description": "LoadBalancer deregister targets",
  "schemaVersion": "0.3",
  "assumeRole": "{{ AutomationAssumeRole }}",
  "parameters": {
    "TargetGroupArn": {
      "type": "String",
      "description": "(Required) TargetGroup of LoadBalancer"
    },
    "Target": {
      "type": "String",
      "description": "(Required) EC2 Instance(s) to deregister"
    },
    "AutomationAssumeRole": {
      "type": "String",
      "description": "(Optional) The ARN of the role that allows Automation to perform the actions on your behalf.",
      "default": ""
    }
  },
  "mainSteps": [
    {
      "name": "DeregisterTarget",
      "action": "aws:executeAwsApi",
      "inputs": {
        "Service": "elbv2",
        "Api": "DeregisterTargets",
        "TargetGroupArn": "{{ TargetGroupArn }}",
        "Targets": [
          {
            "Id": "{{ Target }}"
          }
        ]
      }
    }
  ]
}
Obviously the point of interest is the Targets parameter; it needs a JSON array to work (forget about the CLI format, it seems to need JSON).
It also allows specifying multiple targets, as well as ports and Availability Zones, but all I need it for is to pick one instance and pull it out.
Hope it is of use to someone.
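For what it's worth, the same Targets shape is what the underlying DeregisterTargets API expects from any client. Here is a minimal sketch with the AWS SDK for Java v2 (the ARN and instance ids are hypothetical placeholders, credentials and region come from the default chain), just to illustrate that Targets is an array of objects with an Id and an optional Port:

// Sketch: deregistering two instances from a target group via the AWS SDK for Java v2.
// The target group ARN and instance ids below are placeholders.
import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.DeregisterTargetsRequest;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.TargetDescription;

public class DeregisterExample {
    public static void main(String[] args) {
        try (ElasticLoadBalancingV2Client elb = ElasticLoadBalancingV2Client.create()) {
            DeregisterTargetsRequest request = DeregisterTargetsRequest.builder()
                    .targetGroupArn("arn:aws:elasticloadbalancing:...:targetgroup/my-tg/abc123")
                    .targets(
                            // each target is an object with an Id (and optionally Port / AvailabilityZone),
                            // i.e. exactly the JSON array the Automation document has to produce
                            TargetDescription.builder().id("i-xxx").build(),
                            TargetDescription.builder().id("i-yyy").port(8080).build())
                    .build();
            elb.deregisterTargets(request);
        }
    }
}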

Spring Cloud Config: NoSuchLabelException

I have a simple Spring Config Server application which consumes its configuration data from a Git repository. This Config Server works perfectly as expected in my local and development environments. Once deployed to the production server, though, I kept seeing this error: org.springframework.cloud.config.server.environment.NoSuchLabelException: No such label: master
Here is the whole JSON return:
{
  "status": "DOWN",
  "configServer": {
    "status": "DOWN",
    "repository": {
      "application": "app",
      "profiles": "default"
    },
    "error": "org.springframework.cloud.config.server.environment.NoSuchLabelException: No such label: master"
  },
  "discoveryComposite": {
    "description": "Discovery Client not initialized",
    "status": "UNKNOWN",
    "discoveryClient": {
      "description": "Discovery Client not initialized",
      "status": "UNKNOWN"
    }
  },
  "diskSpace": {
    "status": "UP",
    "total": 10434699264,
    "free": 6599856128,
    "threshold": 10485760
  },
  "refreshScope": {
    "status": "UP"
  },
  "hystrix": {
    "status": "UP"
  }
}
So I traced it down to the spring-cloud-config GitHub repo and saw that it is being thrown here: https://github.com/spring-cloud/spring-cloud-config/blob/b7afa2bb641913b89e32ae258bd6ec442080b9e6/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/environment/JGitEnvironmentRepository.java#185 The error is thrown by the GitCommand class's call() method on line 235 when a Git branch is not found. But I can't for the life of me understand why! I have double-checked and verified that the "master" branch does indeed exist in the Git repository that holds the configuration properties.
The application properties for the Config Server are defined in a bootstrap.yml file, as follows:
server:
  port: 8080
management:
  context-path: /admin
endpoints:
  enabled: false
  health:
    enabled: true
logging:
  level:
    com.netflix.discovery: 'OFF'
    org.springframework.cloud: 'DEBUG'
eureka:
  instance:
    leaseRenewalIntervalInSeconds: 10
    statusPageUrlPath: /admin/info
    healthCheckUrlPath: /admin/health
spring:
  application:
    name: config-service
  cloud:
    config:
      enabled: false
      failFast: true
      server:
        git:
          uri: 'https://some-spring-config-repo.git'
          username: 'fakeuser'
          password: 'fakepass'
          basedir: "${catalina.home}/target/config"
Any help would be most appreciated!!
Posting this in case someone else finds it useful.
I'm using the setting below in the config server's properties file to point it at the "main" branch:
spring.cloud.config.server.git.default-label=main
Additional info: my Git repo is not public; I'm using the new personal access token based authentication instead of the older username/password authentication.

Spring Cloud Consul Deregister Failing

I am using Spring Boot / Cloud / Consul (1.0.0.M2, and I have also tried the current code as of 10/6/2015). I'm trying to register/deregister a service that uses a dynamic port and a dynamic id.
I have the following bootstrap:
spring:
  cloud:
    consul:
      config:
        enabled: true
      host: localhost
      port: 8500
And application.yml
spring:
  main:
    show-banner: false
  application:
    name: helloService
  cloud:
    consul:
      config:
        prefix: config
        defaultContext: helloService
      discovery:
        instanceId: ${spring.application.name}:${spring.application.instance.id:${random.value}}
        healthCheckPath: /${spring.application.name}/health
        healthCheckInterval: 15s
endpoints:
  shutdown:
    enabled: true
And in the Consul key/value store, config/application/server.port = 0 is set for a dynamic port.
The service is registered correctly during startup:
{
  "consul": {
    "ID": "consul",
    "Service": "consul",
    "Tags": [],
    "Address": "",
    "Port": 8300
  },
  "helloService-6596692c4e8af31ddd1589b0d359899f": {
    "ID": "helloService-6596692c4e8af31ddd1589b0d359899f",
    "Service": "helloService",
    "Tags": [],
    "Address": "",
    "Port": 50307
  }
}
After issuing the shutdown:
curl http://localhost:50307/shutdown -X POST
{"message":"Shutting down, bye..."}
The service is still registered and the health check starts failing.
{
  "consul": {
    "ID": "consul",
    "Service": "consul",
    "Tags": [],
    "Address": "",
    "Port": 8300
  },
  "helloService-6596692c4e8af31ddd1589b0d359899f": {
    "ID": "helloService-6596692c4e8af31ddd1589b0d359899f",
    "Service": "helloService",
    "Tags": [],
    "Address": "",
    "Port": 50307
  }
}
What is missing?
