Spring Avro consumer message converter exception - spring-boot

I have a Spring Cloud application. Here are the consumer application properties:
cloud:
  stream:
    default:
      producer:
        useNativeEncoding: true
      consumer:
        useNativeEncoding: true
    bindings:
      inputtest:
        destination: test
        content-type: application/*+avro
      outputtest:
        destination: test
        content-type: application/*+avro
    kafka:
      streams:
        binder:
          configuration:
            application:
              server: localhost:8082
      binder:
        producer-properties:
          key.serializer: org.apache.kafka.common.serialization.StringSerializer
          value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
          schema.registry.url: http://localhost:8081
        consumer-properties:
          key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
          value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
          schema.registry.url: http://localhost:8081
          specific.avro.reader: true
    schema:
      avro:
        dynamicSchemaGenerationEnabled: true
A StreamListener is also configured:
@StreamListener(CreateMessageSink.INPUT)
public void consumeDetails(GenericRecord message) {
    System.out.println(message);
}
It is able to receive the GenericRecord, but when I put a specific Java class in place of GenericRecord, it throws an exception:
@StreamListener(CreateMessageSink.INPUT)
public void consumeDetails(TestMessage message) {
    System.out.println(message);
}
The exception I get on receiving a message is:
org.springframework.messaging.converter.MessageConversionException: Cannot convert from [com.dataset.CreateMessage] to [com.notebook..TestMessge] for GenericMessage [payload={ "time": 1570614318582, "task": "Create", "userId": "-1", "status": "Success", "severity": "INFO", "details": {"notebookId": "1", "datasetId": "1"}}, headers={kafka_offset=59, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#745a009e, deliveryAttempt=3, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedPartitionId=0, kafka_receivedTopic=dataset-create-test, kafka_receivedTimestamp=1570614318587, contentType=application/*+avro}], failedMessage=GenericMessage [payload={"tenantId": "-1", "time": 1570614318582, "task": "CreateDataset", "userId": "-1", "status": "Success", "source": "dataset-svc", "severity": "INFO", "details": {"notebookId": "d02fd508-f6cc-4d2f-a713-4298bca4e216", "datasetId": "5d9dac2eb7420146d6a79d30"}}, headers={kafka_offset=59, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#745a009e, deliveryAttempt=3, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedPartitionId=0, kafka_receivedTopic=dataset-create-test, kafka_receivedTimestamp=1570614318587, contentType=application/*+avro}]
at org.springframework.cloud.stream.config.SmartPayloadArgumentResolver.resolveArgument(SmartPayloadArgumentResolver.java:126)
at org.springframework.messaging.handler.invocation.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:117)
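For reference, here is a minimal hedged sketch of a listener typed to the class the deserializer actually produces. The exception above names the payload class as com.dataset.CreateMessage (the generated SpecificRecord returned when specific.avro.reader is true), so the assumption in this sketch is that the parameter type must be that generated class rather than an unrelated one; only CreateMessageSink.INPUT is taken from the snippets above:

// Hedged sketch: assumes com.dataset.CreateMessage is the Avro-generated
// SpecificRecord class that the Confluent deserializer returns, as reported
// in the exception's payload type. When the parameter type matches the
// deserialized class exactly, no Spring message conversion is needed.
@StreamListener(CreateMessageSink.INPUT)
public void consumeDetails(com.dataset.CreateMessage message) {
    System.out.println(message);
}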

Related

Why does ES show an error log `readiness probe failed`?

I am deploying an Elasticsearch cluster on AWS EKS. Below is the k8s spec yml file:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: datasource
spec:
  version: 7.14.0
  nodeSets:
  - name: node
    count: 3
    config:
      node.store.allow_mmap: true
      xpack.security.http.ssl.enabled: false
      xpack.security.transport.ssl.enabled: false
      xpack.security.enabled: false
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          readinessProbe:
            exec:
              command:
              - bash
              - -c
              - /mnt/elastic-internal/scripts/readiness-probe-script.sh
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 12
            successThreshold: 1
            timeoutSeconds: 12
          env:
          - name: READINESS_PROBE_TIMEOUT
            value: "30"
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: ebs-sc
        resources:
          requests:
            storage: 1024Gi
After deploying, I see that all three pods log this error:
{"type": "server", "timestamp": "2021-10-05T05:19:37,041Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "datasource", "node.name": "datasource-es-node-0", "message": "[.kibana/g5_90XpHSI-y-I7MJfBZhQ] update_mapping [_doc]", "cluster.uuid": "xJ00drroT_CbJPfzi8jSAg", "node.id": "qmtgUZHbR4aTWsYaoIEDEA" }
{"type": "server", "timestamp": "2021-10-05T05:19:37,622Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "datasource", "node.name": "datasource-es-node-0", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana][0]]]).", "cluster.uuid": "xJ00drroT_CbJPfzi8jSAg", "node.id": "qmtgUZHbR4aTWsYaoIEDEA" }
{"timestamp": "2021-10-05T05:19:40+00:00", "message": "readiness probe failed", "curl_rc": "35"}
{"timestamp": "2021-10-05T05:19:45+00:00", "message": "readiness probe failed", "curl_rc": "35"}
{"timestamp": "2021-10-05T05:19:50+00:00", "message": "readiness probe failed", "curl_rc": "35"}
{"timestamp": "2021-10-05T05:19:55+00:00", "message": "readiness probe failed", "curl_rc": "35"}
{"timestamp": "2021-10-05T05:20:00+00:00", "message": "readiness probe failed", "curl_rc": "35"}
{"timestamp": "2021-10-05T05:20:05+00:00", "message": "readiness probe failed", "curl_rc": "35"}
{"timestamp": "2021-10-05T05:20:10+00:00", "message": "readiness probe failed", "curl_rc": "35"}
{"timestamp": "2021-10-05T05:20:15+00:00", "message": "readiness probe failed", "curl_rc": "35"}
The log above shows the cluster health status changed from [YELLOW] to [GREEN] first, and then the readiness probe failed errors appear. How can I solve this? Is it an Elasticsearch-related error or a k8s-related one?
You can extend the probe timeout by declaring READINESS_PROBE_TIMEOUT in your spec like this:
...
env:
- name: READINESS_PROBE_TIMEOUT
  value: "30"
You can customize the readiness probe if necessary; the latest elasticsearch.k8s.elastic.co/v1 API spec is here, and it is the same K8s PodTemplateSpec that you can use in your Elasticsearch spec.
Update: curl exit code 35 indicates an SSL error. Here's a post regarding the probe script. Can you remove the following settings from your spec and re-run:
xpack.security.http.ssl.enabled: false
xpack.security.transport.ssl.enabled: false
xpack.security.enabled: false

Micrometer ElasticSearch convert JSON name to Key

I have micrometer + Spring Actuator in my application, and the metrics show up in Kibana like this:
"name": "jvm_threads_daemon",
"type": "gauge",
"cluster": "cluster_app",
"kubernetes_namespace": "test",
"kubernetes_pod_name": "fargate-ip-10-121-31-148.ec2.internal",
"value": 18
But I would like to convert to this:
"jvm_threads_daemon": "18",
"type": "gauge",
"cluster": "cluster_app",
"kubernetes_namespace": "test",
"kubernetes_pod_name": "fargate-ip-10-121-31-148.ec2.internal",
"value": 18
So, note that I need the metric name as the key and not inside "name". Is this possible in Micrometer?
See my application.yml:
management:
  endpoints:
    web:
      exposure:
        include: "*"
      base-path: /actuator
      path-mapping.health: health
  endpoint:
    health:
      probes:
        enabled: true
      show-details: always
  metrics:
    export:
      elastic:
        host: "https://elastico.url:443"
        index: "metricbeat-k8s-apps"
        user-name: "elastic"
        password: "xxxx"
        step: 30s
        connect-timeout: 10s
    tags:
      cluster: "cluster_app"
      kubernetes.pod.name: ${NODE_NAME}
      kubernetes.namespace: ${POD_NAMESPACE}
You can extend or implement your own ElasticMeterRegistry and define your own format.
One thing to consider, though: if you make the change you want, how will you be able to query/filter/aggregate metrics in Kibana?
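As a rough sketch of that extension point, assuming Spring Boot's auto-configured ElasticConfig and Clock beans are available; note that in several Micrometer releases the JSON-writing helpers inside ElasticMeterRegistry are package-private, so depending on your version you may have to copy the class instead of subclassing it:

import io.micrometer.core.instrument.Clock;
import io.micrometer.elastic.ElasticConfig;
import io.micrometer.elastic.ElasticMeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MetricsConfig {

    // Replace Boot's default Elastic registry with a customized one.
    @Bean
    public ElasticMeterRegistry elasticMeterRegistry(ElasticConfig config, Clock clock) {
        return new CustomElasticMeterRegistry(config, clock);
    }

    // Hypothetical subclass: override the document-building logic (which
    // method is overridable is version-dependent) so each metric is written
    // as {"jvm_threads_daemon": 18, ...} instead of
    // {"name": "jvm_threads_daemon", ..., "value": 18}.
    static class CustomElasticMeterRegistry extends ElasticMeterRegistry {
        CustomElasticMeterRegistry(ElasticConfig config, Clock clock) {
            super(config, clock);
        }
    }
}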

Spring Boot Admin Client showing only details when there is context path

I am running a Boot Admin server with Eureka discovery.
Admin Server:
plugins {
    id 'org.springframework.boot' version '2.1.7.RELEASE'
    id 'io.spring.dependency-management' version '1.0.8.RELEASE'
    id 'java'
}

group = 'com.example'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '1.8'

repositories {
    mavenCentral()
}

ext {
    set('springCloudVersion', "Greenwich.SR2")
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.boot:spring-boot-starter-security'
    implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-client'
    implementation 'de.codecentric:spring-boot-admin-starter-server:2.1.6'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}

dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
    }
}
Admin Server Application Yml:
spring:
  boot.admin.discovery.converter.management-context-path: /admin
  application:
    name: spring-boot-admin-sample-eureka
eureka: #<1>
  instance:
    leaseRenewalIntervalInSeconds: 10
    health-check-url-path: /admin/health
    metadata-map:
      startup: ${random.int} #needed to trigger info and endpoint update after restart
      management.context-path: ${management.endpoints.web.base-path}
      info.path: ${management.endpoints.web.base-path}/info
  client:
    registryFetchIntervalSeconds: 5
    serviceUrl:
      defaultZone: ${EUREKA_SERVICE_URL:http://localhost:8761}/eureka/
management:
  endpoints:
    web:
      exposure:
        include: "*" #<2>
  endpoint:
    health:
      show-details: ALWAYS
Client:
plugins {
    id 'org.springframework.boot' version '2.1.7.RELEASE'
    id 'io.spring.dependency-management' version '1.0.8.RELEASE'
    id 'java'
}

group = 'com.example'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '1.8'

repositories {
    mavenCentral()
}

ext {
    set('springCloudVersion', "Greenwich.SR2")
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'org.springframework.boot:spring-boot-starter-security'
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-client'
    // https://mvnrepository.com/artifact/org.jolokia/jolokia-core
    compile group: 'org.jolokia', name: 'jolokia-core', version: '1.6.2'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
    testImplementation 'org.springframework.security:spring-security-test'
}

dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
    }
}
Application Yml
server:
  port: 8083
  servlet:
    context-path: /mypath

#management config
eureka:
  instance:
    leaseRenewalIntervalInSeconds: 10
    health-check-url-path: /admin/health
    statusPageUrlPath: /admin/info
    metadata-map:
      startup: ${random.int} #needed to trigger info and endpoint update after restart
      management.context-path: ${management.endpoints.web.base-path}
      info.path: ${management.endpoints.web.base-path}/info
  client:
    registryFetchIntervalSeconds: 5
    serviceUrl:
      defaultZone: ${EUREKA_SERVICE_URL:http://localhost:8761}/eureka/

management:
  endpoint:
    health:
      show-details: ALWAYS
  endpoints:
    web:
      base-path: /admin
      exposure:
        include: '*'

security.basic.enabled: false

info:
  component: Processor2
  build:
    name: Processor2
    description: Processor to Roll up PP
    version: 1
  eureka:
    region: ${eureka.client.region}
    zone: ${eureka.instance.metadataMap.zone}
    us-east-1b: discovery1
    us-east-1c: discovery2
    us-east-1e: discovery3

dp:
  username: admin
  password: admin123

spring:
  application.name: procerssor2
  jmx:
    enabled: true
  boot:
    admin:
      client:
        instance:
          service-url: /mypath

health:
  config:
    enabled: false
Due to the context path, Boot Admin is just displaying the details. I checked http://localhost:8080/applications; it looks like below.
{
  "name": "PROCERSSOR2",
  "buildVersion": null,
  "status": "UP",
  "statusTimestamp": "2019-08-28T18:32:19.854Z",
  "instances": [
    {
      "id": "804f35b9b73d",
      "version": 1,
      "registration": {
        "name": "PROCERSSOR2",
        "managementUrl": "http://192.168.0.8:8083/admin",
        "healthUrl": "http://192.168.0.8:8083/mypath/admin/health",
        "serviceUrl": "http://192.168.0.8:8083/",
        "source": "discovery",
        "metadata": {
          "management.context-path": "/admin",
          "startup": "-518261604",
          "management.port": "8083",
          "info.path": "/admin/info"
        }
      },
      "registered": true,
      "statusInfo": {
        "status": "UP",
        "details": {
          "hystrix": {
            "status": "UP"
          },
          "diskSpace": {
            "status": "UP",
            "details": {
              "total": 499963170816,
              "free": 366424887296,
              "threshold": 10485760
            }
          },
          "refreshScope": {
            "status": "UP"
          },
          "discoveryComposite": {
            "status": "UP",
            "details": {
              "discoveryClient": {
                "status": "UP",
                "details": {
                  "services": [
                    "procerssor2",
                    "spring-boot-admin-sample-eureka"
                  ]
                }
              },
              "eureka": {
                "description": "Remote status from Eureka server",
                "status": "UP",
                "details": {
                  "applications": {
                    "PROCERSSOR2": 1,
                    "SPRING-BOOT-ADMIN-SAMPLE-EUREKA": 1
                  }
                }
              }
            }
          }
        }
      },
      "statusTimestamp": "2019-08-28T18:32:19.854Z",
      "info": {},
      "endpoints": [
        {
          "id": "health",
          "url": "http://192.168.0.8:8083/mypath/admin/health"
        }
      ],
      "buildVersion": null,
      "tags": {}
    }
  ]
}
When I remove the context path, everything works fine. Please help.
I had the same problem. In my case I only pointed eureka.instance.metadata-map.management.context-path at ${service.servlet.context-path}/actuator:
server:
  port: ${service.auth-service.port}
  servlet:
    context-path: ${service.auth-service.context-path}

eureka.instance.metadata-map.management.context-path: ${service.servlet.context-path}/actuator
So, going by your code, I think it should be something like /admin/actuator.

Spring Cloud Config: NoSuchLabelException

I have a simple Spring Config Server application which consumes its configuration data from a Git repository. This Config Server works exactly as expected in my local and development environments. Once deployed to the production server, though, I kept seeing this error: org.springframework.cloud.config.server.environment.NoSuchLabelException: No such label: master
Here is the whole JSON response:
{
  "status": "DOWN",
  "configServer": {
    "status": "DOWN",
    "repository": {
      "application": "app",
      "profiles": "default"
    },
    "error": "org.springframework.cloud.config.server.environment.NoSuchLabelException: No such label: master"
  },
  "discoveryComposite": {
    "description": "Discovery Client not initialized",
    "status": "UNKNOWN",
    "discoveryClient": {
      "description": "Discovery Client not initialized",
      "status": "UNKNOWN"
    }
  },
  "diskSpace": {
    "status": "UP",
    "total": 10434699264,
    "free": 6599856128,
    "threshold": 10485760
  },
  "refreshScope": {
    "status": "UP"
  },
  "hystrix": {
    "status": "UP"
  }
}
So I traced it down to the spring-cloud-config GitHub repo and saw that it is being thrown here: https://github.com/spring-cloud/spring-cloud-config/blob/b7afa2bb641913b89e32ae258bd6ec442080b9e6/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/environment/JGitEnvironmentRepository.java#185 The error is thrown by the GitCommand class's call() method on line 235 when a Git branch is not found. But I can't for the life of me understand why! I have double-checked and verified that the "master" branch does indeed exist in the Git repository for the configuration properties.
The application properties for the Config Server are defined in a bootstrap.yml file, as follows:
server:
  port: 8080
management:
  context-path: /admin
  endpoints:
    enabled: false
    health:
      enabled: true
logging:
  level:
    com.netflix.discovery: 'OFF'
    org.springframework.cloud: 'DEBUG'
eureka:
  instance:
    leaseRenewalIntervalInSeconds: 10
    statusPageUrlPath: /admin/info
    healthCheckUrlPath: /admin/health
spring:
  application:
    name: config-service
  cloud:
    config:
      enabled: false
      failFast: true
      server:
        git:
          uri: 'https://some-spring-config-repo.git'
          username: 'fakeuser'
          password: 'fakepass'
          basedir: "${catalina.home}/target/config"
Any help would be most appreciated!
Posting this in case someone else finds it useful.
I'm using the setting below in the config server's properties file to point at the "main" branch:
spring.cloud.config.server.git.default-label=main
Additional info: my Git repo is not public; I'm using the new personal access token based authentication instead of the older username/password authentication.

Spring Cloud Consul Deregister Failing

I am using Spring Boot / Cloud / Consul (1.0.0.M2, and I have also tried the current code as of 10/6/2015). I'm trying to register/deregister a service that uses a dynamic port and a dynamic id.
I have the following bootstrap:
spring:
  cloud:
    consul:
      config:
        enabled: true
      host: localhost
      port: 8500
And application.yml
spring:
  main:
    show-banner: false
  application:
    name: helloService
  cloud:
    consul:
      config:
        prefix: config
        defaultContext: helloService
      discovery:
        instanceId: ${spring.application.name}:${spring.application.instance.id:${random.value}}
        healthCheckPath: /${spring.application.name}/health
        healthCheckInterval: 15s

endpoints:
  shutdown:
    enabled: true
And in the Key/Value store, config/application/server.port is set to 0 for a dynamic port.
The service is registered correctly during startup:
{
  "consul": {
    "ID": "consul",
    "Service": "consul",
    "Tags": [],
    "Address": "",
    "Port": 8300
  },
  "helloService-6596692c4e8af31ddd1589b0d359899f": {
    "ID": "helloService-6596692c4e8af31ddd1589b0d359899f",
    "Service": "helloService",
    "Tags": [],
    "Address": "",
    "Port": 50307
  }
}
After issuing the shutdown:
curl http://localhost:50307/shutdown -X POST
{"message":"Shutting down, bye..."}
The service is still registered and the health check starts failing.
{
  "consul": {
    "ID": "consul",
    "Service": "consul",
    "Tags": [],
    "Address": "",
    "Port": 8300
  },
  "helloService-6596692c4e8af31ddd1589b0d359899f": {
    "ID": "helloService-6596692c4e8af31ddd1589b0d359899f",
    "Service": "helloService",
    "Tags": [],
    "Address": "",
    "Port": 50307
  }
}
What is missing?
