I'm using Spring Boot and would like to have this code:
LOG.info("Scheduled appointment for user 12345 [appointment ID 100]");
Produce the following log message in JSON GELF format:
{
    "version": "1.1",
    "host": "hostname.ec2.internal",
    "short_message": "Scheduled appointment for user 12345 [appointment ID 100]",
    "timestamp": 1318280136,
    "level": 1,
    "_user": "user@acme.com",
    "_clientip": "127.0.0.1",
    "_env": "prod",
    "_app": "scheduler"
}
Do I need to create my own logger for this or can I customize Logback/Log4j2 to behave this way?
From a Log4j 2.x perspective, you can use the JSON Template Layout, which ships with a built-in event template for GELF.
Your appender configuration in the log4j2-spring.xml file would look like:
<Console name="CONSOLE">
    <JsonTemplateLayout eventTemplateUri="classpath:GelfLayout.json"/>
</Console>
Remark: since Spring Boot uses Logback as its default logging system, you'll have to exclude spring-boot-starter-logging and replace it with spring-boot-starter-log4j2.
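A typical Maven setup for that swap looks like the following sketch (the exclusion goes on whichever starter pulls in logging; spring-boot-starter-web here is just an assumed example):
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>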
Moreover, the JSON Template Layout requires an additional dependency:
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-layout-template-json</artifactId>
</dependency>
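To populate the static underscore-prefixed fields from your desired output (_env, _app), the layout supports additional event template fields; a sketch with placeholder values:
<Console name="CONSOLE">
    <JsonTemplateLayout eventTemplateUri="classpath:GelfLayout.json">
        <EventTemplateAdditionalField key="_env" value="prod"/>
        <EventTemplateAdditionalField key="_app" value="scheduler"/>
    </JsonTemplateLayout>
</Console>
Per-request values such as _user and _clientip would instead come from context data (ThreadContext/MDC), which, if I recall correctly, the built-in GELF template exposes as underscore-prefixed fields.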
When accessing my Spring Actuator /info endpoint I receive the following information:
{
    "git": {
        "branch": "8743b52063cd84097a65d1633f5c74f5",
        "commit": {
            "id": "b3j2924",
            "time": "05.07.2021 @ 10:00:00 UTC"
        }
    },
    "build": {
        "encoding": {
            "source": "UTF-8"
        },
        "version": "1.0",
        "artifact": "my-artifact",
        "name": "my-app",
        "time": 0451570333.122000000,
        "group": "my.group"
    }
}
My project does not maintain a META-INF/build-info.properties file.
I now want to write a unit test for exactly that output, but I get the following error:
java.lang.AssertionError:
Expecting actual:
"{"git":{"branch":"8743b52063cd84097a65d1633f5c74f5","commit":{"id":"b3j2924","time":"05.07.2021 # 10:00:00 UTC"}}}"
to contain:
"build"
The whole build block is missing in the output.
My questions are the following:
What needs to be done to access the build information during a local unit-test run without providing a META-INF/build-info.properties file?
Where does Spring Actuator retrieve the actual build information from when my project has no META-INF/build-info.properties file, such that it produces the output above?
The build-info.properties file is typically generated at build time by Spring Boot's Maven or Gradle plugins (for Maven, the spring-boot-maven-plugin's build-info goal writes it to target/classes/META-INF/), which is why it is present in a packaged application but not necessarily during a plain unit-test run.
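If the test needs the build block without the generated file, one option is to provide the BuildProperties bean yourself; a minimal sketch (class name and property values are made up):

import java.util.Properties;

import org.springframework.boot.info.BuildProperties;
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.context.annotation.Bean;

@TestConfiguration
public class BuildInfoTestConfig {

    @Bean
    public BuildProperties buildProperties() {
        // Mirrors what META-INF/build-info.properties would contain.
        Properties props = new Properties();
        props.setProperty("group", "my.group");
        props.setProperty("artifact", "my-artifact");
        props.setProperty("name", "my-app");
        props.setProperty("version", "1.0");
        return new BuildProperties(props);
    }
}

Importing this into the test (e.g. with @Import(BuildInfoTestConfig.class)) lets Actuator's build InfoContributor find the bean and render the build block.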
MSSQL values: column prop --> 100, and column role --> [{"role":"actor"},{"role":"director"}]
NOTE: the role column is stored in JSON format.
Message read from the Kafka topic:
{
    "schema": {
        "type": "struct",
        "fields": [
            {
                "type": "int32",
                "optional": false,
                "field": "prop"
            },
            {
                "type": "string",
                "optional": true,
                "field": "roles"
            }
        ],
        "optional": false
    },
    "payload": {
        "prop": 100,
        "roles": "[{\"role\":\"actor\"},{\"role\":\"director\"}]"
    }
}
It fails with the following error:
Error was [{"type":"mapper_parsing_exception","reason":"object mapping for [roles] tried to parse field [roles] as object, but found a concrete value"}]
The reason for the failure is that the connector is unable to create the schema as an array for roles.
The input message above is produced by the Confluent JdbcSourceConnector, and the sink connector in use is the Confluent ElasticsearchSinkConnector.
Configuration details:
Sink config:
name=prop-test
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
connection.url=<elasticseach url>
tasks.max=1
topics=test_prop
type.name=prop
#transforms=InsertKey, ExtractId
transforms.InsertKey.type=org.apache.kafka.connect.transforms.ValueToKey
transforms.InsertKey.fields=prop
transforms.ExtractId.type=org.apache.kafka.connect.transforms.ExtractField$Key
transforms.ExtractId.field=prop
Source config:
name=test_prop_source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:sqlserver://*.*.*.*:1433;instance=databaseName=test;
connection.user=*****
connection.password=*****
query=EXEC <store proc>
mode=bulk
batch.max.rows=2000000
topic.prefix=test_prop
transforms=createKey,extractInt
transforms.createKey.type=org.apache.kafka.connect.transforms.ValueToKey
transforms.createKey.fields=prop
transforms.extractInt.type=org.apache.kafka.connect.transforms.ExtractField$Key
transforms.extractInt.field=prop
connect-standalone.properties:
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
I need to understand how I can explicitly make the schema an ARRAY for roles rather than a string.
The JDBC source connector always treats the column name as the field name and the column value as a plain value, so the JSON in the role column arrives as a string. Converting that string to an array is not possible with the existing JDBC source connector; it would require a custom Single Message Transform (SMT) or a custom plugin, along the lines of the sketch below.
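For reference, a minimal sketch of such a custom SMT, assuming schemaless JSON values (i.e. value.converter.schemas.enable=false) and Jackson on the plugin classpath; class and config names are made up:

package com.example.kafka.transforms;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.transforms.Transformation;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ParseJsonField<R extends ConnectRecord<R>> implements Transformation<R> {

    private static final ObjectMapper MAPPER = new ObjectMapper();
    private String fieldName;

    @Override
    public void configure(Map<String, ?> configs) {
        fieldName = (String) configs.get("field"); // e.g. "roles"
    }

    @Override
    @SuppressWarnings("unchecked")
    public R apply(R record) {
        if (!(record.value() instanceof Map)) {
            return record; // this sketch only handles schemaless map values
        }
        Map<String, Object> value = new HashMap<>((Map<String, Object>) record.value());
        Object raw = value.get(fieldName);
        if (raw instanceof String) {
            try {
                // "[{\"role\":\"actor\"}, ...]" -> List<Map<String, String>>
                value.put(fieldName, MAPPER.readValue((String) raw, List.class));
            } catch (Exception e) {
                return record; // leave the record untouched if parsing fails
            }
        }
        return record.newRecord(record.topic(), record.kafkaPartition(),
                record.keySchema(), record.key(), null, value, record.timestamp());
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef().define("field", ConfigDef.Type.STRING,
                ConfigDef.Importance.HIGH, "Field containing a JSON string");
    }

    @Override
    public void close() {
    }
}

It would then be registered in the sink config with transforms=ParseJson, transforms.ParseJson.type=com.example.kafka.transforms.ParseJsonField and transforms.ParseJson.field=roles.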
The best option available for getting the data out of MSSQL and into Elasticsearch is Logstash. It has rich filter plugins that can reshape the data from MSSQL into any desired JSON form and send it to any desired output (Elasticsearch directly, or a Kafka topic).
Flow: MSSQL --> Logstash --> Kafka topic --> Kafka Elasticsearch sink connector --> Elasticsearch
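For reference, a sketch of the Logstash side of that flow (connection string, credentials, driver path, statement, and topic are all placeholders):

input {
    jdbc {
        jdbc_connection_string => "jdbc:sqlserver://host:1433;databaseName=test"
        jdbc_user => "*****"
        jdbc_password => "*****"
        jdbc_driver_library => "/path/to/mssql-jdbc.jar"
        jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
        statement => "SELECT prop, role FROM my_table"
    }
}
filter {
    # Parse the JSON string in the role column into a real array field.
    json {
        source => "role"
        target => "roles"
    }
}
output {
    kafka {
        bootstrap_servers => "localhost:9092"
        topic_id => "test_prop"
        codec => json
    }
}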
So, I'm building a full cloud solution using Kubernetes and Spring Boot.
My Spring Boot application is deployed to a container and logs directly to the console.
As containers are ephemeral, I'd also like to send the logs to a remote Logstash server, so that they can be processed and sent on to Elasticsearch.
Normally I would install Filebeat on the server hosting my application, and I could, but isn't there any built-in method that lets me avoid writing my logs to a file before sending them?
Currently I'm using Log4j, but I see no problem in switching to another logger as long as it has a Logstash appender.
You can add a logback.xml file to your resources folder:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration scan="true">
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <remoteHost>localhost</remoteHost>
        <port>5000</port>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <customFields>{"app_name":"YourApp", "app_port": "YourPort"}</customFields>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="logstash"/>
    </root>
</configuration>
Then add the Logstash encoder dependency to your pom.xml:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.11</version>
</dependency>
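With this in place, nothing changes on the application side: any standard SLF4J logging call is shipped to Logstash as JSON over TCP. A trivial example (class name is made up):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AppointmentService {

    private static final Logger LOG = LoggerFactory.getLogger(AppointmentService.class);

    public void schedule() {
        // Arrives in Logstash as a JSON event carrying the customFields above.
        LOG.info("Scheduled appointment");
    }
}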
logstash.conf
input {
    udp {
        port => "5000"
        type => syslog
        codec => json
    }
    tcp {
        port => "5000"
        type => syslog
        codec => json_lines
    }
    http {
        port => "5001"
        codec => "json"
    }
}
filter {
    if [type] == "syslog" {
        mutate {
            add_field => { "instance_name" => "%{app_name}-%{host}:%{app_port}" }
        }
    }
}
output {
    elasticsearch {
        hosts => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
        index => "logs-%{+YYYY.MM.dd}"
    }
}
I've just created a full working example in my repository.
I hope it's helpful for someone.
I'm using the Spring Boot health indicator from Actuator. So far an example response looks like this:
{
    "status": "DOWN",
    "details": {
        "diskSpace": {
            "status": "UP",
            "details": {
                "total": 499963170816,
                "free": 250067189760,
                "threshold": 10485760
            }
        }
    }
}
Because I need to make the /actuator/health endpoint public, I need to hide the details of the health indicators, so I expect to get something like this:
{
    "status": "DOWN",
    "details": {
        "diskSpace": {
            "status": "UP"
        }
    }
}
For disk space it's not a big problem, but for the database, for example, I don't want to share the exception message and details in case of an outage. Also (as I mentioned at the beginning) the endpoint must be public, so I don't want to make it 'when-authorized'. And finally, it would be great if this were possible without writing my own custom endpoint.
Is it possible at all?
This isn't possible at the time of writing (in Spring Boot 2.1 and earlier) without writing your own custom endpoint. I've opened an issue to consider it as an enhancement for a future version of Spring Boot.
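If you do end up writing that custom endpoint, a minimal sketch against the Spring Boot 2.1 actuator API could look like this (endpoint id and class name are made up):

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthEndpoint;
import org.springframework.stereotype.Component;

@Component
@Endpoint(id = "publichealth")
public class PublicHealthEndpoint {

    private final HealthEndpoint delegate;

    public PublicHealthEndpoint(HealthEndpoint delegate) {
        this.delegate = delegate;
    }

    @ReadOperation
    public Map<String, Object> health() {
        Health full = delegate.health();
        Map<String, Object> details = new LinkedHashMap<>();
        // Keep each indicator's status, drop everything else.
        full.getDetails().forEach((name, value) -> {
            if (value instanceof Health) {
                details.put(name, Collections.singletonMap(
                        "status", ((Health) value).getStatus().getCode()));
            }
        });
        Map<String, Object> result = new LinkedHashMap<>();
        result.put("status", full.getStatus().getCode());
        result.put("details", details);
        return result;
    }
}

The endpoint then still needs to be exposed via management.endpoints.web.exposure.include.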
There is a way to achieve this in Spring Boot 2.x:
management:
  health:
    db:
      enabled: false
    diskspace:
      enabled: false
    mongo:
      enabled: false
    refresh:
      enabled: false
More information can be found here: https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-endpoints.html#_auto_configured_healthindicators
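Note that this disables the listed indicators entirely rather than merely hiding their details. For reference, the same settings in .properties form:

management.health.db.enabled=false
management.health.diskspace.enabled=false
management.health.mongo.enabled=false
management.health.refresh.enabled=false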
How can I customize the Spring log4j output into the Mongo datastore?
I was able to follow Spring's example on how to use MongoLog4j. The logs are persisted to MongoDB, but whatever is in my conversion pattern is not respected. I want to store the line number in the log message.
Here's my log4j property file
log4j.rootCategory=INFO, stdout
log4j.appender.stdout=org.springframework.data.mongodb.log4j.MongoLog4jAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] [%L] - <%m>%n
log4j.appender.stdout.host = localhost
log4j.appender.stdout.port = 27017
log4j.appender.stdout.database = prod
log4j.appender.stdout.collectionPattern = logs
log4j.appender.stdout.applicationId = horizon
log4j.appender.stdout.warnOrHigherWriteConcern = FSYNC_SAFE
log4j.category.org.springframework.batch=DEBUG
log4j.category.org.springframework.data.document.mongodb=DEBUG
log4j.category.org.springframework.transaction=INFO
Below is what is being stored in Mongo.
{ "_id" : ObjectId("4f720482788d6140dacb0270"), "applicationId" : "test", "na
me" : "com.service.MongoTest", "level" : "DEBUG", "timestamp
" : ISODate("2012-03-27T18:18:42.981Z"), "properties" : { "applicationId" : "test" }, "message" : "Debug TEST3" }
Looking at Spring's source code, this doesn't seem to be implemented. Instead, I found another project that does implement line numbers and custom conversion patterns:
http://log4mongo.org/
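For reference, a log4mongo-java configuration along these lines should capture the line number via %L (appender class and property names follow the log4mongo documentation as I remember it; treat this as an unverified sketch):

log4j.appender.mongo=org.log4mongo.MongoDbPatternLayoutAppender
log4j.appender.mongo.hostname=localhost
log4j.appender.mongo.port=27017
log4j.appender.mongo.databaseName=prod
log4j.appender.mongo.collectionName=logs
log4j.appender.mongo.layout=org.apache.log4j.PatternLayout
# The pattern must produce a valid JSON document; %L contributes the line number.
log4j.appender.mongo.layout.ConversionPattern={"timestamp":"%d{yyyy-MM-dd'T'HH:mm:ss}","level":"%p","logger":"%c","line":"%L","message":"%m"}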