Spring Boot Stackdriver logging is textPayload and not jsonPayload - spring-boot

I have a log filter that logs out essential request information for debugging and log analytics. But as you can see, the text payload is really hard to read.
I don't want to have to copy + paste this text payload into a text editor every single time. Is there a way to make Stackdriver print this as collapsible JSON instead?
More info:
- GKE pod
import org.json.JSONObject
import org.slf4j.LoggerFactory
import org.springframework.stereotype.Component
import org.springframework.web.server.ServerWebExchange
import org.springframework.web.server.WebFilter
import org.springframework.web.server.WebFilterChain
import reactor.core.publisher.Mono

@Component
class LogFilter : WebFilter {
    private val logger = LoggerFactory.getLogger(LogFilter::class.java)

    override fun filter(exchange: ServerWebExchange, chain: WebFilterChain): Mono<Void> {
        // Capture the start time before the chain runs so the duration is meaningful.
        val startTime = System.currentTimeMillis()
        return chain
            .filter(exchange)
            .doAfterTerminate {
                val request = exchange.request
                val path = request.uri.path
                val routesToExclude = listOf("actuator")
                val isExcludedRoute = routesToExclude.any { path.contains(it) }
                if (!isExcludedRoute) {
                    val statusCode = exchange.response.statusCode?.value()
                    val requestTime = System.currentTimeMillis() - startTime
                    val msg = "Served $path as $statusCode in $requestTime msec"
                    val requestPrintMap = mutableMapOf<Any, Any>()
                    requestPrintMap["method"] = request.method?.toString() ?: "UNKNOWN"
                    requestPrintMap["path"] = path.toString()
                    requestPrintMap["query_params"] = request.queryParams
                    requestPrintMap["headers"] = request.headers
                    requestPrintMap["status_code"] = statusCode.toString()
                    requestPrintMap["request_time"] = requestTime
                    requestPrintMap["msg"] = msg
                    logger.info(JSONObject(requestPrintMap).toString())
                }
            }
    }
}

What you will need to do is customize Fluentd in GKE; essentially, you deploy your own Fluentd DaemonSet for logging instead of relying on the default logging method.
Once that is done, you can set up structured logging to send jsonPayload logs to Stackdriver Logging.

The default Stackdriver logging agent configuration for Kubernetes will detect single-line JSON and convert it to jsonPayload. You can configure Spring to log as single-line JSON (e.g., via JsonLayout [1]) and let the logging agent pick up the JSON object (see https://cloud.google.com/logging/docs/agent/configuration#process-payload).
[1] Some of the JSON field names are different (e.g., JsonLayout uses "level" for the log level, while the Stackdriver logging agent recognizes "severity"), so you may have to override addCustomDataToJsonMap to fully control the resulting log entries.
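As a hedged sketch of that override (assuming the logback-json-classic artifact is on the classpath; the subclass name StackdriverJsonLayout is mine, not an official class):

import ch.qos.logback.classic.spi.ILoggingEvent
import ch.qos.logback.contrib.json.classic.JsonLayout

class StackdriverJsonLayout : JsonLayout() {
    init {
        // Drop JsonLayout's default "level" attribute; we emit "severity" instead.
        isIncludeLevel = false
    }

    override fun addCustomDataToJsonMap(map: MutableMap<String, Any>, event: ILoggingEvent) {
        // The Stackdriver logging agent recognizes "severity" and maps it onto LogEntry.severity.
        map["severity"] = event.level.toString()
    }
}

Point the layout element of your logback console appender at this class so each log line goes out as single-line JSON that the agent can promote to jsonPayload.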
See also: GKE & Stackdriver: Java logback logging format?

Related

Consume/return a JSON response when executing a flow using a Spring Boot API

I'm a beginner in Corda and I'm trying to execute flows using a Spring Boot API. When I used:
@PostMapping(value = [ "create-iou" ], produces = [ TEXT_PLAIN_VALUE ], headers = [ "Content-Type=application/x-www-form-urlencoded" ])
my flow gets executed (tested using Insomnia). But when I changed it to
@PostMapping(value = [ "create-iou" ], produces = [ APPLICATION_JSON_VALUE ], headers = [ "Content-Type=application/json" ])
it gives me a 406 Not Acceptable error: No body returned for response.
Here's the API I've created/copied:
@PostMapping(value = [ "create-iou" ], produces = [ TEXT_PLAIN_VALUE ], headers = [ "Content-Type=application/x-www-form-urlencoded" ])
fun createIOU(request: HttpServletRequest): ResponseEntity<String> {
    val iouValue = request.getParameter("iouValue").toInt()
    val partyName = request.getParameter("partyName")
        ?: return ResponseEntity.badRequest().body("Query parameter 'partyName' must not be null.\n")
    if (iouValue <= 0) {
        return ResponseEntity.badRequest().body("Query parameter 'iouValue' must be non-negative.\n")
    }
    val partyX500Name = CordaX500Name.parse(partyName)
    val otherParty = proxy.wellKnownPartyFromX500Name(partyX500Name)
        ?: return ResponseEntity.badRequest().body("Party named $partyName cannot be found.\n")
    return try {
        val signedTx = proxy.startTrackedFlow(::Initiator, iouValue, otherParty).returnValue.getOrThrow()
        ResponseEntity.status(HttpStatus.CREATED).body("Transaction id ${signedTx.id} committed to ledger.\n")
    } catch (ex: Throwable) {
        logger.error(ex.message, ex)
        ResponseEntity.badRequest().body(ex.message!!)
    }
}
I would like to return something like this when executing the flow via the HTTP endpoint:
{
    "iouValue": 99,
    "lender": "PartyA",
    "borrower": "PartyB"
}
You need to use the RPC connection libraries provided by Corda:
import net.corda.client.rpc.CordaRPCClient
import net.corda.client.rpc.CordaRPCConnection
Take a look at this example to see how to use them.
You are not showing how your proxy is instantiated, but you need to instantiate a proxy to connect to the node via RPC, like so:
val rpcAddress = NetworkHostAndPort(host, rpcPort)
val rpcClient = CordaRPCClient(rpcAddress)
val rpcConnection = rpcClient.start(username, password)
proxy = rpcConnection.proxy
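In the Spring Boot CorDapp template this is usually wrapped in a small Spring component. Here is a hedged sketch modeled on the template's NodeRPCConnection (the config.rpc.* property names are assumptions):

import net.corda.client.rpc.CordaRPCClient
import net.corda.client.rpc.CordaRPCConnection
import net.corda.core.messaging.CordaRPCOps
import net.corda.core.utilities.NetworkHostAndPort
import org.springframework.beans.factory.annotation.Value
import org.springframework.stereotype.Component
import javax.annotation.PostConstruct
import javax.annotation.PreDestroy

@Component
class NodeRPCConnection(
    @Value("\${config.rpc.host}") private val host: String,
    @Value("\${config.rpc.username}") private val username: String,
    @Value("\${config.rpc.password}") private val password: String,
    @Value("\${config.rpc.port}") private val rpcPort: Int
) {
    lateinit var proxy: CordaRPCOps
    private lateinit var rpcConnection: CordaRPCConnection

    @PostConstruct
    fun initialiseNodeRPCConnection() {
        // Open the RPC connection once at startup and keep the proxy for reuse.
        rpcConnection = CordaRPCClient(NetworkHostAndPort(host, rpcPort)).start(username, password)
        proxy = rpcConnection.proxy
    }

    @PreDestroy
    fun close() {
        // Tell the node we are going away so it can clean up server-side resources.
        rpcConnection.notifyServerAndClose()
    }
}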
Once you have the proxy, you can create Spring Boot endpoints that use it to make the RPC calls:
@RestController
@RequestMapping("/")
class StandardController(rpc: NodeRPCConnection) {
    private val proxy = rpc.proxy

    @GetMapping(value = ["/addresses"], produces = arrayOf("text/plain"))
    private fun addresses() = proxy.nodeInfo().addresses.toString()

    @GetMapping(value = ["/identities"], produces = arrayOf("text/plain"))
    private fun identities() = proxy.nodeInfo().legalIdentities.toString()
}
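To get the JSON body the question asks for, a hedged sketch is to return a data class and let Spring's Jackson converter serialize it (IOUResponse is a hypothetical name, @RequestParam replaces the HttpServletRequest lookup, and in the IOU example the initiating node is the lender and the counterparty the borrower):

data class IOUResponse(val iouValue: Int, val lender: String, val borrower: String)

@PostMapping(value = [ "create-iou" ], produces = [ APPLICATION_JSON_VALUE ])
fun createIOU(@RequestParam iouValue: Int, @RequestParam partyName: String): ResponseEntity<IOUResponse> {
    val otherParty = proxy.wellKnownPartyFromX500Name(CordaX500Name.parse(partyName))
        ?: return ResponseEntity.notFound().build()
    proxy.startTrackedFlow(::Initiator, iouValue, otherParty).returnValue.getOrThrow()
    val me = proxy.nodeInfo().legalIdentities.first().name.toString()
    return ResponseEntity.status(HttpStatus.CREATED).body(IOUResponse(iouValue, me, partyName))
}

With a non-String body, Jackson handles the serialization, which is what the application/json produces type expects.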

Flink is not adding any data to Elasticsearch but no errors

Folks, I'm new to this whole data-streaming process, but I was able to build and submit a Flink job that reads some CSV data from Kafka, aggregates it, and then puts it in Elasticsearch.
I was able to do the first two parts and print my aggregation to STDOUT. But when I added the code to write to Elasticsearch, nothing seems to be happening there (no data is being added). I looked at the Flink job manager log and it looks fine (no errors) and says:
2020-03-03 16:18:03,877 INFO org.apache.flink.streaming.connectors.elasticsearch7.Elasticsearch7ApiCallBridge - Created Elasticsearch RestHighLevelClient connected to [http://elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local:9200]
Here is my code at this point:
/*
 * This Scala source file was generated by the Gradle 'init' task.
 */
package flinkNamePull

import java.time.LocalDateTime
import java.util.Properties

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer010, FlinkKafkaProducer010}
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.{DataTypes, Table}
import org.apache.flink.table.api.scala.StreamTableEnvironment
import org.apache.flink.table.descriptors.{Elasticsearch, Json, Schema}

object Demo {
  /**
   * MapFunction to generate Transfers POJOs from parsed CSV data.
   */
  class TransfersMapper extends RichMapFunction[String, Transfers] {
    @throws[Exception]
    override def open(parameters: Configuration): Unit = {
      super.open(parameters)
    }

    @throws[Exception]
    override def map(csvLine: String): Transfers = {
      var splitCsv = csvLine.stripLineEnd.split(",")
      val arrLength = splitCsv.length
      // Pad short rows out to the expected 13 columns, defaulting `los` to "0.0".
      if (arrLength != 13) {
        for (i <- arrLength + 1 to 13) {
          if (i == 13) {
            splitCsv = splitCsv :+ "0.0"
          } else {
            splitCsv = splitCsv :+ ""
          }
        }
      }
      val trans = new Transfers()
      trans.rowId = splitCsv(0)
      trans.subjectId = splitCsv(1)
      trans.hadmId = splitCsv(2)
      trans.icuStayId = splitCsv(3)
      trans.dbSource = splitCsv(4)
      trans.eventType = splitCsv(5)
      trans.prev_careUnit = splitCsv(6)
      trans.curr_careUnit = splitCsv(7)
      trans.prev_wardId = splitCsv(8)
      trans.curr_wardId = splitCsv(9)
      trans.inTime = splitCsv(10)
      trans.outTime = splitCsv(11)
      trans.los = splitCsv(12).toDouble
      trans
    }
  }
  def main(args: Array[String]) {
    // Create the streaming execution environment
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(1)

    // Set properties per the KafkaConsumer API
    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "kafka.kafka:9092")
    properties.setProperty("group.id", "test")

    // Add the Kafka source to the environment and read from the beginning of the topic
    val myKConsumer = new FlinkKafkaConsumer010[String]("raw.data3", new SimpleStringSchema(), properties)
    myKConsumer.setStartFromEarliest()
    val streamSource = env.addSource(myKConsumer)

    // Transform the CSV (with a header row per Kafka event) into a Transfers object
    val streamTransfers = streamSource.map(new TransfersMapper())

    // Create a TableEnvironment and register a Table
    val tEnv = StreamTableEnvironment.create(env)
    println("***** NEW EXECUTION STARTED AT " + LocalDateTime.now() + " *****")
    val tblTransfers: Table = tEnv.fromDataStream(streamTransfers)
    tEnv.createTemporaryView("transfers", tblTransfers)

    tEnv.connect(
      new Elasticsearch()
        .version("7")
        .host("elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local", 9200, "http") // required: one or more Elasticsearch hosts to connect to
        .index("transfers-sum")
        .documentType("_doc")
        .keyNullLiteral("n/a")
    )
      .withFormat(new Json().jsonSchema("{type: 'object', properties: {curr_careUnit: {type: 'string'}, sum: {type: 'number'}}}"))
      .withSchema(new Schema()
        .field("curr_careUnit", DataTypes.STRING())
        .field("sum", DataTypes.DOUBLE())
      )
      .inUpsertMode()
      .createTemporaryTable("transfersSum")

    val result = tEnv.sqlQuery(
      """
        |SELECT curr_careUnit, sum(los)
        |FROM transfers
        |GROUP BY curr_careUnit
        |""".stripMargin)
    result.insertInto("transfersSum")

    env.execute("Flink Streaming Demo Dump to Elasticsearch")
  }
}
I'm not sure how I can debug this beast... Wondering if somebody can help me figure out why the Flink job is not adding data to Elasticsearch :(
From my Flink cluster, I'm able to query Elasticsearch just fine (manually) and add records to my index:
curl -XPOST "http://elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local:9200/transfers-sum/_doc" -H 'Content-Type: application/json' -d'{"curr_careUnit":"TEST123","sum":"123"}'
A kind soul on the Flink mailing list pointed out that it could be Elasticsearch buffering my records... Well, it was. ;)
I have added the following options to the Elasticsearch connector:
.bulkFlushMaxActions(2)
.bulkFlushInterval(1000L)
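For context, these calls chain onto the Elasticsearch() descriptor from the job above; a sketch of the relevant part with only the two bulk-flush lines added (everything else is unchanged):

new Elasticsearch()
  .version("7")
  .host("elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local", 9200, "http")
  .index("transfers-sum")
  .documentType("_doc")
  .keyNullLiteral("n/a")
  .bulkFlushMaxActions(2)   // flush after every two buffered actions
  .bulkFlushInterval(1000L) // and at least once per second

Without these, the connector buffers actions using its defaults, so a small aggregate stream may sit in the buffer and never appear in the index.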
Flink Elasticsearch Connector 7 using Scala
Please find a working and detailed answer which I have provided here.

WebTestClient with multipart file upload

I'm building a microservice using Spring Boot + WebFlux, and I have an endpoint that accepts a multipart file upload, which works fine when I test with curl and Postman:
#PostMapping("/upload", consumes = [MULTIPART_FORM_DATA_VALUE])
fun uploadVideo(#RequestPart("video") filePart: Mono<FilePart>): Mono<UploadResult> {
log.info("Video upload request received")
return videoFilePart.flatMap { video ->
val fileName = video.filename()
log.info("Saving video to tmp directory: $fileName")
val file = temporaryFilePath(fileName).toFile()
video.transferTo(file)
.thenReturn(UploadResult(true))
.doOnError { error ->
log.error("Failed to save video to temporary directory", error)
}
.onErrorMap {
VideoUploadException("Failed to save video to temporary directory")
}
}
}
I'm now trying to test using WebTestClient:
@Test
fun shouldSuccessfullyUploadVideo() {
    client.post()
        .uri("/video/upload")
        .contentType(MULTIPART_FORM_DATA)
        .syncBody(generateBody())
        .exchange()
        .expectStatus()
        .is2xxSuccessful
}

private fun generateBody(): MultiValueMap<String, HttpEntity<*>> {
    val builder = MultipartBodyBuilder()
    builder.part("video", ClassPathResource("/videos/sunset.mp4"))
    return builder.build()
}
The endpoint is returning a 500 because I haven't created the temp directory location to write the files to. However, the test passes even though I'm checking for is2xxSuccessful. If I debug into the assertion that is2xxSuccessful performs, I can see it failing because of the 500, yet I still get a green test.
Not sure what I'm doing wrong here. The VideoUploadException that I map to simply extends ResponseStatusException:
class VideoUploadException(reason: String) : ResponseStatusException(HttpStatus.INTERNAL_SERVER_ERROR, reason)

Log Spring webflux types - Mono and Flux

I am new to Spring 5.
1) How can I log method params that are Mono and Flux types without blocking them?
2) How can I map models at the API layer to business objects at the service layer using MapStruct?
Edit 1:
I have this imperative code which I am trying to convert into reactive code. It currently has a compilation issue due to the introduction of Mono in the argument.
public Mono<UserContactsBO> getUserContacts(Mono<LoginBO> loginBOMono) {
    LOGGER.info("Get contact info for login: {}, and client: {}", loginId, clientId);
    if (StringUtils.isAllEmpty(loginId, clientId)) {
        LOGGER.error(ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getDescription());
        throw new ServiceValidationException(
                ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getErrorCode(),
                ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getDescription());
    }
    if (!loginId.equals(clientId)) {
        if (authorizationFeignClient.validateManagerClientAccess(new LoginDTO(loginId, clientId))) {
            loginId = clientId;
        } else {
            LOGGER.error(ErrorCodes.LOGIN_ID_VALIDATION_ERROR.getDescription());
            throw new AuthorizationException(
                    ErrorCodes.LOGIN_ID_VALIDATION_ERROR.getErrorCode(),
                    ErrorCodes.LOGIN_ID_VALIDATION_ERROR.getDescription());
        }
    }
    UserContactDetailEntity userContactDetail = userContactRepository.findByLoginId(loginId);
    LOGGER.debug("contact info returned from DB {}", userContactDetail);
    // MapStruct maps the entity to the BO
    return contactMapper.userEntityToUserContactBo(userContactDetail);
}
You can try something like this. If you want to add logs, you can use .map and log there. If the filters do not pass, the chain completes empty, which you can handle with switchIfEmpty:
loginBOMono
    .filter(loginBO -> !StringUtils.isAllEmpty(loginBO.loginId, loginBO.clientId))
    .filter(loginBO -> loginBO.loginId.equals(loginBO.clientId)
            || authorizationFeignClient.validateManagerClientAccess(
                    new LoginDTO(loginBO.loginId, loginBO.clientId)))
    .map(loginBO -> {
        loginBO.loginId = loginBO.clientId;
        return loginBO;
    })
    // assumes a reactive repository returning Mono<UserContactDetailEntity>
    .flatMap(loginBO -> userContactRepository.findByLoginId(loginBO.loginId))
    .map(contactMapper::userEntityToUserContactBo);
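On the logging part of the question: a minimal, non-blocking sketch (in Kotlin; the Reactor operators are identical from Java) is to attach a side-effect operator such as doOnNext, which logs each element as it is emitted without subscribing to or blocking the pipeline:

import org.slf4j.LoggerFactory
import reactor.core.publisher.Mono

object ReactiveLogging {
    private val LOGGER = LoggerFactory.getLogger(ReactiveLogging::class.java)

    // doOnNext only fires when the Mono emits, so nothing here blocks
    // or triggers an extra subscription.
    fun <T> logParam(name: String, mono: Mono<T>): Mono<T> =
        mono.doOnNext { value -> LOGGER.info("{} = {}", name, value) }
}

Reactor's built-in mono.log() is an alternative that logs every reactive signal (onNext, onError, onComplete) for debugging.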

IllegalStateException when trying to run Spark Streaming with Twitter

I am new to Spark and Scala. I am trying to run an example I found on Google, and I am encountering the following exception when running this program:
17/05/25 11:13:42 ERROR ReceiverTracker: Deregistered receiver for stream 0: Restarting receiver with delay 2000ms: Error starting Twitter stream - java.lang.IllegalStateException: Authentication credentials are missing.
Code that I am executing is as follows:
PrintTweets.scala
package example

import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.spark.streaming._
import org.apache.spark.streaming.twitter._
import org.apache.spark.streaming.StreamingContext._
import org.apache.log4j.Level
import Utilities._

object PrintTweets {
  def main(args: Array[String]) {
    // Configure Twitter credentials using twitter.txt
    setupTwitter()

    val appName = "TwitterData"
    val conf = new SparkConf()
    conf.setAppName(appName).setMaster("local[3]")
    val ssc = new StreamingContext(conf, Seconds(5))
    //val ssc = new StreamingContext("local[*]", "PrintTweets", Seconds(10))
    setupLogging()

    // Create a DStream from Twitter using our streaming context
    val tweets = TwitterUtils.createStream(ssc, None)
    // Extract the text of each status update into RDDs using map()
    val statuses = tweets.map(status => status.getText())
    statuses.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
Utilities.scala
package example

import org.apache.log4j.Level
import java.util.regex.Pattern
import java.util.regex.Matcher

object Utilities {
  /** Makes sure only ERROR messages get logged, to avoid log spam. */
  def setupLogging() = {
    import org.apache.log4j.{Level, Logger}
    val rootLogger = Logger.getRootLogger()
    rootLogger.setLevel(Level.ERROR)
  }

  /** Configures Twitter service credentials using twitter.txt in the main workspace directory. */
  def setupTwitter() = {
    import scala.io.Source
    for (line <- Source.fromFile("../twitter.txt").getLines) {
      val fields = line.split(" ")
      if (fields.length == 2) {
        System.setProperty("twitter4j.oauth." + fields(0), fields(1))
      }
    }
  }

  /** Retrieves a regex Pattern for parsing Apache access logs. */
  def apacheLogPattern(): Pattern = {
    val ddd = "\\d{1,3}"
    val ip = s"($ddd\\.$ddd\\.$ddd\\.$ddd)?"
    val client = "(\\S+)"
    val user = "(\\S+)"
    val dateTime = "(\\[.+?\\])"
    val request = "\"(.*?)\""
    val status = "(\\d{3})"
    val bytes = "(\\S+)"
    val referer = "\"(.*?)\""
    val agent = "\"(.*?)\""
    val regex = s"$ip $client $user $dateTime $request $status $bytes $referer $agent"
    Pattern.compile(regex)
  }
}
When I check using print statements, I find the exception happens at the line:
val tweets = TwitterUtils.createStream(ssc, None)
I am providing credentials in the twitter.txt file, which is read properly by the program. When I don't place twitter.txt in the appropriate directory it shows an explicit error, and it shows an explicit unauthorized-access error when I give blank values for the consumer key and secret in twitter.txt.
If you need more details about the error or the software versions involved, let me know.
Thanks,
Madhu.
I could reproduce the issue with your code, and I believe it's a twitter.txt problem: you may not have configured it properly. Your twitter.txt file should look like this:
consumerKey your_consumerKey
consumerSecret your_consumerSecret
accessToken your_accessToken
accessTokenSecret your_accessTokenSecret
I hope it helps.
After changing the twitter.txt syntax to the above (a single space between each key and its value), it worked.
