I am trying to do MDC propagation with Kamon as shown in this documentation, but it does not seem to work as described.
Play framework - 2.5
kamon-core - 0.6.2
kamon-play-25 - 0.6.2
My logback pattern:
<pattern>%d{HH:mm:ss.SSS} [%thread] [%level] [%traceToken] - %logger{36}(%L) %X{X-ApplicationId} - %message%n%xException</pattern>
I have created a filter:
class AccessLoggingFilter @Inject() (implicit val mat: Materializer, ec: ExecutionContext) extends Filter with LazyLogging {

  val ApplicationIdKey = AvailableToMdc("X-ApplicationId")

  def apply(next: (RequestHeader) => Future[Result])(request: RequestHeader): Future[Result] = {
    TraceLocal.storeForMdc("X-ApplicationId", request.id.toString)
    logger.error("first Location")
    withMdc {
      logger.error("Second location")
      next(request)
    }
  }
}
And added it like so:
class MyFilters @Inject() (accessLoggingFilter: AccessLoggingFilter) extends DefaultHttpFilters(accessLoggingFilter)
Now when I make an HTTP call to the server I get the following output:
c.v.i.utils.AccessLoggingFilter(24) - first Location
c.v.i.utils.AccessLoggingFilter(26) 1 - Second location
And none of the log statements afterwards show the '1' X-ApplicationId.
I can't figure out what I am doing wrong.
Here is an (almost) complete example:
build.sbt:
name := "kamon-play-example"
version := "1.0"
scalaVersion := "2.11.7"
val kamonVersion = "0.6.2"
val resolutionRepos = Seq("Kamon Repository Snapshots" at "http://snapshots.kamon.io")
val dependencies = Seq(
"io.kamon" %% "kamon-play-25" % kamonVersion,
"io.kamon" %% "kamon-log-reporter" % kamonVersion
)
lazy val root = (project in file(".")).enablePlugins(PlayScala)
.settings(resolvers ++= resolutionRepos)
.settings(libraryDependencies ++= dependencies)
basic filter:
class TraceLocalFilter @Inject() (implicit val mat: Materializer, ec: ExecutionContext) extends Filter {
  val logger = Logger(this.getClass)
  val TraceLocalStorageKey = "MyTraceLocalStorageKey"
  val userAgentHeader = "User-Agent"

  //this value will be available in the MDC at the moment of calling Logger.*()
  val UserAgentHeaderAvailableToMDC = AvailableToMdc(userAgentHeader)

  override def apply(next: (RequestHeader) ⇒ Future[Result])(header: RequestHeader): Future[Result] = {
    def onResult(result: Result) = {
      val traceLocalContainer = TraceLocal.retrieve(TraceLocalKey).getOrElse(TraceLocalContainer("unknown", "unknown"))
      result.withHeaders(TraceLocalStorageKey -> traceLocalContainer.traceToken)
    }

    //update the TraceLocalStorage
    TraceLocal.store(TraceLocalKey)(TraceLocalContainer(header.headers.get(TraceLocalStorageKey).getOrElse("unknown"), "unknown"))
    TraceLocal.store(UserAgentHeaderAvailableToMDC)(header.headers.get(userAgentHeader).getOrElse("unknown"))

    //call the action
    next(header).map(onResult)
  }
}
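For completeness, the snippet above references TraceLocalKey and TraceLocalContainer, which are not shown. In the Kamon docs example they are defined along these lines (a sketch only: the importantHeader field name and the exact shape of the TraceLocalKey trait are assumptions inferred from the usage above and may differ between Kamon 0.6.x releases):

case class TraceLocalContainer(traceToken: String, importantHeader: String)

// marker key used by TraceLocal.store/retrieve above; the exact trait shape is an assumption
object TraceLocalKey extends TraceLocal.TraceLocalKey {
  type ValueType = TraceLocalContainer
}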
we need to add the filter:
class Filters @Inject() (traceLocalFilter: TraceLocalFilter) extends HttpFilters {
  val filters = Seq(traceLocalFilter)
}
a really simple controller and action:
class KamonPlayExample @Inject() (kamon: Kamon) extends Controller {
  val logger = Logger(this.getClass)

  def sayHello = Action.async {
    Future {
      logger.info("Say hello to Kamon")
      Ok("Say hello to Kamon")
    }
  }
}
in logback.xml add the following pattern:
<pattern>%date{HH:mm:ss.SSS} %-5level [%traceToken][%X{User-Agent}] [%thread] %logger{55} - %msg%n</pattern>
add the sbt-aspectj-runner plugin in order to run the application with the AspectJ weaver in DEV mode:
addSbtPlugin("io.kamon" % "aspectj-play-runner" % "0.1.3")
run the application with aspectj-runner:run and make some curls:
curl -i -H 'X-Trace-Token:kamon-test' -H 'User-Agent:Super-User-Agent' -X GET "http://localhost:9000/helloKamon"
curl -i -H 'X-Trace-Token:kamon-test' -X GET "http://localhost:9000/helloKamon"
in the console:
15:09:16.027 INFO [kamon-test][Super-User-Agent] [application-akka.actor.default-dispatcher-8] controllers.KamonPlayExample - Say hello to Kamon
15:09:24.034 INFO [kamon-test][curl/7.47.1] [application-akka.actor.default-dispatcher-8] controllers.KamonPlayExample - Say hello to Kamon
Hope you can help.
I'm a beginner in Corda and I'm trying to execute flows using a Spring Boot API. When I used:
@PostMapping(value = [ "create-iou" ], produces = [ TEXT_PLAIN_VALUE ], headers = [ "Content-Type=application/x-www-form-urlencoded" ])
my flow gets executed (tested using Insomnia). But when I changed it to
@PostMapping(value = [ "create-iou" ], produces = [ APPLICATION_JSON_VALUE ], headers = [ "Content-Type=application/json" ])
it gives me a 406 Not Acceptable error: No body returned for response.
Here's the API I've created/copied:
@PostMapping(value = [ "create-iou" ], produces = [ TEXT_PLAIN_VALUE ], headers = [ "Content-Type=application/x-www-form-urlencoded" ])
fun createIOU(request: HttpServletRequest): ResponseEntity<String> {
    val iouValue = request.getParameter("iouValue").toInt()
    val partyName = request.getParameter("partyName")
        ?: return ResponseEntity.badRequest().body("Query parameter 'partyName' must not be null.\n")
    if (iouValue <= 0) {
        return ResponseEntity.badRequest().body("Query parameter 'iouValue' must be non-negative.\n")
    }
    val partyX500Name = CordaX500Name.parse(partyName)
    val otherParty = proxy.wellKnownPartyFromX500Name(partyX500Name)
        ?: return ResponseEntity.badRequest().body("Party named $partyName cannot be found.\n")
    return try {
        val signedTx = proxy.startTrackedFlow(::Initiator, iouValue, otherParty).returnValue.getOrThrow()
        ResponseEntity.status(HttpStatus.CREATED).body("Transaction id ${signedTx.id} committed to ledger.\n")
    } catch (ex: Throwable) {
        logger.error(ex.message, ex)
        ResponseEntity.badRequest().body(ex.message!!)
    }
}
I would like to return something like this:
{
iouValue: 99,
lender: PartyA,
borrower: PartyB
}
when executing the flow via the HTTP endpoint.
You need to use the RPC connection libraries provided by Corda:
import net.corda.client.rpc.CordaRPCClient
import net.corda.client.rpc.CordaRPCConnection
Take a look at this example to see how to use them.
You are not showing how your proxy is instantiated, but you need to instantiate a proxy to connect to the node via RPC, like so:
val rpcAddress = NetworkHostAndPort(host, rpcPort)
val rpcClient = CordaRPCClient(rpcAddress)
val rpcConnection = rpcClient.start(username, password)
proxy = rpcConnection.proxy
and once you have the proxy, you can create Spring Boot endpoints that call the proxy, which in turn makes the RPC calls:
@RestController
@RequestMapping("/")
class StandardController(rpc: NodeRPCConnection) {
    private val proxy = rpc.proxy

    @GetMapping(value = ["/addresses"], produces = arrayOf("text/plain"))
    private fun addresses() = proxy.nodeInfo().addresses.toString()

    @GetMapping(value = ["/identities"], produces = arrayOf("text/plain"))
    private fun identities() = proxy.nodeInfo().legalIdentities.toString()
}
Folks, I'm new to all of this data streaming, but I was able to build and submit a Flink job that reads some CSV data from Kafka, aggregates it, and then puts it in Elasticsearch.
I was able to do the first two parts and print out my aggregation to STDOUT. But when I added the code to write it to Elasticsearch, it seems nothing is happening there (no data being added). I looked at the Flink job manager log and it looks fine (no errors) and says:
2020-03-03 16:18:03,877 INFO
org.apache.flink.streaming.connectors.elasticsearch7.Elasticsearch7ApiCallBridge
- Created Elasticsearch RestHighLevelClient connected to [http://elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local:9200]
Here is my code at this point:
/*
* This Scala source file was generated by the Gradle 'init' task.
*/
package flinkNamePull
import java.time.LocalDateTime
import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer010, FlinkKafkaProducer010}
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.{DataTypes, Table}
import org.apache.flink.table.api.scala.StreamTableEnvironment
import org.apache.flink.table.descriptors.{Elasticsearch, Json, Schema}
object Demo {
  /**
   * MapFunction to generate Transfers POJOs from parsed CSV data.
   */
  class TransfersMapper extends RichMapFunction[String, Transfers] {
    private var formatter = null

    @throws[Exception]
    override def open(parameters: Configuration): Unit = {
      super.open(parameters)
      //formatter = DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss")
    }

    @throws[Exception]
    override def map(csvLine: String): Transfers = {
      //var splitCsv = csvLine.stripLineEnd.split("\n")(1).split(",")
      var splitCsv = csvLine.stripLineEnd.split(",")
      val arrLength = splitCsv.length
      val i = 0
      if (arrLength != 13) {
        for (i <- arrLength + 1 to 13) {
          if (i == 13) {
            splitCsv = splitCsv :+ "0.0"
          } else {
            splitCsv = splitCsv :+ ""
          }
        }
      }
      var trans = new Transfers()
      trans.rowId = splitCsv(0)
      trans.subjectId = splitCsv(1)
      trans.hadmId = splitCsv(2)
      trans.icuStayId = splitCsv(3)
      trans.dbSource = splitCsv(4)
      trans.eventType = splitCsv(5)
      trans.prev_careUnit = splitCsv(6)
      trans.curr_careUnit = splitCsv(7)
      trans.prev_wardId = splitCsv(8)
      trans.curr_wardId = splitCsv(9)
      trans.inTime = splitCsv(10)
      trans.outTime = splitCsv(11)
      trans.los = splitCsv(12).toDouble
      return trans
    }
  }
  def main(args: Array[String]) {
    // Create streaming execution environment
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(1)

    // Set properties per KafkaConsumer API
    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "kafka.kafka:9092")
    properties.setProperty("group.id", "test")

    // Add Kafka source to environment
    val myKConsumer = new FlinkKafkaConsumer010[String]("raw.data3", new SimpleStringSchema(), properties)
    // Read from beginning of topic
    myKConsumer.setStartFromEarliest()

    val streamSource = env
      .addSource(myKConsumer)

    // Transform CSV (with a header row per Kafka event) into a Transfers object
    val streamTransfers = streamSource.map(new TransfersMapper())

    // create a TableEnvironment
    val tEnv = StreamTableEnvironment.create(env)
    println("***** NEW EXECUTION STARTED AT " + LocalDateTime.now() + " *****")

    // register a Table
    val tblTransfers: Table = tEnv.fromDataStream(streamTransfers)
    tEnv.createTemporaryView("transfers", tblTransfers)

    tEnv.connect(
      new Elasticsearch()
        .version("7")
        .host("elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local", 9200, "http") // required: one or more Elasticsearch hosts to connect to
        .index("transfers-sum")
        .documentType("_doc")
        .keyNullLiteral("n/a")
    )
      .withFormat(new Json().jsonSchema("{type: 'object', properties: {curr_careUnit: {type: 'string'}, sum: {type: 'number'}}}"))
      .withSchema(new Schema()
        .field("curr_careUnit", DataTypes.STRING())
        .field("sum", DataTypes.DOUBLE())
      )
      .inUpsertMode()
      .createTemporaryTable("transfersSum")

    val result = tEnv.sqlQuery(
      """
        |SELECT curr_careUnit, sum(los)
        |FROM transfers
        |GROUP BY curr_careUnit
        |""".stripMargin)
    result.insertInto("transfersSum")

    // Elasticsearch elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local:9200
    env.execute("Flink Streaming Demo Dump to Elasticsearch")
  }
}
I'm not sure how I can debug this beast... Wondering if somebody can help me figure out why the Flink job is not adding data to Elasticsearch :(
From my Flink cluster, I'm able to query Elasticsearch just fine (manually) and add records to my index:
curl -XPOST "http://elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local:9200/transfers-sum/_doc" -H 'Content-Type: application/json' -d'{"curr_careUnit":"TEST123","sum":"123"}'
A kind soul on the Flink mailing list pointed out that it could be Elasticsearch buffering my records... Well, it was. ;)
I have added the following options to the Elasticsearch connector:
.bulkFlushMaxActions(2)
.bulkFlushInterval(1000L)
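For context, this is roughly how those two options slot into the Elasticsearch descriptor from the question (only the two bulk-flush lines are new; everything else is unchanged):

tEnv.connect(
  new Elasticsearch()
    .version("7")
    .host("elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local", 9200, "http")
    .index("transfers-sum")
    .documentType("_doc")
    .keyNullLiteral("n/a")
    .bulkFlushMaxActions(2)   // flush after every two buffered actions
    .bulkFlushInterval(1000L) // and at least once per second
)
  .withFormat(new Json().jsonSchema("{type: 'object', properties: {curr_careUnit: {type: 'string'}, sum: {type: 'number'}}}"))
  .withSchema(new Schema()
    .field("curr_careUnit", DataTypes.STRING())
    .field("sum", DataTypes.DOUBLE())
  )
  .inUpsertMode()
  .createTemporaryTable("transfersSum")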
Flink Elasticsearch Connector 7 using Scala
Please find a working and detailed answer which I have provided here.
In my Grails 4 app, log.info("log message") doesn't show the log message, but log.error("log message") does.
How do I change the log level from error to info in Grails 4?
Option 1
All I needed to do was update the application.yml file and add the following at the bottom:
logging:
    level:
        root: INFO
You can also set the log level for a single package:
logging:
    level:
        packageName: INFO
Option 2
Since Grails 4 is based on Spring Boot, I ended up just setting the appropriate environment variable, i.e. logging.level.root=INFO or logging.level.com.mycompany.mypackage=INFO, which I did in IntelliJ by editing my run configuration (see below). This way, I can set the logging level differently when I deploy it.
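For reference, the same settings can also be supplied as plain environment variables thanks to Spring Boot's relaxed binding (a sketch; the package name is just the placeholder from the example above):

LOGGING_LEVEL_ROOT=INFO
LOGGING_LEVEL_COM_MYCOMPANY_MYPACKAGE=INFO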
You want to edit grails-app/conf/logback.groovy. Below is what the default file looks like for Grails 4.0.1.
import grails.util.BuildSettings
import grails.util.Environment
import org.springframework.boot.logging.logback.ColorConverter
import org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter
import java.nio.charset.StandardCharsets
conversionRule 'clr', ColorConverter
conversionRule 'wex', WhitespaceThrowableProxyConverter
// See http://logback.qos.ch/manual/groovy.html for details on configuration
appender('STDOUT', ConsoleAppender) {
    encoder(PatternLayoutEncoder) {
        charset = StandardCharsets.UTF_8
        pattern =
                '%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} ' + // Date
                '%clr(%5p) ' + // Log level
                '%clr(---){faint} %clr([%15.15t]){faint} ' + // Thread
                '%clr(%-40.40logger{39}){cyan} %clr(:){faint} ' + // Logger
                '%m%n%wex' // Message
    }
}

def targetDir = BuildSettings.TARGET_DIR
if (Environment.isDevelopmentMode() && targetDir != null) {
    appender("FULL_STACKTRACE", FileAppender) {
        file = "${targetDir}/stacktrace.log"
        append = true
        encoder(PatternLayoutEncoder) {
            charset = StandardCharsets.UTF_8
            pattern = "%level %logger - %msg%n"
        }
    }
    logger("StackTrace", ERROR, ['FULL_STACKTRACE'], false)
}

root(ERROR, ['STDOUT'])
The specific change depends on what you really want to do. For example, if you have a controller named demo.SomeController and you want to set its log level to INFO, you could add something like this:
logger 'demo.SomeController', INFO, ['STDOUT'], false
See http://logback.qos.ch/manual/groovy.html for the full config reference.
I hope that helps.
Simple Way:
Update/replace your grails-app/conf/logback.groovy with the following code:
import ch.qos.logback.classic.encoder.PatternLayoutEncoder
appender("FILE", RollingFileAppender) {
    file = "logs/FILE-NAME.log"
    rollingPolicy(TimeBasedRollingPolicy) {
        fileNamePattern = "logs/FILE-NAME-%d{yyyy-MM-dd}.log"
        maxHistory = 30
    }
    encoder(PatternLayoutEncoder) {
        pattern = "%d{HH:mm:ss.SSS} %-4relative [%thread] %-5level %logger{35} - %msg%n"
    }
}

root(INFO, ["FILE"])
The above solution sets the logger level to INFO.
You can refer to this for more details and all log levels.
Hope this helps you.
I have a log filter that logs out essential request information for debugging and log analytics. But as you can see, the text payload is really hard to read.
I don't want to have to copy and paste this text payload into a text editor every single time. Is there a way to make Stackdriver print this as collapsible JSON instead?
More info:
- GKE pod
@Component
class LogFilter : WebFilter {
    private val logger = LoggerFactory.getLogger(LogFilter::class.java)

    override fun filter(exchange: ServerWebExchange, chain: WebFilterChain): Mono<Void> {
        return chain
            .filter(exchange)
            .doAfterTerminate {
                val request = exchange.request
                val path = request.uri.path
                val routesToExclude = listOf("actuator")
                var isExcludedRoute = false
                for (r in routesToExclude) { if (path.contains(r)) { isExcludedRoute = true; break; } }
                if (!isExcludedRoute) {
                    val startTime = System.currentTimeMillis()
                    val statusCode = exchange.response.statusCode?.value()
                    val requestTime = System.currentTimeMillis() - startTime
                    val msg = "Served $path as $statusCode in $requestTime msec"
                    val requestPrintMap = mutableMapOf<Any, Any>()
                    requestPrintMap["method"] = if (request.method != null) {
                        request.method.toString()
                    } else "UNKNOWN"
                    requestPrintMap["path"] = path.toString()
                    requestPrintMap["query_params"] = request.queryParams
                    requestPrintMap["headers"] = request.headers
                    requestPrintMap["status_code"] = statusCode.toString()
                    requestPrintMap["request_time"] = requestTime
                    requestPrintMap["msg"] = msg
                    logger.info(JSONObject(requestPrintMap).toString())
                }
            }
    }
}
What you will need to do is customize Fluentd in GKE. Essentially, this means creating a Fluentd daemonset for logging instead of using the default logging method.
Once that is done, you can set up structured logging to send jsonPayload logs to Stackdriver Logging.
The default Stackdriver logging agent configuration for Kubernetes will detect single-line JSON and convert it to jsonPayload. You can configure Spring to log as single-line JSON (e.g., via JsonLayout [1]) and let the logging agent pick up the JSON object (see https://cloud.google.com/logging/docs/agent/configuration#process-payload).
[1] Some of the JSON field names are different (e.g., JsonLayout uses "level" for the log level, while the Stackdriver logging agent recognizes "severity"), so you may have to override addCustomDataToJsonMap to fully control the resulting log entries.
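For illustration, a minimal logback appender using JsonLayout could look roughly like this (a sketch; it assumes the logback-json-classic and logback-jackson dependencies are on the classpath):

<appender name="CONSOLE_JSON" class="ch.qos.logback.core.ConsoleAppender">
  <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
    <layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
      <jsonFormatter class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
        <!-- keep each event on a single line so the agent can parse it -->
        <prettyPrint>false</prettyPrint>
      </jsonFormatter>
      <appendLineSeparator>true</appendLineSeparator>
    </layout>
  </encoder>
</appender>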
See also GKE & Stackdriver: Java logback logging format?
I work with Swagger 2.0 to define a back end and am trying to define security.
I end up with:
---
swagger: "2.0"
info:
  version: 1.0.0
  title: Foo test
schemes:
  - https
paths:
  /foo:
    get:
      security:
        - Foo: []
      responses:
        200:
          description: Ok
securityDefinitions:
  Foo:
    type: apiKey
    name: X-BAR
    in: header
Everything is good till now; the Java codegen gives me:
@ApiOperation(value = "", nickname = "fooGet", notes = "", authorizations = {
    @Authorization(value = "Foo")
}, tags={ })
@ApiResponses(value = {
    @ApiResponse(code = 200, message = "Ok") })
@RequestMapping(value = "/foo",
    method = RequestMethod.GET)
default ResponseEntity<Void> fooGet() {
    if(getObjectMapper().isPresent() && getAcceptHeader().isPresent()) {
    } else {
        log.warn("ObjectMapper or HttpServletRequest not configured in default FooApi interface so no example is generated");
    }
    return new ResponseEntity<>(HttpStatus.NOT_IMPLEMENTED);
}
I'm wondering how to "properly" retrieve the X-BAR header value in the interface.
I ended up with:
@Autowired
private HttpServletRequest httpServletRequest;

httpServletRequest.getHeader("X-BAR")
which works, but is there a more proper way?
Define a class "Foo"? A doFilter?
Thanks