How to create a new chat room with Play Framework WebSocket?

I tried the chat example with WebSocket in Play Framework 2.6.x, and it works fine. Now, for the real application, I need to create multiple chat rooms based on user requests, and users will be able to access different chat rooms with an id or something. I think it is related to creating a new Flow for each room. The related code is here:
private val (chatSink, chatSource) = {
  val source = MergeHub.source[WSMessage]
    .log("source")
    .map { msg =>
      try {
        val json = Json.parse(msg)
        inputSanitizer.sanText((json \ "msg").as[String])
      } catch {
        case e: Exception =>
          println(">>" + msg)
          "Malfunction client"
      }
    }
    .recoverWithRetries(-1, { case _: Exception => Source.empty })

  val sink = BroadcastHub.sink[WSMessage]
  source.toMat(sink)(Keep.both).run()
}

private val userFlow: Flow[WSMessage, WSMessage, _] = {
  Flow.fromSinkAndSource(chatSink, chatSource)
}
But I really don't know how to create a new Flow with an id and access it later. Can anyone help me with this?

I finally figured it out. I'm posting the solution here in case anyone has a similar problem.
My solution is to use the AsyncCacheApi to store Flows in the cache, keyed by room id, and to generate a new Flow when necessary instead of creating just one Sink and Source:
val chatRoom = cache.get[Flow[WSMessage, WSMessage, _]](s"id=$id")
chatRoom.map { room =>
  val flow = if (room.nonEmpty) room.get else createNewFlow
  cache.set(s"id=$id", flow)
  Right(flow)
}

def createNewFlow: Flow[WSMessage, WSMessage, _] = {
  val (chatSink, chatSource) = {
    val source = MergeHub.source[WSMessage]
      .map { msg =>
        try {
          inputSanitizer.sanitize(msg)
        } catch {
          case e: Exception =>
            println(">>" + msg)
            "Malfunction client"
        }
      }
      .recoverWithRetries(-1, { case _: Exception => Source.empty })

    val sink = BroadcastHub.sink[WSMessage]
    source.toMat(sink)(Keep.both).run()
  }
  Flow.fromSinkAndSource(chatSink, chatSource)
}

Related

Okio Throttler integration with OkHttp

My team is suffering from this issue with the Slack integration for uploading files, so, following the comments in that issue, I would like to throttle the requests in our Kotlin implementation.
I am trying to integrate Okio Throttler within an OkHttp interceptor, so I have this setup:
val client = OkHttpClient.Builder()
    .retryOnConnectionFailure(false)
    .addInterceptor { chain ->
        val request = chain.request()
        val originalRequestBody = request.body
        val newRequest = if (originalRequestBody != null) {
            val wrappedRequestBody = ThrottledRequestBody(originalRequestBody)
            request.newBuilder()
                .method(request.method, wrappedRequestBody)
                .build()
        } else {
            request
        }
        chain.proceed(newRequest)
    }
    .build()

class ThrottledRequestBody(private val delegate: RequestBody) : RequestBody() {

    private val throttler = Throttler().apply {
        bytesPerSecond(1024, 1024 * 4, 1024 * 8)
    }

    override fun contentType(): MediaType? {
        return delegate.contentType()
    }

    override fun writeTo(sink: BufferedSink) {
        delegate.writeTo(throttler.sink(sink).buffer())
    }
}
It seems throttler.sink returns a Sink, but a BufferedSink is required by delegate.writeTo, so I called buffer() to get that BufferedSink.
Am I doing it wrong? Is the call to .buffer() breaking the integration?
It's almost perfect. You just need to flush the buffer when you're done, otherwise it will finish with a few bytes still sitting inside.
override fun writeTo(sink: BufferedSink) {
    throttler.sink(sink).buffer().use {
        delegate.writeTo(it)
    }
}
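
For reference, an equivalent variant (my sketch, not from the original answer) that flushes the buffered, throttled sink explicitly instead of relying on use { } to close it; flush() pushes any bytes still sitting in the buffer through the throttler to the wrapped sink:
override fun writeTo(sink: BufferedSink) {
    // Buffer the throttled sink, let the delegate write through it,
    // then flush so nothing is left behind in the buffer.
    val throttledSink = throttler.sink(sink).buffer()
    delegate.writeTo(throttledSink)
    throttledSink.flush()
}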

Capturing ElasticsearchSink Exceptions in Flink

I've recently been encountering some issues that I've noticed in the logs of my Flink job that handles writing to an Elasticsearch index. I was hoping to leverage some of the metrics that Flink exposes (or piggyback on them) to update metric counters when I encounter specific kinds of errors.
val builder = ElasticsearchSink.Builder(...)
builder.setFailureHandler { actionRequest, throwable, _, _ ->
    // Log error here (and update metrics via metricGroup.counter(...))
}
return builder.build()
Currently, I don't have any "context" when the setFailureHandler callback fires, and while I can log the failure, ideally I'd like to expose a metric to track how frequently this is occurring:
builder.setFailureHandler { actionRequest, throwable, _, _ ->
    elasticExceptionsCounter.inc()
}
One additional wrinkle here is that my specific scenario relies on dynamically creating and handling these sinks via a router like the following:
class DynamicElasticsearchSink<ElementT, RouteT, SinkT : ElasticsearchSinkBase<ElementT, out AutoCloseable>>(
    private val sinkRouter: ElasticsearchSinkRouter<ElementT, RouteT, SinkT>
) : RichSinkFunction<ElementT>(), CheckpointedFunction {

    // Store a reference to all of the current routes
    private val sinkRoutes: MutableMap<RouteT, SinkT> = ConcurrentHashMap()
    private lateinit var configuration: Configuration

    override fun open(parameters: Configuration) {
        configuration = parameters
    }

    override fun invoke(value: ElementT, context: SinkFunction.Context) {
        val route = sinkRouter.getRoute(value)
        var sink = sinkRoutes[route]
        if (sink == null) {
            // Build a new sink for this key and cache it for later use based on incoming records
            sink = sinkRouter.createSink(route, value)
            sink.runtimeContext = runtimeContext
            sink.open(configuration)
            sinkRoutes[route] = sink
        }
        sink.invoke(value, context)
    }

    // Omitted for brevity
}
and the sinkRouter.createSink() looks like the following:
override fun createSink(cacheKey: String, element: JsonObject): ElasticsearchSink<JsonObject> {
    return buildSinkFromRoute(element)
}

private fun buildSinkFromRoute(element: JsonObject): ElasticsearchSink<JsonObject> {
    val builder = ElasticsearchSink.Builder(
        buildHostsFromElement(element),
        ElasticsearchRoutingFunction()
    )

    // Various configuration omitted for brevity

    builder.setFailureHandler { actionRequest, throwable, _, _ ->
        // Here's where I'd like to capture the failures and record them as metrics
    }

    return builder.build()
}
Is there a way to support this currently, or what options are available for handling this?
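
One direction that might work (a rough sketch under assumptions, not from the original post): because the routed sinks are built at runtime inside invoke(), the lambda passed to setFailureHandler can capture a live Flink Counter created from the wrapper's runtime context, for example by extending the router's createSink signature. The extra failureCounter parameter below is purely illustrative:
// In DynamicElasticsearchSink: create the counter once the wrapper sink is opened.
private lateinit var elasticFailureCounter: Counter // org.apache.flink.metrics.Counter

override fun open(parameters: Configuration) {
    configuration = parameters
    elasticFailureCounter = runtimeContext.metricGroup.counter("elasticsearchFailures")
}

// In invoke(), hand the counter to the router when building a new sink:
// sink = sinkRouter.createSink(route, value, elasticFailureCounter)

// In the router, let the failure handler capture the counter. The sink is
// constructed after deployment, so this handler is never serialized with the
// job graph and can hold a live metric reference.
fun createSink(cacheKey: String, element: JsonObject, failureCounter: Counter): ElasticsearchSink<JsonObject> {
    val builder = ElasticsearchSink.Builder(
        buildHostsFromElement(element),
        ElasticsearchRoutingFunction()
    )
    builder.setFailureHandler { actionRequest, throwable, _, _ ->
        failureCounter.inc()
    }
    return builder.build()
}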

Springboot Kickstart GraphQL - Metrics for number of requests per resolver

I am currently trying, in Spring Boot GraphQL Kickstart, to track the number of times each resolver method is called. To be more specific, I want to know exactly how many times the methods of my GraphQLResolver<T> are called. This would have two uses:
Track if the deprecated resolvers are still used
Know which fields are the most used, in order to optimize the database queries for those
To do so, I implemented a really weird and not-so-clean way using schema directive wiring.
@Component
class ResolverUsageCountInstrumentation(
    private val meterRegistry: MeterRegistry
) : SchemaDirectiveWiring {

    private val callsRecordingMap = ConcurrentHashMap<String, Int>()

    override fun onField(environment: SchemaDirectiveWiringEnvironment<GraphQLFieldDefinition>): GraphQLFieldDefinition {
        val fieldContainer = environment.fieldsContainer
        val fieldDefinition = environment.fieldDefinition
        val currentDF = environment.codeRegistry.getDataFetcher(fieldContainer, fieldDefinition)
        if (currentDF.javaClass.name != "graphql.kickstart.tools.resolver.MethodFieldResolverDataFetcher") {
            return fieldDefinition
        }
        val signature = getMethodSignature(currentDF)
        callsRecordingMap[signature] = 0
        val newDF = DataFetcherFactories.wrapDataFetcher(currentDF) { dfe: DataFetchingEnvironment, value: Any? ->
            callsRecordingMap.computeIfPresent(signature) { _, current: Int -> current + 1 }
            value
        }
        environment.codeRegistry.dataFetcher(fieldContainer, fieldDefinition, newDF)
        return fieldDefinition
    }

    private fun getMethodSignature(currentDF: DataFetcher<*>): String {
        val method = getFieldVal(currentDF, "resolverMethod", true) as Method // nonapi.io.github.classgraph.utils.ReflectionUtils
        return "${method.declaringClass.name}#${method.name}"
    }
}
This technique does the job, but it has the big disadvantage of not working if the data fetcher is wrapped, and it's not really clean at all. I'm wondering, would there be a better way to do this?
Thank you!
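
One alternative that might be cleaner (a sketch on my part, not verified against GraphQL Kickstart): graphql-java's Instrumentation hook wraps every data fetcher at execution time, so the count survives even when the resolver's data fetcher is wrapped. Roughly, assuming Micrometer, a graphql-java version where instrumentDataFetcher still has the two-argument form, and that the Kickstart starter picks up Instrumentation beans:
@Component
class ResolverCallCountInstrumentation(
    private val meterRegistry: MeterRegistry
) : SimpleInstrumentation() {

    override fun instrumentDataFetcher(
        dataFetcher: DataFetcher<*>,
        parameters: InstrumentationFieldFetchParameters
    ): DataFetcher<*> {
        // Tag the counter with the coordinates of the field being fetched, e.g. "Query.users".
        val parent = GraphQLTypeUtil.simplePrint(parameters.environment.parentType)
        val coordinates = "$parent.${parameters.environment.fieldDefinition.name}"
        return DataFetcher<Any?> { env ->
            meterRegistry.counter("graphql.resolver.calls", "field", coordinates).increment()
            dataFetcher.get(env)
        }
    }
}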

In Spring Webflux how to go from an `OutputStream` to a `Flux<DataBuffer>`?

I'm building a tarball dynamically, and would like to stream it back directly, which should be 100% possible with a .tar.gz.
The code below is the closest thing I could get to a DataBuffer, through lots of googling. Basically, I need something that implements an OutputStream and provides, or publishes, to a Flux<DataBuffer> so that I can return that from my method and have streaming output, instead of buffering the entire tarball in RAM (which I'm pretty sure is what is happening here). I'm using Apache Commons Compress, which has a wonderful API, but it's all OutputStream based.
I suppose another way to do it would be to write directly to the response, but I don't think that would be properly reactive? I'm not sure how to get an OutputStream out of some sort of response object either.
This is Kotlin, by the way, on Spring Boot 2.0.
#GetMapping("/cookbook.tar.gz", "/cookbook")
fun getCookbook(): Mono<DefaultDataBuffer> {
log.info("Creating tarball of cookbooks: ${soloConfig.cookbookPaths}")
val transformation = Mono.just(soloConfig.cookbookPaths.stream()
.toList()
.flatMap {
Files.walk(Paths.get(it)).map(Path::toFile).toList()
})
.map { files ->
//Will make one giant databuffer... but oh well? TODO: maybe use some kind of chunking.
val buffer = DefaultDataBufferFactory().allocateBuffer()
val outputBufferStream = buffer.asOutputStream()
//Transform my list of stuff into an archiveOutputStream
TarArchiveOutputStream(GzipCompressorOutputStream(outputBufferStream)).use { taos ->
taos.setLongFileMode(TarArchiveOutputStream.LONGFILE_GNU)
log.info("files to compress: ${files}")
for (file in files) {
if (file.isFile) {
val entry = "cookbooks/" + file.name
log.info("Adding ${entry} to tarball")
taos.putArchiveEntry(TarArchiveEntry(file, entry))
FileInputStream(file).use { fis ->
fis.copyTo(taos) //Copy that stuff!
}
taos.closeArchiveEntry()
}
}
}
buffer
}
return transformation
}
I puzzled through this and have an effective solution: you implement an OutputStream, take those bytes, and publish them into a stream. Be sure to override close() and send an onComplete. Works great!
@RestController
class SoloController(
    val soloConfig: SoloConfig
) {
    val log = KotlinLogging.logger { }

    @GetMapping("/cookbooks.tar.gz", "/cookbooks")
    fun streamCookbook(serverHttpResponse: ServerHttpResponse): Flux<DataBuffer> {
        log.info("Creating tarball of cookbooks: ${soloConfig.cookbookPaths}")

        val publishingOutputStream = PublishingOutputStream(serverHttpResponse.bufferFactory())

        // Needs to set up cookbook path as a parent directory, and then do `cookbooks/$cookbook_path/<all files>` for each cookbook path given
        Flux.just(soloConfig.cookbookPaths.stream().toList())
            .doOnNext { paths ->
                // Transform my list of stuff into an archiveOutputStream
                TarArchiveOutputStream(GzipCompressorOutputStream(publishingOutputStream)).use { taos ->
                    taos.setLongFileMode(TarArchiveOutputStream.LONGFILE_GNU)
                    paths.forEach { cookbookDir ->
                        if (Paths.get(cookbookDir).toFile().isDirectory) {
                            val cookbookDirFile = Paths.get(cookbookDir).toFile()
                            val directoryName = cookbookDirFile.name
                            val entryStart = "cookbooks/${directoryName}"
                            val files = Files.walk(cookbookDirFile.toPath()).map(Path::toFile).toList()
                            log.info("${files.size} files to compress")
                            for (file in files) {
                                if (file.isFile) {
                                    val relativePath = file.toRelativeString(cookbookDirFile)
                                    val entry = "$entryStart/$relativePath"
                                    taos.putArchiveEntry(TarArchiveEntry(file, entry))
                                    FileInputStream(file).use { fis ->
                                        fis.copyTo(taos) // Copy that stuff!
                                    }
                                    taos.closeArchiveEntry()
                                }
                            }
                        }
                    }
                }
            }
            .subscribeOn(Schedulers.parallel())
            .doOnComplete {
                publishingOutputStream.close()
            }
            .subscribe()

        return publishingOutputStream.publisher
    }

    class PublishingOutputStream(bufferFactory: DataBufferFactory) : OutputStream() {

        val publisher: UnicastProcessor<DataBuffer> = UnicastProcessor.create(Queues.unbounded<DataBuffer>().get())
        private val bufferPublisher: UnicastProcessor<Byte> = UnicastProcessor.create(Queues.unbounded<Byte>().get())

        init {
            bufferPublisher
                .bufferTimeout(4096, Duration.ofMillis(100))
                .doOnNext { intList ->
                    val buffer = bufferFactory.allocateBuffer(intList.size)
                    buffer.write(intList.toByteArray())
                    publisher.onNext(buffer)
                }
                .doOnComplete {
                    publisher.onComplete()
                }
                .subscribeOn(Schedulers.newSingle("publisherThread"))
                .subscribe()
        }

        override fun write(b: Int) {
            bufferPublisher.onNext(b.toByte())
        }

        override fun close() {
            bufferPublisher.onComplete() // which should trigger the clean up of the whole thing
        }
    }
}
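
As a side note, UnicastProcessor is deprecated in newer Reactor releases (3.4+). Here is a rough sketch of the same OutputStream-to-publisher idea using the Sinks API instead, emitting chunked DataBuffers rather than going byte by byte (the class and field names here are mine, not from the original answer):
// Assumed imports: reactor.core.publisher.Flux, reactor.core.publisher.Sinks,
// org.springframework.core.io.buffer.DataBuffer, org.springframework.core.io.buffer.DataBufferFactory,
// java.io.ByteArrayOutputStream, java.io.OutputStream
class SinkBackedOutputStream(private val bufferFactory: DataBufferFactory) : OutputStream() {

    private val sink: Sinks.Many<DataBuffer> = Sinks.many().unicast().onBackpressureBuffer()
    val publisher: Flux<DataBuffer> = sink.asFlux()
    private val chunk = ByteArrayOutputStream()

    @Synchronized
    override fun write(b: Int) {
        chunk.write(b)
        if (chunk.size() >= 8192) emitChunk()
    }

    @Synchronized
    override fun flush() = emitChunk()

    @Synchronized
    override fun close() {
        emitChunk()
        sink.tryEmitComplete()
    }

    // Wrap whatever has been buffered so far into a DataBuffer and emit it downstream.
    private fun emitChunk() {
        if (chunk.size() > 0) {
            val buffer = bufferFactory.allocateBuffer(chunk.size())
            buffer.write(chunk.toByteArray())
            sink.tryEmitNext(buffer)
            chunk.reset()
        }
    }
}
In practice you would probably also override write(ByteArray, Int, Int) so the archive stream does not go through the single-byte path on every write.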

Events not firing? Using java socket.io client & netty-socketio on server

I know the client and server are connecting because my connect/disconnect events are firing. However, my custom events are not. I am using the socket.io Java client and netty-socketio on the server. I usually use the socket.io JavaScript library, which works seamlessly, so I am a bit lost as to why this is happening. I am writing this in Kotlin.
Client-Side
fun connectToServer(ipAddress: String) {
    socket = IO.socket("$ipAddress")
    socket!!.on(Socket.EVENT_CONNECT) { obj ->
        println("Connected To Server!!!")
    }.on(EventNames.signOn) { obj ->
        println(EventNames.signOn)
        // cast value to string from server, hope for encrypted password
        val encryptedPassword = obj[0] as String
        when (encryptedPassword) {
            "no user" -> {
            }
            else -> {
                val result = encryptedPassword!!.split("OR")
                val isMatch = passwordTextField.text == dataProcessing.Encryption3().decryptValue("decrypt", result[0], result[1])
                if (isMatch) {
                }
            }
        }
        println("Encrypted Password: " + encryptedPassword)
    }

    // socket!!.on(Socket.EVENT_DISCONNECT, object : Emitter.Listener {
    //     override fun call(vararg args: Any) {}
    // })

    socket!!.connect()
    // socket!!.open()
    // socket!!.emit(Socket.EVENT_CONNECT, "Hello!")
    socket!!.send("hey")
    socket!!.emit(EventNames.requestClientSignOn, usernameTextField.text)
}
Server-Side
@Throws(InterruptedException::class, UnsupportedEncodingException::class)
fun server() {
    val config = Configuration()
    config.setHostname("localhost")
    config.setPort(PORT)
    server = SocketIOServer(config)

    server!!.addConnectListener {
        println("Hello World!")
    }
    server!!.addEventListener(EventNames.requestClientSignOn, String::class.java) { client, data, ackRequest ->
        println("Hello from requestClientSignOn..")
    }
    server!!.addDisconnectListener {
        println("Client Disconnecting...")
    }
    server!!.addConnectListener {
        println("client connected!! client: $it")
    }

    server!!.start()
}
You cannot use lambda expressions in your event listeners when using netty-socketio on the server.
Using a traditional listener object (DataListener) solves this problem. I also converted the server to Kotlin, as it was easier to use the demo project as a reference (a Kotlin version of the listener is sketched after the Java snippet below).
server.addEventListener(EventNames.requestClientSignOn, String.class, new DataListener<String>() {
    @Override
    public void onData(SocketIOClient client, String username, AckRequest ackRequest) {
        String isEncryptedPassword = new KOTS_EmployeeManager().getKOTS_User(KOTS_EmployeeManager.kotsUserType.CLIENT, username);
        if (isEncryptedPassword != null) {
            // send back ack with encrypted password
            ackRequest.sendAckData(isEncryptedPassword);
        } else {
            // send back ack with no user string
            ackRequest.sendAckData("no user");
        }
    }
});
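
Since the answer mentions converting the server to Kotlin but the snippet above is Java, the same listener written in Kotlin would look roughly like this (an explicit DataListener object rather than a lambda, per the point above; an untested sketch):
server!!.addEventListener(EventNames.requestClientSignOn, String::class.java, object : DataListener<String> {
    override fun onData(client: SocketIOClient, username: String, ackRequest: AckRequest) {
        val encryptedPassword = KOTS_EmployeeManager()
            .getKOTS_User(KOTS_EmployeeManager.kotsUserType.CLIENT, username)
        if (encryptedPassword != null) {
            // send back ack with encrypted password
            ackRequest.sendAckData(encryptedPassword)
        } else {
            // send back ack with the "no user" string
            ackRequest.sendAckData("no user")
        }
    }
})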
