Events not firing? Using Java socket.io client & netty-socketio on server

I know the client and server are connecting because my connect/disconnect events are firing. However, my custom events are not. I am using the socket.io Java client with netty-socketio on the server. I usually use the socket.io JavaScript library, which works seamlessly, so I am a bit lost as to why this is happening. I am writing this in Kotlin.
Client-Side
fun connectToServer(ipAddress: String) {
    socket = IO.socket(ipAddress)
    socket!!.on(Socket.EVENT_CONNECT) { obj ->
        println("Connected To Server!!!")
    }.on(EventNames.signOn) { obj ->
        println(EventNames.signOn)
        // cast value from server to String, hope for an encrypted password
        val encryptedPassword = obj[0] as String
        when (encryptedPassword) {
            "no user" -> {
            }
            else -> {
                val result = encryptedPassword.split("OR")
                val isMatch = passwordTextField.text ==
                    dataProcessing.Encryption3().decryptValue("decrypt", result[0], result[1])
                if (isMatch) {
                }
            }
        }
        println("Encrypted Password: $encryptedPassword")
    }
    // socket!!.on(Socket.EVENT_DISCONNECT, object : Emitter.Listener {
    //     override fun call(vararg args: Any) {}
    // })
    socket!!.connect()
    // socket!!.open()
    // socket!!.emit(Socket.EVENT_CONNECT, "Hello!")
    socket!!.send("hey")
    socket!!.emit(EventNames.requestClientSignOn, usernameTextField.text)
}
Server-Side
@Throws(InterruptedException::class, UnsupportedEncodingException::class)
fun server() {
    val config = Configuration()
    config.setHostname("localhost")
    config.setPort(PORT)
    server = SocketIOServer(config)
    server!!.addConnectListener {
        println("Hello World!")
    }
    server!!.addEventListener(EventNames.requestClientSignOn, String::class.java) { client, data, ackRequest ->
        println("Hello from requestClientSignOn..")
    }
    server!!.addDisconnectListener {
        println("Client Disconnecting...")
    }
    server!!.addConnectListener {
        println("client connected!! client: $it")
    }
    server!!.start()
}

You cannot use lambda expressions in your event listeners when using netty-socketio on the server.
Using an explicit DataListener implementation solves this problem. I also converted the server code to Java, as that made it easier to use the demo project as a reference.
server.addEventListener(EventNames.requestClientSignOn, String.class, new DataListener<String>() {
    @Override
    public void onData(SocketIOClient client, String username, AckRequest ackRequest) {
        String isEncryptedPassword = new KOTS_EmployeeManager()
                .getKOTS_User(KOTS_EmployeeManager.kotsUserType.CLIENT, username);
        if (isEncryptedPassword != null) {
            // send back ack with encrypted password
            ackRequest.sendAckData(isEncryptedPassword);
        } else {
            // send back ack with no user string
            ackRequest.sendAckData("no user");
        }
    }
});
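
If you want to keep the server in Kotlin, the same fix should translate to an object expression instead of a lambda (a sketch, not from the original answer; EventNames and the user lookup are the asker's own, and DataListener, SocketIOClient, and AckRequest come from com.corundumstudio.socketio):

server!!.addEventListener(EventNames.requestClientSignOn, String::class.java,
    object : DataListener<String> {
        override fun onData(client: SocketIOClient, username: String, ackRequest: AckRequest) {
            // look up the user here and ack back the encrypted password, or:
            ackRequest.sendAckData("no user")
        }
    })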

Related

How to receive infinite chunked data in okhttp

I have an HTTP server. When a client sends an HTTP request, the server side holds the connection open and sends chunked strings to the client indefinitely. I know WebSockets would be a better fit today, but it is an old project and I can't change the server-side code.
// server.kt
package com.example.long_http

import io.vertx.core.AbstractVerticle
import io.vertx.core.Promise
import io.vertx.core.Vertx

class MainVerticle : AbstractVerticle() {
    override fun start(startPromise: Promise<Void>) {
        vertx
            .createHttpServer()
            .requestHandler { req ->
                req.response().setChunked(true).putHeader("Content-Type", "text/plain")
                val timer = vertx.setPeriodic(2000) {
                    req.response().write("hello ${System.currentTimeMillis()}")
                    println("write ${System.currentTimeMillis()}")
                }
                req.response().closeHandler {
                    vertx.cancelTimer(timer)
                    println("close")
                }
            }
            .listen(8888) { http ->
                if (http.succeeded()) {
                    startPromise.complete()
                    println("HTTP server started on port 8888")
                } else {
                    startPromise.fail(http.cause())
                }
            }
    }
}

fun main() {
    Vertx.vertx().deployVerticle(MainVerticle())
}
I tried to receive the chunked strings using OkHttp, but it doesn't work.
// client.kt
package com.example.long_http

import okhttp3.*
import java.io.IOException

fun main() {
    val client = OkHttpClient()
    val request = Request.Builder().url("http://localhost:8888").build()
    client.newCall(request).enqueue(handler())
}

class handler : Callback {
    override fun onFailure(call: Call, e: IOException) {
        e.printStackTrace()
    }

    override fun onResponse(call: Call, response: Response) {
        println("onResponse")
        val stream = response.body!!.byteStream().bufferedReader()
        while (true) {
            val line = stream.readLine()
            println(line)
        }
    }
}
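
Two things likely break this client (my reading, not from the original post): the server never writes a newline, so readLine() never returns a line, and if the stream ever goes quiet for more than 10 seconds, OkHttp's default read timeout aborts the call. A sketch that disables the read timeout and prints raw chunks as they arrive:

import okhttp3.OkHttpClient
import okhttp3.Request
import okio.Buffer
import java.util.concurrent.TimeUnit

fun main() {
    // a read timeout of 0 disables the timeout, for an endless stream
    val client = OkHttpClient.Builder()
        .readTimeout(0, TimeUnit.MILLISECONDS)
        .build()
    val request = Request.Builder().url("http://localhost:8888").build()
    client.newCall(request).execute().use { response ->
        val source = response.body!!.source()
        val chunk = Buffer()
        // exhausted() blocks until at least one byte arrives or the stream ends,
        // then we drain whatever is buffered and print it
        while (!source.exhausted()) {
            source.read(chunk, 8192)
            print(chunk.readUtf8())
        }
    }
}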

Okio Throttler integration with OkHttp

My team is suffering from this issue with the Slack integration for uploading files, so following the comments in that issue I would like to throttle the requests in our Kotlin implementation.
I am trying to integrate Okio Throttler within an OkHttp interceptor, so I have the setup:
val client = OkHttpClient.Builder()
    .retryOnConnectionFailure(false)
    .addInterceptor { chain ->
        val request = chain.request()
        val originalRequestBody = request.body
        val newRequest = if (originalRequestBody != null) {
            val wrappedRequestBody = ThrottledRequestBody(originalRequestBody)
            request.newBuilder()
                .method(request.method, wrappedRequestBody)
                .build()
        } else {
            request
        }
        chain.proceed(newRequest)
    }
    .build()

class ThrottledRequestBody(private val delegate: RequestBody) : RequestBody() {
    private val throttler = Throttler().apply {
        bytesPerSecond(1024, 1024 * 4, 1024 * 8)
    }

    override fun contentType(): MediaType? {
        return delegate.contentType()
    }

    override fun writeTo(sink: BufferedSink) {
        delegate.writeTo(throttler.sink(sink).buffer())
    }
}
It seems throttler.sink returns a Sink, but delegate.writeTo requires a BufferedSink, so I called buffer() to get one.
Am I doing it wrong? Is the call to .buffer() breaking the integration?
It's almost perfect. You just need to flush the buffer when you're done, otherwise it'll finish with a few bytes left inside.
override fun writeTo(sink: BufferedSink) {
    throttler.sink(sink).buffer().use {
        delegate.writeTo(it)
    }
}
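
One note on that use {} block (my addition, not part of the original answer): close() flushes the buffer but also closes OkHttp's underlying sink, which OkHttp would close itself anyway. If you'd rather only flush, this variant should behave the same:

override fun writeTo(sink: BufferedSink) {
    val throttled = throttler.sink(sink).buffer()
    delegate.writeTo(throttled)
    throttled.flush() // push the trailing bytes through without closing `sink`
}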

Spring reactive WebSocketHandler mono does not reach doFinally block

I am trying to handle closing web socket session in WebSocketHandler. My intuition was to do it in this way:
webSocketClient.execute(
    URI.create("some-ws-endpoint")
) { session: WebSocketSession ->
    session.receive()
        .doOnEach { action(it) }
        .then()
        .doFinally { session.close() }
}
but I cannot reach the doFinally block of the Mono<Void> returned by webSocketClient.execute. My full test code for this case is:
fun test() = runBlocking {
    val webSocketClient: WebSocketClient = StandardWebSocketClient()
    val subscription = webSocketClient.execute(
        URI.create("some-ws-endpoint")
    ) { session: WebSocketSession ->
        session.receive()
            .doOnEach { println("Message: $it") }
            .then()
            .doFinally { println("finally") }
    }.subscribe()
    delay(20000)
    subscription.dispose()
    delay(5000)
}
from which I get the Message lines printed, but "finally" never shows up in my console. On the other hand, when I tried the same with plain reactor-core components, everything works just fine:
runBlocking {
    val publisher: Flux<Long> = Flux.interval(Duration.ofSeconds(1))
    val subscription = publisher
        .doOnEach { println("Value: $it") }
        .then()
        .doFinally { println("in doFinally") }
        .subscribe()
    delay(5_000)
    subscription.dispose()
    delay(1_000)
}
I am new to both WebSockets and Project Reactor, so maybe I am doing some basic mistake. Does anyone see what is wrong with my code?
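
One detail that may explain the difference (my reading, not an authoritative answer): doFinally only fires on complete, error, or cancel of the chain it is attached to. dispose() cancels the outer Mono<Void> returned by execute(), but the WebSocket session implementation is not obliged to propagate that cancellation into the handler's inner chain. Moving doFinally onto the outer Mono makes the termination observable:

val subscription = webSocketClient
    .execute(URI.create("some-ws-endpoint")) { session: WebSocketSession ->
        session.receive()
            .doOnEach { println("Message: $it") }
            .then()
    }
    .doFinally { signal -> println("outer finally: $signal") } // fires with CANCEL on dispose()
    .subscribe()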

Handle Subscription in vertx GraphQL

I tried to use the Vert.x HttpClient/WebClient to consume a GraphQL subscription, but it did not work as expected.
The related server-side code (written with Vert.x Web GraphQL) looks like the following: when a comment is added, onNext is triggered to send the comment to the Publisher.
public VertxDataFetcher<UUID> addComment() {
    return VertxDataFetcher.create((DataFetchingEnvironment dfe) -> {
        var commentInputArg = dfe.getArgument("commentInput");
        var jacksonMapper = DatabindCodec.mapper();
        var input = jacksonMapper.convertValue(commentInputArg, CommentInput.class);
        return this.posts.addComment(input)
            .onSuccess(id -> this.posts.getCommentById(id.toString())
                .onSuccess(c -> subject.onNext(c)));
    });
}

private BehaviorSubject<Comment> subject = BehaviorSubject.create();

public DataFetcher<Publisher<Comment>> commentAdded() {
    return (DataFetchingEnvironment dfe) -> {
        ConnectableObservable<Comment> connectableObservable = subject.share().publish();
        connectableObservable.connect();
        return connectableObservable.toFlowable(BackpressureStrategy.BUFFER);
    };
}
In the client I mixed HttpClient and WebClient. Most of the time I would like to use WebClient, which is easier for handling form posts, but it does not seem to support opening a WebSocket connection.
So the WebSocket part falls back to HttpClient.
var options = new HttpClientOptions()
    .setDefaultHost("localhost")
    .setDefaultPort(8080);
var httpClient = vertx.createHttpClient(options);

httpClient.webSocket("/graphql")
    .onSuccess(ws -> {
        ws.textMessageHandler(text -> log.info("web socket message handler:{}", text));

        JsonObject messageInit = new JsonObject()
            .put("type", "connection_init")
            .put("id", "1");
        JsonObject message = new JsonObject()
            .put("payload", new JsonObject()
                .put("query", "subscription onCommentAdded { commentAdded { id content } }"))
            .put("type", "start")
            .put("id", "1");

        ws.write(messageInit.toBuffer());
        ws.write(message.toBuffer());
    })
    .onFailure(e -> log.error("error: {}", e));
// this client here is WebClient.
client.post("/graphql")
    .sendJson(Map.of(
        "query", "mutation addComment($input:CommentInput!){ addComment(commentInput:$input) }",
        "variables", Map.of(
            "input", Map.of(
                "postId", id,
                "content", "comment content of post id" + LocalDateTime.now()
            )
        )
    ))
    .onSuccess(data -> log.info("data of addComment: {}", data.bodyAsString()))
    .onFailure(e -> log.error("error: {}", e));
When running the client and the server, the comment is added, but the WebSocket client does not print any info about a WebSocket message. On the server console there is a message like this:
2021-06-25 18:45:44,356 DEBUG [vert.x-eventloop-thread-1] graphql.GraphQL: Execution '182965bb-80de-416d-b5fe-fe157ab87f1c' completed with zero errors
It seems the backend commentAdded data fetcher is not invoked at all.
The complete code of the GraphQL client and server is shared on my GitHub.
After reading some test code of Vert.x Web GraphQL, I found I have to add a connectionInitHandler on the ApolloWSHandler, like this:
.connectionInitHandler(connectionInitEvent -> {
    JsonObject payload = connectionInitEvent.message().content().getJsonObject("payload");
    if (payload != null && payload.containsKey("rejectMessage")) {
        connectionInitEvent.fail(payload.getString("rejectMessage"));
        return;
    }
    connectionInitEvent.complete(payload);
})
When the client sends the connection_init message, connectionInitEvent.complete must be called to start the communication between the client and the server.
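
For context, a sketch (in Kotlin, not from the original post) of where that handler is attached, assuming vertx-web-graphql's ApolloWSHandler and an existing Router:

router.route("/graphql").handler(
    ApolloWSHandler.create(graphQL)
        .connectionInitHandler { event ->
            // completing (or failing) the event is mandatory; otherwise the
            // "start" message that follows connection_init is never processed
            event.complete(event.message().content().getJsonObject("payload"))
        }
)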

In Spring Webflux how to go from an `OutputStream` to a `Flux<DataBuffer>`?

I'm building a tarball dynamically, and would like to stream it back directly, which should be 100% possible with a .tar.gz.
The code below is the closest I could get to a DataBuffer, through lots of googling. Basically, I need something that implements an OutputStream and provides, or publishes, to a Flux<DataBuffer>, so that I can return that from my method and get streaming output instead of buffering the entire tarball in RAM (which I'm pretty sure is what is happening here). I'm using Apache Commons Compress, which has a wonderful API, but it's all OutputStream based.
I suppose another way to do it would be to write directly to the response, but I don't think that would be properly reactive? Not sure how to get an OutputStream out of some sort of response object, either.
This is Kotlin, btw, on Spring Boot 2.0.
@GetMapping("/cookbook.tar.gz", "/cookbook")
fun getCookbook(): Mono<DefaultDataBuffer> {
    log.info("Creating tarball of cookbooks: ${soloConfig.cookbookPaths}")
    val transformation = Mono.just(soloConfig.cookbookPaths.stream()
        .toList()
        .flatMap {
            Files.walk(Paths.get(it)).map(Path::toFile).toList()
        })
        .map { files ->
            // Will make one giant databuffer... but oh well? TODO: maybe use some kind of chunking.
            val buffer = DefaultDataBufferFactory().allocateBuffer()
            val outputBufferStream = buffer.asOutputStream()
            // Transform my list of stuff into an archiveOutputStream
            TarArchiveOutputStream(GzipCompressorOutputStream(outputBufferStream)).use { taos ->
                taos.setLongFileMode(TarArchiveOutputStream.LONGFILE_GNU)
                log.info("files to compress: ${files}")
                for (file in files) {
                    if (file.isFile) {
                        val entry = "cookbooks/" + file.name
                        log.info("Adding ${entry} to tarball")
                        taos.putArchiveEntry(TarArchiveEntry(file, entry))
                        FileInputStream(file).use { fis ->
                            fis.copyTo(taos) // Copy that stuff!
                        }
                        taos.closeArchiveEntry()
                    }
                }
            }
            buffer
        }
    return transformation
}
I puzzled through this and have an effective solution: implement an OutputStream, take those bytes, and publish them into a stream. Be sure to override close and send an onComplete. Works great!
@RestController
class SoloController(
    val soloConfig: SoloConfig
) {
    val log = KotlinLogging.logger { }

    @GetMapping("/cookbooks.tar.gz", "/cookbooks")
    fun streamCookbook(serverHttpResponse: ServerHttpResponse): Flux<DataBuffer> {
        log.info("Creating tarball of cookbooks: ${soloConfig.cookbookPaths}")
        val publishingOutputStream = PublishingOutputStream(serverHttpResponse.bufferFactory())
        // Needs to set up cookbook path as a parent directory, and then do
        // `cookbooks/$cookbook_path/<all files>` for each cookbook path given
        Flux.just(soloConfig.cookbookPaths.stream().toList())
            .doOnNext { paths ->
                // Transform my list of stuff into an archiveOutputStream
                TarArchiveOutputStream(GzipCompressorOutputStream(publishingOutputStream)).use { taos ->
                    taos.setLongFileMode(TarArchiveOutputStream.LONGFILE_GNU)
                    paths.forEach { cookbookDir ->
                        if (Paths.get(cookbookDir).toFile().isDirectory) {
                            val cookbookDirFile = Paths.get(cookbookDir).toFile()
                            val directoryName = cookbookDirFile.name
                            val entryStart = "cookbooks/${directoryName}"
                            val files = Files.walk(cookbookDirFile.toPath()).map(Path::toFile).toList()
                            log.info("${files.size} files to compress")
                            for (file in files) {
                                if (file.isFile) {
                                    val relativePath = file.toRelativeString(cookbookDirFile)
                                    val entry = "$entryStart/$relativePath"
                                    taos.putArchiveEntry(TarArchiveEntry(file, entry))
                                    FileInputStream(file).use { fis ->
                                        fis.copyTo(taos) // Copy that stuff!
                                    }
                                    taos.closeArchiveEntry()
                                }
                            }
                        }
                    }
                }
            }
            .subscribeOn(Schedulers.parallel())
            .doOnComplete {
                publishingOutputStream.close()
            }
            .subscribe()
        return publishingOutputStream.publisher
    }

    class PublishingOutputStream(bufferFactory: DataBufferFactory) : OutputStream() {
        val publisher: UnicastProcessor<DataBuffer> = UnicastProcessor.create(Queues.unbounded<DataBuffer>().get())
        private val bufferPublisher: UnicastProcessor<Byte> = UnicastProcessor.create(Queues.unbounded<Byte>().get())

        init {
            bufferPublisher
                .bufferTimeout(4096, Duration.ofMillis(100))
                .doOnNext { byteList ->
                    val buffer = bufferFactory.allocateBuffer(byteList.size)
                    buffer.write(byteList.toByteArray())
                    publisher.onNext(buffer)
                }
                .doOnComplete {
                    publisher.onComplete()
                }
                .subscribeOn(Schedulers.newSingle("publisherThread"))
                .subscribe()
        }

        override fun write(b: Int) {
            bufferPublisher.onNext(b.toByte())
        }

        override fun close() {
            bufferPublisher.onComplete() // should trigger the cleanup of the whole thing
        }
    }
}
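
A caveat worth adding (my note, not part of the original answer): UnicastProcessor is deprecated since Reactor 3.4. The same byte-to-DataBuffer bridge can be rebuilt on the Sinks API, for example:

import org.springframework.core.io.buffer.DataBuffer
import org.springframework.core.io.buffer.DataBufferFactory
import reactor.core.publisher.Flux
import reactor.core.publisher.Sinks
import reactor.core.scheduler.Schedulers
import java.io.OutputStream
import java.time.Duration

// Sinks-based equivalent of PublishingOutputStream (a sketch; names are mine)
class SinksPublishingOutputStream(bufferFactory: DataBufferFactory) : OutputStream() {
    private val bytes: Sinks.Many<Byte> = Sinks.many().unicast().onBackpressureBuffer()
    private val buffers: Sinks.Many<DataBuffer> = Sinks.many().unicast().onBackpressureBuffer()
    val publisher: Flux<DataBuffer> = buffers.asFlux()

    init {
        bytes.asFlux()
            .bufferTimeout(4096, Duration.ofMillis(100))
            .doOnNext { byteList ->
                val buffer = bufferFactory.allocateBuffer(byteList.size)
                buffer.write(byteList.toByteArray())
                buffers.tryEmitNext(buffer)
            }
            .doOnComplete { buffers.tryEmitComplete() }
            .subscribeOn(Schedulers.newSingle("publisherThread"))
            .subscribe()
    }

    override fun write(b: Int) {
        bytes.tryEmitNext(b.toByte())
    }

    override fun close() {
        bytes.tryEmitComplete()
    }
}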
