I am working with Vert.x 2.x (http://vertx.io), which makes extensive use of asynchronous callbacks. These quickly become unwieldy, with the typical nesting/callback-hell issues.
I have considered both Scala Futures/Promises (which I think would be the de facto approach) and also Reactive Extensions (RxScala).
From my testing I have found some interesting performance results.
My testing is pretty basic: I'm just issuing a bunch of HTTP requests (via weighttp) to a Vert.x verticle that makes an asynchronous call across the Vert.x event bus and processes a response that is then returned in an HTTP 200 response.
What I have found is the following (performance here is measured in terms of HTTP requests per second):
Async Callback performance = 68,305 rps
Rx performance = 64,656 rps
Future/Promises performance = 61,376 rps
The test conditions were:
Mac Pro OS X Yosemite 10.10.2
Oracle JVM 1.8U25
weighttp version 0.3
Vert.x 2.1.5
Scala 2.10.4
RxScala 0.23.0
4 x Web Service Verticle Instances
4 x Backend Service Verticle Instances
The test command was:
weighttp -n 1000000 -c 128 -t 8 -k "localhost:8888"
The figures above are the average of five test runs, excluding the best and worst results. Note that the results are very consistent around the presented average (no more than a few hundred rps of deviation).
Is there any known reason why the above might be happening - i.e. Rx > Futures in pure requests per second?
Reactive Extensions are, in my opinion, superior, as they can do so much more; but given that the standard approach to async callbacks typically seems to go down the Futures/Promises track, I'm surprised at the performance hit.
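One hypothesis (unconfirmed) for the gap: completing a Scala Promise dispatches its registered callbacks through an ExecutionContext - at least one extra scheduling hop per request - whereas an RxScala ReplaySubject calls its observers synchronously on the thread that invokes onNext. A minimal sketch illustrating the difference in isolation (the single-threaded executor is only there to make the hop visible):

import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Promise}
import rx.lang.scala.subjects.ReplaySubject

object SchedulingHop extends App {
  implicit val ec: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newSingleThreadExecutor())

  // Rx: onNext runs the observer synchronously on the completing thread.
  val subject = ReplaySubject[String]()
  subject.subscribe(s => println(s"Rx observer ran on ${Thread.currentThread.getName}"))
  subject.onNext("hello") // prints "main"
  subject.onCompleted()

  // Futures: the callback is submitted to the ExecutionContext,
  // i.e. an extra scheduling hop off the completing thread.
  val promise = Promise[String]()
  promise.future.foreach(s => println(s"Future callback ran on ${Thread.currentThread.getName}"))
  promise.success("hello") // prints the executor's pool thread name
}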
EDIT: Here is the Web Service Verticle
class WebVerticle extends Verticle {
  override def start() {
    val port = container.env().getOrElse("HTTP_PORT", "8888").toInt
    val approach = container.env().getOrElse("APPROACH", "ASYNC")
    container.logger.info("Listening on port: " + port)
    container.logger.info("Using approach: " + approach)
    vertx.createHttpServer.requestHandler { req: HttpServerRequest =>
      approach match {
        case "ASYNC" => sendAsync(req, "hello")
        case "FUTURES" => sendWithFuture("hello").onSuccess { case body => req.response.end(body) }
        case "RX" => sendWithObservable("hello").doOnNext(req.response.end(_)).subscribe()
      }
    }.listen(port)
  }

  // Async callback
  def sendAsync(req: HttpServerRequest, body: String): Unit = {
    vertx.eventBus.send("service.verticle", body, { msg: Message[String] =>
      req.response.end(msg.body())
    })
  }

  // Rx
  def sendWithObservable(body: String): Observable[String] = {
    val subject = ReplaySubject[String]()
    vertx.eventBus.send("service.verticle", body, { msg: Message[String] =>
      subject.onNext(msg.body())
      subject.onCompleted()
    })
    subject
  }

  // Futures
  def sendWithFuture(body: String): Future[String] = {
    val promise = Promise[String]()
    vertx.eventBus.send("service.verticle", body, { msg: Message[String] =>
      promise.success(msg.body())
    })
    promise.future
  }
}
EDIT: Here is the Backend Verticle
class ServiceVerticle extends Verticle {
  override def start(): Unit = {
    vertx.eventBus.registerHandler("service.verticle", { msg: Message[String] =>
      msg.reply("Hello Scala")
    })
  }
}
I'm trying to do a performance test on a SPA with a frontend in React, deployed with Netlify.
As a backend we're using Hasura Cloud GraphQL (std version), https://hasura.io/, where everything from the client goes directly through Hasura to the DB.
The DB is Postgres, housed in Heroku (Std 0 tier).
We're hoping to be able to handle around 800 simultaneous users.
The problem is that I'm lost about how to do it, or whether I'm doing it correctly, seeing as most of our stuff is "subscriptions/mutations" that I had to transform into queries. I tried doing those tests with k6 and JMeter but I'm not sure if I'm doing them properly.
k6 test
At first, I did a quick search and collected around 10 subscriptions that are commonly used. Then I tried to create a performance test with k6 (https://k6.io/docs/using-k6/http-requests/), but I wasn't able to create a working subscription test, so I just transformed each subscription into a query and performed an http.post with this setup:
export const options = {
  stages: [
    { duration: '30s', target: 75 },
    { duration: '120s', target: 75 },
    { duration: '60s', target: 50 },
    { duration: '30s', target: 30 },
    { duration: '10s', target: 0 }
  ]
};

export default function () {
  var res = http.post(prod,
    JSON.stringify({
      query: listaQueries.GetDesafiosCursosByKey(
        keys.desafioCursoKey
      )}), params);
  sleep(1)
}
I did this for every query and ran each test individually. Unfortunately, the numbers I got were bad, and somehow our test environment was getting better times than production. (The only difference, AFAIK, is that we're using Hasura Cloud for production.)
I tried to implement websockets, but I couldn't get them to work and configure them to do a stress/load test.
K6 result
Jmeter test
After that, I tried something similar with JMeter, but again I couldn't figure out how to set up a subscription test (after a while, I read in a blog that JMeter doesn't support it: https://qainsights.com/deep-dive-into-graphql-in-jmeter/), so I simply transformed all subscriptions into queries and tried to do the same, but the numbers I was getting were different and much higher than k6's.
Jmeter query Config 1
Jmeter query config 2
Jmeter thread config
Questions
I'm not sure if I'm doing it correctly, or whether transforming every subscription into a query and performing an HTTP request is a correct approach. (At least I know that those queries return the data correctly.)
Should I just increase the number of VUs/threads until I get a constant timeout, to simulate a stress test? There were some tests that caused a GraphQL error on the website, and others produced
WARN[0059] Request Failed error="Post \"https://xxxxxxx-xxxxx.herokuapp.com/v1/graphql\": EOF"
in the k6 console.
Or should I just give up on k6/JMeter and search for another tool to perform those tests?
Thank you in advance, and sorry for my English and explanation, but I'm a complete newbie at this.
I'm not sure if I'm doing it correctly, or whether transforming every subscription into a query and performing an HTTP request is a correct approach. (At least I know that those queries return the data correctly.)
Ideally you would be using WebSocket, as that is what actual clients will most likely be using.
For code samples, check out the answer here.
Here's a more complete example utilizing a main.js entry script with modularized subscription code in subscriptions/bikes-brands.js. It also uses the Httpx library to set a global request header:
// main.js
import { Httpx } from 'https://jslib.k6.io/httpx/0.0.5/index.js';
import { getBikeBrandsByIdSub } from './subscriptions/bikes-brands.js';

const session = new Httpx({
  baseURL: `http://54.227.75.222:8080`
});

const wsUri = 'wss://54.227.75.222:8080/v1/graphql';
const pauseMin = 2;
const pauseMax = 6;

export const options = {};

export default function () {
  session.addHeader('Content-Type', 'application/json');
  getBikeBrandsByIdSub(1);
}
// subscriptions/bikes-brands.js
import ws from 'k6/ws';

// wsUri defined here so this module is self-contained (it was only in main.js above)
const wsUri = 'wss://54.227.75.222:8080/v1/graphql';

/* using string concatenation */
export function getBikeBrandsByIdSub(id) {
  const query = `
    subscription getBikeBrandsByIdSub {
      bikes_brands(where: {id: {_eq: ${id}}}) {
        id
        brand
        notes
        updated_at
        created_at
      }
    }
  `;

  const subscribePayload = {
    id: "1",
    payload: {
      extensions: {},
      operationName: "query",
      query: query,
      variables: {},
    },
    type: "start",
  }

  const initPayload = {
    payload: {
      headers: {
        "content-type": "application/json",
      },
      lazy: true,
    },
    type: "connection_init",
  };

  console.debug(JSON.stringify(subscribePayload));

  // start a WS connection
  const res = ws.connect(wsUri, initPayload, function(socket) {
    socket.on('open', function() {
      console.debug('WS connection established!');
      // send the connection_init:
      socket.send(JSON.stringify(initPayload));
      // send the chat subscription:
      socket.send(JSON.stringify(subscribePayload));
    });

    socket.on('message', function(message) {
      let messageObj;
      try {
        messageObj = JSON.parse(message);
      }
      catch (err) {
        console.warn('Unable to parse WS message as JSON: ' + message);
        return; // skip the type check below when parsing failed
      }
      if (messageObj.type === 'data') {
        console.log(`${messageObj.type} message received by VU ${__VU}: ${Object.keys(messageObj.payload.data)[0]}`);
      }
      console.log(`WS message received by VU ${__VU}:\n` + message);
    });
  });
}
Should I just increase the number of VUs/threads until I get a constant timeout, to simulate a stress test?
Timeouts and errors that only happen under load are signals that you may be hitting a bottleneck somewhere. Do you only see the EOFs under load? They essentially mean the server is sending back incomplete responses or closing connections early, which shouldn't happen under normal circumstances.
My expectation is that your test should replicate real user activity as closely as possible. I doubt that real users will be sending requests to GraphQL directly, and a well-behaved load test must replicate real-life application usage as closely as possible.
So I believe you should move to the HTTP protocol level and mimic the network footprint of a real browser, instead of trying to come up with individual GraphQL queries.
With regards to the JMeter and k6 differences, it might be the case that k6 produces higher throughput on the same hardware when firing requests at maximum speed, as evidenced by the benchmark in the Open Source Load Testing Tools 2021 article. However, given that you're trying to simulate real users using real browsers to access your application, and that real users don't hammer the application non-stop but need some time to "think" between operations, you should be getting the same number of requests from both load testing tools. If JMeter doesn't give you the load you want to conduct, make sure to follow JMeter Best Practices and/or consider running it in distributed mode.
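To illustrate the think-time point, here is a hedged k6 sketch (the endpoint and query are placeholders, not from the question):

import http from 'k6/http';
import { sleep } from 'k6';

// 50 VUs iterating with randomized think time instead of hammering at full speed
export const options = { vus: 50, duration: '2m' };

export default function () {
  http.post(
    'https://example.com/v1/graphql', // placeholder URL
    JSON.stringify({ query: '{ __typename }' }), // placeholder query
    { headers: { 'Content-Type': 'application/json' } }
  );
  sleep(2 + Math.random() * 4); // 2-6 seconds of "think time" between iterations
}

With think time in place, throughput is governed by the scenario rather than by raw tool speed, which makes k6 and JMeter numbers comparable.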
I am struggling with proper coroutine usage for cache handling using the Spring Boot @Cacheable annotation with Ehcache, on two methods:
calling another service using WebClient:
suspend fun getDeviceOwner(correlationId: String, ownerId: String): DeviceOwner =
    webClient
        .get()
        .uri(uriProvider.provideUrl())
        .header(CORRELATION_ID, correlationId)
        .retrieve()
        .onStatus(HttpStatus::isError) { response ->
            Mono.error(
                ServiceCallException("Call failed with: ${response.statusCode()}")
            )
        }.awaitBodyOrNull()
        ?: throw ServiceCallException("Call failed with - response is null.")
calling the DB using R2DBC:
suspend fun findDeviceTokens(ownerId: UUID, deviceType: String) {
    // CoroutineCrudRepository.findTokens
}
What seems to work for me is calling them from:
suspend fun findTokens(data: Data): Collection<String> = coroutineScope {
    val ownership = async(Dispatchers.IO, CoroutineStart.LAZY) { service.getDeviceOwner(data.nonce, data.ownerId) }.await()
    val tokens = async(Dispatchers.IO, CoroutineStart.LAZY) { service.findDeviceTokens(ownership.ownerId, ownership.ownershipType) }
    tokens.await()
}
@Cacheable(value = ["ownerCache"], key = "#ownerId")
fun getDeviceOwner(correlationId: String, ownerId: String) = runBlocking(Dispatchers.IO) {
    // webClientCall
}

@Cacheable("deviceCache")
override fun findDeviceTokens(ownerId: UUID, deviceType: String) = runBlocking(Dispatchers.IO) {
    // CoroutineCrudRepository.findTokens
}
But from what I am reading, it's not good practice to use runBlocking.
https://kotlinlang.org/docs/coroutines-basics.html#your-first-coroutine
Would it block the main thread or the thread which was designated by the parent coroutine?
I also tried:
@Cacheable(value = ["ownerCache"], key = "#ownerId")
fun getDeviceOwnerAsync(correlationId: String, ownerId: String) = GlobalScope.async(Dispatchers.IO, CoroutineStart.LAZY) {
    // webClientCall
}

@Cacheable("deviceCache")
override fun findDeviceTokensAsync(ownerId: UUID, deviceType: String) = GlobalScope.async(Dispatchers.IO, CoroutineStart.LAZY) {
    // CoroutineCrudRepository.findTokens
}
Both are called from a suspend function, without any additional coroutineScope {} or async {}:
suspend fun findTokens(data: Data): Collection<String> =
    service.getDeviceOwnerAsync(data.nonce, data.ownerId).await()
        .let { service.findDeviceTokensAsync(it.ownerId, it.ownershipType).await() }
I am reading that using GlobalScope is not good practice either, due to the possibility of this coroutine running endlessly when something gets stuck or a response takes very long (in very simple words). Also, with the GlobalScope approach, when I tested negative scenarios and the external ms call resulted in a 404 (on purpose), the result was not stored in the cache (as I expected), but for a failing CoroutineCrudRepository.findTokens call (throwing an exception) the Deferred value was cached, which is not what I wanted. Storing failing execution results does not happen with runBlocking.
I also tried @Cacheable("deviceCache", unless = "#result.isCompleted == true && #result.isCancelled == true"), but it also doesn't seem to work as I would imagine.
Could you please advise on the best coroutine approach, with correct exception handling, for integrating with Spring Boot caching so that a value is stored in the cache only on a non-failing call?
Although the annotations from the Spring Cache abstraction are fancy, I also, unfortunately, haven't found any official solution for using them side by side with Kotlin coroutines.
Yet there is a library called spring-kotlin-coroutine that claims to solve this issue. Though I've never tried it, as it doesn't seem to be maintained any longer - the last commit was pushed in May 2019.
For the moment I've been using the CacheManager bean and managing the aforementioned manually. I found that a better solution than blocking threads.
Sample code with Redis as a cache provider:
Dependency in build.gradle.kts:
implementation("org.springframework.boot:spring-boot-starter-data-redis-reactive")
application.yml configuration:
spring:
  redis:
    host: redis
    port: 6379
    password: changeit
  cache:
    type: REDIS
    cache-names:
      - account-exists
    redis:
      time-to-live: 3m
Code:
@Service
class AccountService(
    private val accountServiceApiClient: AccountServiceApiClient,
    private val redisCacheManager: RedisCacheManager
) {
    suspend fun isAccountExisting(accountId: UUID): Boolean {
        if (getAccountExistsCache().get(accountId)?.get() as Boolean? == true) {
            return true
        }
        val account = accountServiceApiClient.getAccountById(accountId) // this call is reactive
        if (account != null) {
            getAccountExistsCache().put(account.id, true)
            return true
        }
        return false
    }

    private fun getAccountExistsCache() = redisCacheManager.getCache("account-exists") as RedisCache
}
In the Kotlin coroutines context, every suspend function has one additional parameter of type kotlin.coroutines.Continuation<T>, which is why org.springframework.cache.interceptor.SimpleKeyGenerator always generates a wrong key. Also, the CacheInterceptor does not know anything about suspend functions, so it stores a COROUTINE_SUSPENDED marker object instead of the actual value, without evaluating the suspended wrapper.
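To illustrate the key half of that problem, here is a hedged sketch of my own (not taken from the linked library) of a KeyGenerator that drops the trailing Continuation argument; note it does nothing about the COROUTINE_SUSPENDED value issue:

import java.lang.reflect.Method
import kotlin.coroutines.Continuation
import org.springframework.cache.interceptor.KeyGenerator
import org.springframework.cache.interceptor.SimpleKeyGenerator

class SuspendAwareKeyGenerator : KeyGenerator {
    override fun generate(target: Any, method: Method, vararg params: Any): Any {
        // Filter out the Continuation the compiler appends to suspend functions,
        // then delegate to the default key generation over the declared parameters.
        val declared = params.filterNot { it is Continuation<*> }.toTypedArray()
        return SimpleKeyGenerator.generateKey(*declared)
    }
}

Registered as a bean, it could then be referenced via @Cacheable(keyGenerator = "suspendAwareKeyGenerator").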
You can check the repository https://github.com/konrad-kaminski/spring-kotlin-coroutine; they added cache support for coroutines. The specific cache support implementation is here: https://github.com/konrad-kaminski/spring-kotlin-coroutine/blob/master/spring-kotlin-coroutine/src/main/kotlin/org/springframework/kotlin/coroutine/cache/CoroutineCacheConfiguration.kt.
Take a look at CoroutineCacheInterceptor and CoroutineAwareSimpleKeyGenerator.
Hope this fixes your issue.
I am having fun with using moleculer-runner instead of creating a ServiceBroker instance in a moleculer-web project I am working on. The Runner simplifies setting up services for moleculer-web, and all the services - including the api.service.js file - look and behave the same, using a module.exports = { blah } format.
I can cleanly define the REST endpoints in the api.service.js file, and create the connected functions in the appropriate service files. For example, aliases: { 'GET sensors': 'sensors.list' } points to the list() action/function in sensors.service.js. It all works great using some dummy data in an array.
The next step is to get the service(s) to open up a socket and talk to a local program listening on an internal set address/port. The idea is to accept a REST call from the web, talk to a local program over a socket to get some data, then format and return the data back via REST to the client.
But when I want to use sockets with moleculer, I'm having trouble finding useful info and examples on integrating moleculer-io with a moleculer-runner-based setup. All the examples I find use the ServiceBroker model. I thought my Google-fu was pretty good, but I'm at a loss as to where to look next. Or can I modify the ServiceBroker examples to work with moleculer-runner? Any insight or input is welcome.
If you want the following chain:
localhost:3000/sensor/list -> sensor.list() -> send message to local program:8071 -> get response -> send response as return message to the REST caller
then you need to add a socket.io client to your sensor service (which has the list() action). Adding a client will allow it to communicate with the "outside world" via sockets.
The setup is described below; I think it has everything that you need.
As a skeleton I've used moleculer-demo project.
What I have:
The API service api.service.js, which handles the HTTP requests and passes them to sensor.service.js.
The sensor.service.js, which is responsible for communicating with the remote socket.io server, so it needs a socket.io client. When the sensor.service.js service has started(), I establish a connection with the remote server located at port 8071. After this I can use the connection in my service actions to communicate with the socket.io server. This is exactly what I'm doing in the sensor.list action.
I've also created remote-server.service.js to mock your socket.io server. Despite being a moleculer service, sensor.service.js communicates with it via the socket.io protocol.
It doesn't matter whether or not your services use socket.io. All the services are declared in the same way, i.e., module.exports = {}. A sketch of the sensor service follows.
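Since the screenshot isn't reproduced here, below is a hedged sketch of what sensor.service.js might look like; the "call" event convention mirrors the moleculer-io mock, while the remote action name (remote.list) and port 8071 are assumptions based on the description above:

// sensor.service.js
const io = require("socket.io-client");

module.exports = {
  name: "sensor",
  actions: {
    // REST -> sensor.list -> ask the local program over the socket,
    // then return its reply to the original HTTP caller
    list() {
      return new Promise((resolve, reject) => {
        this.socket.emit("call", "remote.list", (err, data) => {
          if (err) reject(err);
          else resolve(data);
        });
      });
    }
  },
  started() {
    // open the socket.io connection once, when the service starts
    this.socket = io("http://localhost:8071");
  },
  stopped() {
    this.socket.close();
  }
};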
Below is a working example with socket.io.
const { ServiceBroker } = require("moleculer");
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");
const io = require("socket.io-client");

const IOService = {
  name: "api",
  // SocketIOService should be after moleculer-web
  // Load the HTTP API Gateway to be able to reach "greeter" action via:
  // http://localhost:3000/hello/greeter
  mixins: [ApiGateway, SocketIOService]
};

const HelloService = {
  name: "hello",
  actions: {
    greeter() {
      return "Hello Via Socket";
    }
  }
};

const broker = new ServiceBroker();

broker.createService(IOService);
broker.createService(HelloService);

broker.start().then(async () => {
  const socket = io("http://localhost:3000", {
    reconnectionDelay: 300,
    reconnectionDelayMax: 300
  });

  socket.on("connect", () => {
    console.log("Connection with the Gateway established");
  });

  socket.emit("call", "hello.greeter", (error, res) => {
    console.log(res);
  });
});
To make it work with moleculer-runner just copy the service declarations into my-service.service.js. So for example, your api.service.js could look like:
// api.service.js
module.exports = {
  name: "api",
  // SocketIOService should be after moleculer-web
  // Load the HTTP API Gateway to be able to reach "greeter" action via:
  // http://localhost:3000/hello/greeter
  mixins: [ApiGateway, SocketIOService]
}
and your greeter service:
// greeter.service.js
module.exports = {
  name: "hello",
  actions: {
    greeter() {
      return "Hello Via Socket";
    }
  }
}
And run npm run dev or moleculer-runner --repl --hot services
I'm struggling with connecting two sockets:
frontend (ROUTER) - which handles client requests and forwards them to the backend
backend (ROUTER) - which receives requests from the frontend and deals with them using a number of workers (which require some initialization, configuration, etc.).
The server code looks like this:
void server_task::run() {
    frontend.bind("tcp://*:5570");
    backend.bind("inproc://backend");

    zmq::pollitem_t items[] = {
        { frontend, 0, ZMQ_POLLIN, 0 },
        { backend, 0, ZMQ_POLLIN, 0 }
    };

    try {
        while (true) {
            zmq::poll(&items[0], 2, -1);
            if (items[0].revents & ZMQ_POLLIN) {
                frontend_h();
            }
            if (items[1].revents & ZMQ_POLLIN) {
                backend_h();
            }
        }
    }
    catch (std::exception& e) {
        LOG(info) << e.what();
    }
}
frontend_h and backend_h are handler classes, each having access to both sockets.
The question is:
Considering the synchronous execution of frontend_h() and backend_h(), how can I forward a request handled in frontend_h() to backend_h()?
I tried to simply re-send the message using the backend socket, like this:
void frontend_handler::handle_query(std::unique_ptr<zmq::message_t> identity, std::unique_ptr<zmq::message_t> request) {
    zmq::message_t req_msg, req_identity;
    req_msg.copy(request.get());
    req_identity.copy(identity.get());

    zmq::message_t header = create_header(request_type::REQ_QUERY);

    backend.send(header, ZMQ_SNDMORE);
    backend.send(req_msg);
}
But it gets stuck on zmq::poll in run() after the execution of handle_query().
Stuck on zmq::poll()?
Your code has instructed the .poll() method to block, exactly as the documentation states:
If the value of timeout is -1, zmq_poll() shall block indefinitely until a requested event has occurred...
How can I forward the request?
It seems pretty expensive to re-marshall each message (+1 for at least using the .copy() method and avoiding re-packing overheads) when your code is co-located and the first, receiving handler can directly request and invoke any appropriate method of the latter (without any Context()-processing efforts and overheads).
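A hedged sketch of that direct-invocation idea (handle_request() is a hypothetical method name, not from the question's code):

#include <memory>
#include <zmq.hpp>

struct backend_handler {
    void handle_request(std::unique_ptr<zmq::message_t> identity,
                        std::unique_ptr<zmq::message_t> request) {
        // ... process the request and reply via the frontend socket ...
    }
};

struct frontend_handler {
    backend_handler& backend_h; // injected at construction

    void handle_query(std::unique_ptr<zmq::message_t> identity,
                      std::unique_ptr<zmq::message_t> request) {
        // no copy, no re-marshalling, no extra poll iteration
        backend_h.handle_request(std::move(identity), std::move(request));
    }
};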
I want to be able to click a button on a website, have it represent a command, issue that command to my program via a websocket, have my program process that command (which will produce a side effect), and then return the results of that command to the website to be rendered.
The websocket would be responsible for updating state changes applied by different actors that are within the users view.
Example: Changing AI instructions via the website. This modifies some values, which would get reported back to the website. Other users might change other AI instructions, or the AI would react to current conditions changing position, requiring the client to update the screen.
I was thinking I could have an actor responsible for updating the client with changed information, and just have the receiving stream update the state with the changes?
Is this the right library to use? Is there a better method to achieve what I want?
You can use akka-streams and akka-http for this just fine. An example using an actor as a handler:
package test

import akka.actor.{Actor, ActorRef, ActorSystem, Props, Stash, Status}
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.ws.{Message, TextMessage}
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.{Flow, Sink, Source, SourceQueueWithComplete}
import akka.stream.{ActorMaterializer, OverflowStrategy, QueueOfferResult}
import akka.pattern.pipe

import scala.concurrent.{ExecutionContext, Future}
import scala.io.StdIn

object Test extends App {
  implicit val actorSystem = ActorSystem()
  implicit val materializer = ActorMaterializer()
  implicit def executionContext: ExecutionContext = actorSystem.dispatcher

  val routes =
    path("talk") {
      get {
        val handler = actorSystem.actorOf(Props[Handler])
        val flow = Flow.fromSinkAndSource(
          Flow[Message]
            .filter(_.isText)
            .mapAsync(4) {
              case TextMessage.Strict(text) => Future.successful(text)
              case TextMessage.Streamed(textStream) => textStream.runReduce(_ + _)
            }
            .to(Sink.actorRefWithAck[String](handler, Handler.Started, Handler.Ack, Handler.Completed)),
          Source.queue[String](16, OverflowStrategy.backpressure)
            .map(TextMessage.Strict)
            .mapMaterializedValue { queue =>
              handler ! Handler.OutputQueue(queue)
              queue
            }
        )
        handleWebSocketMessages(flow)
      }
    }

  val bindingFuture = Http().bindAndHandle(routes, "localhost", 8080)
  println("Started the server, press enter to shutdown")
  StdIn.readLine()
  bindingFuture
    .flatMap(_.unbind())
    .onComplete(_ => actorSystem.terminate())
}

object Handler {
  case object Started
  case object Completed
  case object Ack
  case class OutputQueue(queue: SourceQueueWithComplete[String])
}

class Handler extends Actor with Stash {
  import context.dispatcher

  override def receive: Receive = initialReceive

  def initialReceive: Receive = {
    case Handler.Started =>
      println("Client has connected, waiting for queue")
      context.become(waitQueue)
      sender() ! Handler.Ack
    case Handler.OutputQueue(queue) =>
      println("Queue received, waiting for client")
      context.become(waitClient(queue))
  }

  def waitQueue: Receive = {
    case Handler.OutputQueue(queue) =>
      println("Queue received, starting")
      context.become(running(queue))
      unstashAll()
    case _ =>
      stash()
  }

  def waitClient(queue: SourceQueueWithComplete[String]): Receive = {
    case Handler.Started =>
      println("Client has connected, starting")
      context.become(running(queue))
      sender() ! Handler.Ack
      unstashAll()
    case _ =>
      stash()
  }

  case class ResultWithSender(originalSender: ActorRef, result: QueueOfferResult)

  def running(queue: SourceQueueWithComplete[String]): Receive = {
    case s: String =>
      // do whatever you want here with the received message
      println(s"Received text: $s")
      val originalSender = sender()
      queue
        .offer("some response to the client")
        .map(ResultWithSender(originalSender, _))
        .pipeTo(self)
    case ResultWithSender(originalSender, result) =>
      result match {
        case QueueOfferResult.Enqueued => // okay
          originalSender ! Handler.Ack
        case QueueOfferResult.Dropped => // due to the OverflowStrategy.backpressure this should not happen
          println("Could not send the response to the client")
          originalSender ! Handler.Ack
        case QueueOfferResult.Failure(e) =>
          println(s"Could not send the response to the client: $e")
          context.stop(self)
        case QueueOfferResult.QueueClosed =>
          println("Outgoing connection to the client has closed")
          context.stop(self)
      }
    case Handler.Completed =>
      println("Client has disconnected")
      queue.complete()
      context.stop(self)
    case Status.Failure(e) =>
      println(s"Client connection has failed: $e")
      e.printStackTrace()
      queue.fail(new RuntimeException("Upstream has failed", e))
      context.stop(self)
  }
}
There are lots of places here that could be tweaked, but the basic idea remains the same. Alternatively, you could implement the Flow[Message, Message, _] required by the handleWebSocketMessages() method using a GraphStage. Everything used above is also described in detail in the akka-streams documentation.
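If no per-connection state is needed at all, the flow can also be built from plain combinators; a minimal sketch (an illustration, not part of the actor-based design above) of a stateless echo handler:

import akka.http.scaladsl.model.ws.{Message, TextMessage}
import akka.stream.scaladsl.Flow

// Echoes every strict text frame back; streamed frames are dropped by collect.
val echoFlow: Flow[Message, Message, Any] =
  Flow[Message].collect {
    case TextMessage.Strict(text) => TextMessage.Strict(s"echo: $text")
  }

// handleWebSocketMessages(echoFlow) can then replace the actor-backed flow above.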