Spring Data Redis - watch and multi on same cluster node

My objective is to apply transaction logic (WATCH + MULTI) to a Redis cluster. I can see here, here and in the spring-data-redis repo that transactions on clusters are not supported in Spring Data Redis. Nevertheless, considering that I need to do it on the same node, I have tried something like this:
val keySerialized = "myKey".toByteArray()
val valueSerialized = "myValue".toByteArray()

val node = redisTemplate.connectionFactory.clusterConnection.clusterGetNodeForKey(keySerialized)
val clusterExecutor =
    (redisTemplate.connectionFactory.clusterConnection as LettuceClusterConnection).clusterCommandExecutor

clusterExecutor.executeCommandOnSingleNode(
    ClusterCommandExecutor.ClusterCommandCallback { client: RedisCommands<ByteArray, ByteArray> ->
        client.watch(keySerialized)
    }, node)
clusterExecutor.executeCommandOnSingleNode(
    ClusterCommandExecutor.ClusterCommandCallback { client: RedisCommands<ByteArray, ByteArray> ->
        client.multi()
    }, node)
clusterExecutor.executeCommandOnSingleNode(
    ClusterCommandExecutor.ClusterCommandCallback { client: RedisCommands<ByteArray, ByteArray> ->
        client.get(keySerialized)
    }, node)
clusterExecutor.executeCommandOnSingleNode(
    ClusterCommandExecutor.ClusterCommandCallback { client: RedisCommands<ByteArray, ByteArray> ->
        client.set(keySerialized, valueSerialized)
    }, node)
clusterExecutor.executeCommandOnSingleNode(
    ClusterCommandExecutor.ClusterCommandCallback { client: RedisCommands<ByteArray, ByteArray> ->
        client.exec()
    }, node)
This works fine when I'm running just one instance of the code. Example: if I do a SET in the Redis console while debugging the code, the transaction fails as expected.
However, if I run this code in two threads, what happens is that both use the same connection. When the second thread runs the MULTI command, the following error is raised:
Caused by: io.lettuce.core.RedisCommandExecutionException: ERR MULTI calls can not be nested
I believe that forcing the executor to use a new connection could be a solution, but I don't know how to do it. Any thoughts on this?
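For reference, the direction I'm considering: open a dedicated (non-shared) Lettuce connection just for the WATCH/MULTI/EXEC sequence, so each thread owns its own transaction state. A minimal, untested sketch, assuming access to the underlying RedisClusterClient (the clusterClient val below is hypothetical):
import io.lettuce.core.cluster.RedisClusterClient
import io.lettuce.core.codec.ByteArrayCodec

// Dedicated connection: MULTI state lives on this connection only,
// so two threads doing this concurrently can no longer collide.
val dedicated = clusterClient.connect(ByteArrayCodec.INSTANCE)
try {
    // Pin all commands to the node that owns the key.
    val commands = dedicated.getConnection(node.host, node.port).sync()
    commands.watch(keySerialized)
    commands.multi()
    commands.set(keySerialized, valueSerialized)
    val result = commands.exec() // result.wasDiscarded() is true if the watched key changed
} finally {
    dedicated.close()
}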

Related

Kotlin JobCancellationException in Spring REST Client with async call

From time to time a Spring REST function fails with: "kotlinx.coroutines.JobCancellationException: MonoCoroutine was cancelled".
It is a suspend function which calls another service using the spring-webflux client. There are multiple suspend functions in my REST class. It looks like this problem occurs when multiple requests arrive at the same time. But maybe not :-)
The application runs on a Netty server.
Example:
@GetMapping("/customer/{id}")
suspend fun getCustomer(@PathVariable @NotBlank id: String): ResponseEntity<CustomerResponse> =
    withContext(MDCContext()) {
        ResponseEntity.status(HttpStatus.OK)
            .body(customerService.aggregateCustomer(id))
    }
Service call:
suspend fun executeServiceCall(vararg urlData: Input) = webClient
    .get()
    .uri(properties.url, *urlData)
    .retrieve()
    .bodyToMono(responseTypeRef)
    .retryWhen(
        Retry.fixedDelay(properties.retryCount, properties.retryBackoff)
            .onRetryExhaustedThrow { _, retrySignal ->
                handleRetryException(retrySignal)
            }
            .filter { it is ReadTimeoutException || it is ConnectTimeoutException }
    )
    .onErrorMap {
        // throw exception
    }
    .awaitFirstOrNull()
Part of Stack Trace:
Caused by: kotlinx.coroutines.JobCancellationException: MonoCoroutine was cancelled; job="coroutine#1":MonoCoroutine{Cancelling}#650774ce
at kotlinx.coroutines.JobSupport.cancel(JobSupport.kt:1578)
at kotlinx.coroutines.Job$DefaultImpls.cancel$default(Job.kt:183)
at kotlinx.coroutines.reactor.MonoCoroutine.dispose(Mono.kt:122)
at reactor.core.publisher.FluxCreate$SinkDisposable.dispose(FluxCreate.java:1033)
at reactor.core.publisher.MonoCreate$DefaultMonoSink.disposeResource(MonoCreate.java:313)
at reactor.core.publisher.MonoCreate$DefaultMonoSink.cancel(MonoCreate.java:300)

Spring Boot @Cacheable, ehcache with Kotlin coroutines - best practices

I am struggling with proper coroutine usage for cache handling using Spring Boot @Cacheable with ehcache on two methods:
calling another service using WebClient:
suspend fun getDeviceOwner(correlationId: String, ownerId: String): DeviceOwner =
    webClient
        .get()
        .uri(uriProvider.provideUrl())
        .header(CORRELATION_ID, correlationId)
        .retrieve()
        .onStatus(HttpStatus::isError) { response ->
            Mono.error(
                ServiceCallExcpetion("Call failed with: ${response.statusCode()}")
            )
        }
        .awaitBodyOrNull()
        ?: throw ServiceCallExcpetion("Call failed with - response is null.")
calling the db using r2dbc:
suspend fun findDeviceTokens(ownerId: UUID, deviceType: String) {
    //CoroutineCrudRepository.findTokens
}
What seems to be working for me is calling from:
suspend fun findTokens(data: Data): Collection<String> = coroutineScope {
    val ownership = async(Dispatchers.IO, CoroutineStart.LAZY) {
        service.getDeviceOwner(data.nonce, data.ownerId)
    }.await()
    val tokens = async(Dispatchers.IO, CoroutineStart.LAZY) {
        service.findDeviceTokens(ownership.ownerId, ownership.ownershipType)
    }
    tokens.await()
}
@Cacheable(value = ["ownerCache"], key = "#ownerId")
fun getDeviceOwner(correlationId: String, ownerId: String) = runBlocking(Dispatchers.IO) {
    //webClientCall
}

@Cacheable("deviceCache")
override fun findDeviceTokens(ownerId: UUID, deviceType: String) = runBlocking(Dispatchers.IO) {
    //CoroutineCrudRepository.findTokens
}
But from what I am reading, it's not good practice to use runBlocking.
https://kotlinlang.org/docs/coroutines-basics.html#your-first-coroutine
Would it block the main thread or the thread which was designated by the parent coroutine?
I also tried with
@Cacheable(value = ["ownerCache"], key = "#ownerId")
fun getDeviceOwnerAsync(correlationId: String, ownerId: String) = GlobalScope.async(Dispatchers.IO, CoroutineStart.LAZY) {
    //webClientCall
}

@Cacheable("deviceCache")
override fun findDeviceTokensAsync(ownerId: UUID, deviceType: String) = GlobalScope.async(Dispatchers.IO, CoroutineStart.LAZY) {
    //CoroutineCrudRepository.findTokens
}
Both are called from a suspend function without any additional coroutineScope {} or async {}:
suspend fun findTokens(data: Data): Collection<String> =
    service.getDeviceOwnerAsync(data.nonce, data.ownerId).await()
        .let { service.findDeviceTokensAsync(it.ownerId, it.ownershipType).await() }
I am reading that using GlobalScope is not good practice either, due to the possibility of this coroutine running endlessly when something gets stuck or a response takes very long (in very simple words). Also in this approach, when I tested negative scenarios and the external ms call resulted in a 404 (on purpose), the result was not stored in the cache (as I expected), but for a failing CoroutineCrudRepository.findTokens call (throwing an exception) the Deferred value was cached, which is not what I wanted. Storing failing execution results is not a thing with runBlocking.
I also tried @Cacheable("deviceCache", unless = "#result.isCompleted == true && #result.isCancelled == true"),
but it also seems not to work as I would imagine.
Could you please advise on the best coroutine approach, with correct exception handling, for integrating with Spring Boot caching that stores a value in the cache only on a non-failing call?
Although the annotations from the Spring Cache abstraction are fancy, I also, unfortunately, haven't found any official solution for using them side by side with Kotlin coroutines.
Yet there is a library called spring-kotlin-coroutine that claims to solve this issue. Though I've never tried it, as it doesn't seem to be maintained any longer - the last commit was pushed in May 2019.
For the moment I've been using the CacheManager bean and managing the aforementioned manually. I found that a better solution than blocking threads.
Sample code with Redis as a cache provider:
Dependency in build.gradle.kts:
implementation("org.springframework.boot:spring-boot-starter-data-redis-reactive")
application.yml configuration:
spring:
  redis:
    host: redis
    port: 6379
    password: changeit
  cache:
    type: REDIS
    cache-names:
      - account-exists
    redis:
      time-to-live: 3m
Code:
@Service
class AccountService(
    private val accountServiceApiClient: AccountServiceApiClient,
    private val redisCacheManager: RedisCacheManager
) {
    suspend fun isAccountExisting(accountId: UUID): Boolean {
        if (getAccountExistsCache().get(accountId)?.get() as Boolean? == true) {
            return true
        }
        val account = accountServiceApiClient.getAccountById(accountId) // this call is reactive
        if (account != null) {
            getAccountExistsCache().put(account.id, true)
            return true
        }
        return false
    }

    private fun getAccountExistsCache() = redisCacheManager.getCache("account-exists") as RedisCache
}
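The same lookup-then-populate logic can be pulled into a small reusable helper. A minimal sketch (this getOrLoad extension is hypothetical, not something Spring provides) that caches a value only when the loader returns successfully:
import org.springframework.cache.Cache

// Suspend-friendly get-or-load on top of Spring's Cache API.
// If the loader throws, the exception propagates and nothing is cached.
suspend fun <T : Any> Cache.getOrLoad(key: Any, loader: suspend () -> T?): T? {
    @Suppress("UNCHECKED_CAST")
    val cached = get(key)?.get() as T?
    if (cached != null) return cached          // cache hit
    val value = loader()                       // suspending call; may throw
    if (value != null) put(key, value)         // store only non-null successes
    return value
}
isAccountExisting could then delegate to getAccountExistsCache().getOrLoad(accountId) { ... } with the reactive call inside the loader.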
In the Kotlin coroutines context, every suspend function has one additional parameter of type kotlin.coroutines.Continuation<T>, which is why org.springframework.cache.interceptor.SimpleKeyGenerator always generates a wrong key. Also, the CacheInterceptor does not know anything about suspend functions, so it stores a COROUTINE_SUSPENDED marker instead of the actual value, without evaluating the suspended wrapper.
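Roughly, with simplified signatures, this is what the compiler generates and what the cache infrastructure actually sees:
// What you write:
suspend fun getDeviceOwner(correlationId: String, ownerId: String): DeviceOwner

// Roughly what the compiler generates (simplified):
fun getDeviceOwner(
    correlationId: String,
    ownerId: String,
    continuation: Continuation<DeviceOwner> // synthetic extra parameter -> ends up in the generated cache key
): Any? // may be COROUTINE_SUSPENDED instead of a DeviceOwner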
You can check the repository https://github.com/konrad-kaminski/spring-kotlin-coroutine; they added cache support for coroutines. The specific cache support implementation is here -> https://github.com/konrad-kaminski/spring-kotlin-coroutine/blob/master/spring-kotlin-coroutine/src/main/kotlin/org/springframework/kotlin/coroutine/cache/CoroutineCacheConfiguration.kt.
Take a look at CoroutineCacheInterceptor and CoroutineAwareSimpleKeyGenerator.
Hope this fixes your issue.

Examples of integrating moleculer-io with moleculer-web using moleculer-runner instead of ServiceBroker?

I am having fun with using moleculer-runner instead of creating a ServiceBroker instance in a moleculer-web project I am working on. The Runner simplifies setting up services for moleculer-web, and all the services - including the api.service.js file - look and behave the same, using a module.exports = { blah } format.
I can cleanly define the REST endpoints in the api.service.js file, and create the connected functions in the appropriate service files. For example, aliases: { 'GET sensors': 'sensors.list' } points to the list() action/function in sensors.service.js. It all works great using some dummy data in an array.
The next step is to get the service(s) to open up a socket and talk to a local program listening on an internal set address/port. The idea is to accept a REST call from the web, talk to a local program over a socket to get some data, then format and return the data back via REST to the client.
BUT when I want to use sockets with moleculer, I'm having trouble finding useful info and examples on integrating moleculer-io with a moleculer-runner-based setup. All the examples I find use the ServiceBroker model. I thought my Google-Fu was pretty good, but I'm at a loss as to where to look next. Or can I modify the ServiceBroker examples to work with moleculer-runner? Any insight or input is welcome.
If you want the following chain:
localhost:3000/sensor/list -> sensor.list() -> send message to local program:8071 -> get response -> send response as return message to the REST caller.
Then you need to add a socket.io client to your sensor service (which has the list() action). Adding a client will allow it to communicate with the "outside world" via sockets.
As a skeleton I've used moleculer-demo project.
What I have:
API service api.service.js, which handles the HTTP requests and passes them to sensor.service.js.
The sensor.service.js will be responsible for communicating with the remote socket.io server, so it needs to have a socket.io client. Now, when the sensor.service.js service has started(), I'm establishing a connection with the remote server located at port 8071. After this I can use this connection in my service actions to communicate with the socket.io server. This is exactly what I'm doing in the sensor.list action.
I've also created remote-server.service.js to mock your socket.io server. Despite being a moleculer service, sensor.service.js communicates with it via the socket.io protocol.
It doesn't matter whether or not your services use socket.io. All the services are declared in the same way, i.e., module.exports = {}.
Below is a working example with socket.io.
const { ServiceBroker } = require("moleculer");
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");
const io = require("socket.io-client");

const IOService = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
};

const HelloService = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
};

const broker = new ServiceBroker();

broker.createService(IOService);
broker.createService(HelloService);

broker.start().then(async () => {
    const socket = io("http://localhost:3000", {
        reconnectionDelay: 300,
        reconnectionDelayMax: 300
    });

    socket.on("connect", () => {
        console.log("Connection with the Gateway established");
    });

    socket.emit("call", "hello.greeter", (error, res) => {
        console.log(res);
    });
});
To make it work with moleculer-runner just copy the service declarations into my-service.service.js. So for example, your api.service.js could look like:
// api.service.js
module.exports = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
}
and your greeter service:
// greeter.service.js
module.exports = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
}
And run npm run dev or moleculer-runner --repl --hot services

Akka HTTP WebSocket client equivalent of this node.js

I have some user documentation that shows how to use a websocket with this node snippet:
var socket = io("HOST:PORT");
socket.on('request-server', function() {
    socket.emit('server-type', 'red')
});
What would the equivalent client code be in Akka HTTP?
I have derived the following from the example in the Akka documentation. It isn't quite what I'd like to write, because:
I think I need to connect and wait for the request-server event before sending any events, and I don't know how to do that.
I don't know how to format the TextMessages in the Source to be equivalent to `socket.emit('server-type', 'red')`.
It only prints "closed".
implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()
import system.dispatcher

val incoming: Sink[Message, Future[Done]] = Sink.foreach[Message] {
  case message: TextMessage.Strict => println(message.text)
  case z => println(z)
}

val outgoing = Source(List(TextMessage("'server-type': 'red'")))

val webSocketFlow = Http().webSocketClientFlow(
  WebSocketRequest("ws://localhost:3000/socket.io"))

val (upgradeResponse, closed) =
  outgoing
    .viaMat(webSocketFlow)(Keep.right)
    .toMat(incoming)(Keep.both)
    .run()

val connected = upgradeResponse.flatMap { upgrade =>
  if (upgrade.response.status == StatusCodes.SwitchingProtocols) {
    Future.successful(Done)
  } else {
    throw new RuntimeException(s"Connection failed: ${upgrade.response.status}")
  }
}

connected.onComplete(println)
closed.foreach(_ => println("closed"))
What is the Akka client equivalent to the given socket.io code?
Your connection is getting closed immediately after the "outgoing" source has sent its messages.
Check out Half-Closed WebSockets here: http://doc.akka.io/docs/akka-http/10.0.0/scala/http/client-side/websocket-support.html#half-closed-websockets

How to use masstransit's retry policy with sagas?

Configuring a RetryPolicy inside a ReceiveEndpoint for a queue used to store messages from commands and events (as shown below) does not appear to work when the queue is a saga endpoint queue.
This configuration works fine (note the endpoint RegisterOrderServiceQueue):
...
var bus = BusConfigurator.ConfigureBus((cfg, host) =>
{
    cfg.ReceiveEndpoint(host, RabbitMqConstants.RegisterOrderServiceQueue, e =>
    {
        e.UseRetry(Retry.Except<ArgumentException>().Immediate(3));
        ...
...but the same RetryPolicy configuration on the Windows service that runs the saga state machine does not work (note the endpoint SagaQueue):
...
var bus = BusConfigurator.ConfigureBus((cfg, host) =>
{
    cfg.ReceiveEndpoint(host, RabbitMqConstants.SagaQueue, e =>
    {
        e.UseRetry(Retry.Except<ArgumentException>().Immediate(3));
        ...
StateMachine class source code that throws an ArgumentException:
...
During(Registered,
    When(ApproveOrder)
        .Then(context =>
        {
            throw new ArgumentException("Test for monitoring sagas");
            context.Instance.EstimatedTime = context.Data.EstimatedTime;
            context.Instance.Status = context.Data.Status;
        })
        .TransitionTo(Approved),
...
But when ApproveOrder occurs, the RetryPolicy rules are ignored; connecting a ConsumeObserver to the bus the saga is connected to shows that the ConsumeFault method is executed 5 times (which is MassTransit's default behavior).
Should this work? Is there any misconception in my configuration?
