At startup, I check for some data and if not present attempt to save some defaults (temporarily for testing).
val subs = repo.findAll().toIterable()
if (subs.none()) {
    repo.saveAll(defaults.map {
        Source(it.link.hashCode().toLong(), it::class.java.canonicalName, arrayOf(it.link))
    }).blockLast()
}
On the first run, execution reaches the saveAll() but never returns from blockLast(). The data is saved in MongoDB, and I can confirm it with Robo 3T.
Subsequent runs, with data actually present, instead hang on the first findAll() and never unblock.
Profiling in MongoDB appears to show a successful query.
[Screenshot: MongoDB profile of the findAll() query]
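For reference, the same seeding logic can also be written as one chained pipeline with a single terminal step, so that only the last operator blocks. This is an untested sketch, reusing the same repo and defaults:

// Count instead of materialising a blocking Iterable, then seed only when empty.
repo.count()
    .filter { count -> count == 0L }
    .flatMapMany {
        repo.saveAll(defaults.map { d ->
            Source(d.link.hashCode().toLong(), d::class.java.canonicalName, arrayOf(d.link))
        })
    }
    .blockLast() // the single blocking point at startup

This wouldn't by itself explain the hang, but it narrows the blocking to one place.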
My repository and entity are as follows:
interface SourceRepository : ReactiveCrudRepository<Source, Long> {
    //
}

data class Source(
    @Id val id: Long,
    val type: String,
    val params: Array<String>
)
This is in Kotlin, against Spring Boot 2.0.0.M4. I am targeting a MongoDB instance running in Docker. If I remove this bit of startup logic, my other ReactiveCrudRepository is able to read and write just fine, never blocking.
The working repository's saveAll() call also ends in a blockLast(), as I found that without it the save would never actually occur.
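That last observation is consistent with reactive repositories returning cold publishers: nothing touches the database until something subscribes. Roughly, as a sketch (entities stands in for whatever is being saved):

// No I/O happens here: saveAll() only assembles a cold publisher.
val pending = repo.saveAll(entities)

// The writes execute only once the publisher is subscribed, e.g. by a
// terminal blocking call such as blockLast(), or an explicit subscribe().
pending.blockLast()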
I'm using Kotlin Exposed and Spring ACL's JdbcMutableAclService.
The dummy code looks like this:
transaction {
    // operation 1
    dao.updateSomething(resourceId)
    val sids = dao.getUserIdsByResourceId(resourceId)

    // operation 2
    val pObjectIdentity = ObjectIdentityImpl(PROJECT, resourceId)
    val pMutableAcl = aclService.readAclById(pObjectIdentity) as MutableAcl
    var i = pMutableAcl.entries.size
    sids.forEach {
        pMutableAcl.insertAce(i++, BasePermission.READ, PrincipalSid(it), true)
    }
    aclService.updateAcl(pMutableAcl)

    // operation 3
    val rObjectIdentity = ObjectIdentityImpl(RESOURCE, resourceId)
    val rMutableAcl = aclService.readAclById(rObjectIdentity) as MutableAcl
    i = rMutableAcl.entries.size // reassign rather than redeclare, which would not compile
    sids.forEach {
        rMutableAcl.insertAce(i++, BasePermission.READ, PrincipalSid(it), true)
    }
    aclService.updateAcl(rMutableAcl)
}
If something fails in operation 3, it writes nothing to the DB, the outer transaction is also rolled back, and operation 1 is not committed either.
Unfortunately, operation 2 is not rolled back.
So my assumption is that every use of updateAcl creates its own isolated transaction.
I don't know how this works with Spring JPA and the @Transactional annotation (whether JdbcMutableAclService takes an outer transaction into consideration or not), but with Exposed it does not.
Is this correct behaviour at all? Should every ACL update be an isolated transaction?
Is there a way to integrate Exposed and JdbcMutableAclService without implementing my own MutableAclService?
UPD for @Tapac
I'm using org.jetbrains.exposed:exposed-spring-boot-starter without any additional configuration, so based on ExposedAutoConfiguration the transaction manager is org.jetbrains.exposed.spring.SpringTransactionManager.
But while debugging I saw some references to ThreadLocalTransactionManager in the stack trace.
I don't know if this is useful information, but I don't use Spring's transaction annotation; instead I use Exposed's transaction {} block.
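Given that setup, one untested workaround sketch (not a confirmed fix) is to drive the whole unit of work through Spring's transaction manager rather than Exposed's transaction {} block, so that the JdbcTemplate calls inside JdbcMutableAclService and the Exposed DAO calls share one physical transaction. Here transactionManager, dao and aclService are assumed to be injected as in the question:

import org.springframework.transaction.support.TransactionTemplate

// Untested sketch: TransactionTemplate runs the block under the Spring-managed
// transaction that SpringTransactionManager also exposes to Exposed code.
TransactionTemplate(transactionManager).execute {
    dao.updateSomething(resourceId)
    val sids = dao.getUserIdsByResourceId(resourceId)

    val acl = aclService.readAclById(ObjectIdentityImpl(PROJECT, resourceId)) as MutableAcl
    var i = acl.entries.size
    sids.forEach { acl.insertAce(i++, BasePermission.READ, PrincipalSid(it), true) }
    aclService.updateAcl(acl)
}

If the ACL service's internal JdbcTemplate picks up the same DataSource, a failure anywhere in the block should then roll everything back together.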
I am having trouble when running the last test against a Kotlin Spring Boot application. One critical object has a many-to-many relationship to another one:
@Entity
data class Subscriber(
    @ManyToMany
    @JoinTable(name = "subscriber_message",
        joinColumns = [JoinColumn(name = "subscriber_id")],
        inverseJoinColumns = [JoinColumn(name = "message_id")],
        indexes = [
            Index(name = "idx_subscriber_message_message_id", columnList = "message_id"),
            Index(name = "idx_subscriber_message_subscriber_id", columnList = "subscriber_id")
        ]
    )
    val messages: MutableList<Message>
)
@Entity
data class Message(
)
Messages are added to a subscriber like this:
subscriber.messages.add(message)
save(subscriber)
When a new message arrives, the controller asks the repository to add the message to all existing subscribers by calling the following method:
@Synchronized
fun SubscriberRepository.addMessage(message: Message) {
    findAll().forEach {
        it.messages.add(message)
        save(it)
    }
}
We currently use MutableList for this property, since we need to add new elements to the list. This type is not thread safe, so I tried the good old Java concurrent set java.util.concurrent.ConcurrentSkipListSet, but Spring complained that this type is not supported.
Is there a better way to stay thread safe in a web application with such a relationship? The list of messages is consumed by a subscriber, which then clears the messages it is done with. But because of concurrency, the cleaning process doesn't work either!
I don't think JPA is going to allow you to initialise it with any concurrent implementation, because Hibernate has its own implementations, like PersistentSet etc., for each collection type it supports.
When adding a message, you already have a synchronised method, so that should be fine.
Now I guess many threads are trying to consume the messages from the retrieved subscriber. So why don't you modify the subscriber like this before giving it to the threads that consume it:
retrieve the subscriber with its messages
subscriber.setMessages(Collections.synchronizedList(subscriber.getMessages())) and give it to the threads (I don't know what the equivalent is in Kotlin)
Now subscriber.messages.removeAll(confirmedMessages) will be thread safe.
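The Kotlin equivalent would be roughly the following sketch, assuming the entity's messages property is changed to a var so it can be reassigned:

// Wrap the loaded list so that reads and writes synchronize on the wrapper.
subscriber.messages = java.util.Collections.synchronizedList(subscriber.messages)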
I'm working on a Spring Boot (2.2) project using Kotlin, with CouchDB as a (reactive) database, and in consequence async DAOs (either suspend functions, or functions returning a Flow). I'm trying to set up WebFlux in order to have async controllers too (again, I want to return Flows, not Flux). But I'm having trouble retrieving my security context from ReactiveSecurityContextHolder.
From what I've read, unlike SecurityContextHolder, which uses a ThreadLocal to store it, ReactiveSecurityContextHolder relies on the fact that Spring, while subscribing to my reactive chain, also stores that context inside the chain, thus allowing me to call ReactiveSecurityContextHolder.getContext() from within the chain.
The problem is that I have to transform my Mono<SecurityContext> into a Flow at some point, which makes me lose my SecurityContext. So my question is: is there a way to have a Spring Boot controller return a Flow while retrieving the security context from ReactiveSecurityContextHolder inside my logic? Basically, after simplification, it should look like this:
@GetMapping
fun getArticles(): Flow<String> {
    return ReactiveSecurityContextHolder.getContext().flux().asFlow() // returns nothing
}
Note that if I return the Flux directly (skipping the .asFlow()), or add a .single() or .toList() at the end (hence using a suspend fun), then it works fine and my security context is returned; but again, that's not what I want. I guess the solution would be to transfer the context from the Flux (the initial reactive chain from ReactiveSecurityContextHolder) to the Flow, but that doesn't seem to happen by default.
Edit: here is a sample project showcasing the problem: https://github.com/Simon3/webflux-kotlin-sample
What you are really trying to achieve is accessing your ReactorContext from inside a Flow.
One way to do this is to relax the requirement of returning a Flow and return a Flux instead. This allows you to recover the ReactorContext and pass it to the Flow you are going to use to generate your data.
@ExperimentalCoroutinesApi
@GetMapping("/flow")
fun flow(): Flux<Map<String, String>> = Mono.subscriberContext().flatMapMany { reactorCtx ->
    flow {
        val ctx = coroutineContext[ReactorContext.Key]?.context?.get<Mono<SecurityContext>>(SecurityContext::class.java)?.asFlow()?.single()
        emit(mapOf("user" to ((ctx?.authentication?.principal as? User)?.username ?: "<NONE>")))
    }.flowOn(reactorCtx.asCoroutineContext()).asFlux()
}
In the case where you need to access the ReactorContext from a suspend method, you can simply get it back from the coroutineContext with no further artifice:
@ExperimentalCoroutinesApi
@GetMapping("/suspend")
suspend fun suspend(): Map<String, String> {
    val ctx = coroutineContext[ReactorContext.Key]?.context?.get<Mono<SecurityContext>>(SecurityContext::class.java)?.asFlow()?.single()
    return mapOf("user" to ((ctx?.authentication?.principal as? User)?.username ?: "<NONE>"))
}
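Since that context lookup appears in both handlers, it could be factored into a small suspend helper. This is only a sketch (the helper name is mine), and like the snippets above it assumes the SecurityContext Mono actually emits a value:

// Hypothetical helper: pulls the SecurityContext (if present) out of the
// coroutine's ReactorContext, mirroring the inline lookups above.
suspend fun currentSecurityContext(): SecurityContext? =
    coroutineContext[ReactorContext.Key]
        ?.context
        ?.get<Mono<SecurityContext>>(SecurityContext::class.java)
        ?.asFlow()
        ?.single()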
When operating on large data sets, Spring Data offers two abstractions: Stream and Page. We've been using Stream for a while with no issues, but recently I wanted to try a paginated approach and ran into a reliability issue.
Consider the following:
@Entity
public class MyData {
}

public interface MyDataRepository extends JpaRepository<MyData, UUID> {
}
@Component
public class MyDataService {
    private MyDataRepository repository;

    // Bridge between a reactive service and a transactional / non-reactive database call
    @Transactional
    public void getAllMyData(final FluxSink<MyData> sink) {
        final Pageable firstPage = PageRequest.of(0, 500);
        Page<MyData> page = repository.findAll(firstPage);
        while (page != null && page.hasContent()) {
            page.getContent().forEach(sink::next);
            if (page.hasNext()) {
                page = repository.findAll(page.nextPageable());
            } else {
                page = null;
            }
        }
        sink.complete();
    }
}
Using two Postgres 9.5 databases, the source database had close to 100,000 rows while the destination was empty. The example code was then used to copy from the source to the destination. At the end I would find that my destination database had a far smaller row count than the source.
Run as a Spring Boot app
The Flux doing the copy used 4-6 threads in parallel (for speed)
Total run time of at least an hour (max was 2 hours)
As it turns out, I was processing the same rows multiple times (and missing other rows as a result). This led me to a fix that others had already run into: you should provide a Sort.by(...) argument.
After changing the service to use:
// Make our pages sorted by the PKEY
final Pageable firstPage = PageRequest.of(0, 500, Sort.by("id"));
I found that while it GREATLY helped, I would still process some rows multiple times (going from losing about half the rows to seeing only ~12 duplicates). When I use a Stream instead, I have no issues.
Does anyone have an explanation for what is going on? I don't seem to get any duplicates until the test has been running for at least 10-15 minutes, which almost leads me to believe that there is some kind of session or other timeout (either in the client or on the database) causing the hiccups. But I'm really far out of my knowledge area for troubleshooting it further, heh.
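One note on mechanics that may help frame answers: offset pagination re-executes the query for every page, so any concurrent write (or nondeterministic ordering among equal sort keys) can shift rows between pages. A keyset ("seek") approach anchors each page to the last key seen, which avoids that class of problem. Below is an untested Kotlin sketch of the idea; the derived query method is hypothetical and assumes MyData exposes its UUID primary key as id:

import java.util.UUID
import org.springframework.data.jpa.repository.JpaRepository
import reactor.core.publisher.FluxSink

// Hypothetical repository method: fetch the next batch strictly after the last seen id.
interface MyDataKeysetRepository : JpaRepository<MyData, UUID> {
    fun findTop500ByIdGreaterThanOrderByIdAsc(lastId: UUID): List<MyData>
}

fun streamAll(repo: MyDataKeysetRepository, sink: FluxSink<MyData>) {
    var lastId = UUID(0L, 0L) // the all-zero UUID sorts first in Postgres
    while (true) {
        val batch = repo.findTop500ByIdGreaterThanOrderByIdAsc(lastId)
        if (batch.isEmpty()) break
        batch.forEach(sink::next)
        lastId = batch.last().id // batches never overlap, even under concurrent writes
    }
    sink.complete()
}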
I have a problem with async controllers in Grails. Consider the following controller:
@Transactional(readOnly = true)
class RentController {

    def myService
    UserProperties props

    def beforeInterceptor = {
        this.props = fetchUserProps()
    }

    //..other actions

    @Transactional
    def rent(Long id) {
        //check some preconditions here, calling various service methods...
        if (!allOk) {
            render status: 403, text: 'appropriate.message.key'
            return
        }
        //now we long poll because most of the time the result will be
        //success within a couple of seconds
        AsyncContext ctx = startAsync()
        ctx.timeout = 5 * 1000 * 60 + 5000
        ctx.start {
            try {
                //wait for external service to confirm - can take a long time or even time out
                //save appropriate domain objects if successful
                //placeRental is also marked with @Transactional (if that makes any difference)
                def result = myService.placeRental()
                if (result.success) {
                    render text: "OK", status: 200
                } else {
                    render status: 400, text: "rejection.reason.${result.rejectionCode}"
                }
            } catch (Throwable t) {
                log.error "Rental process failed", t
                render text: "Rental process failed with exception ${t?.message}", status: 500
            } finally {
                ctx.complete()
            }
        }
    }
}
The controller and service code appear to work fine (though the above code is simplified), but they will sometimes cause a database session to get 'stuck in the past'.
Let's say I have a UserProperties instance whose accountId property is updated from 1 to 20 somewhere else in the application while a rent action is waiting in the async block. As the async block eventually terminates one way or another (it may succeed, fail or time out), the app will sometimes get a stale UserProperties instance with accountId: 1. If I refresh the updated user's properties page, I will see accountId: 1 about 1 time in 10 refreshes, while the rest of the time it will be 20 - and this is on my development machine where no one else is accessing the application (though the same behaviour can be observed in production). My connection pool also holds 10 connections, so I suspect there may be a correlation here.
Other strange things happen too - for example, I will get StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) from actions doing something as simple as render(UserProperties.list() as JSON) - after the response has already rendered (successfully, apart from the noise in the logs) and despite the action being annotated with @Transactional(readOnly = true).
A stale session doesn't seem to appear every time, and so far our solution has been to restart the server every evening (the app has few users for now), but the error is annoying and the cause was hard to pinpoint. My guess is that a DB transaction doesn't get committed or rolled back because of the async code, but GORM, Spring and Hibernate have many nooks and crannies where things could get stuck.
We're using Postgres 9.4.1 (9.2 on a dev machine, same problem), Grails 2.5.0, Hibernate plugin 4.3.8.1, Tomcat 8, Cache plugin 1.1.8, Hibernate Filter plugin 0.3.2 and the Audit Logging plugin 1.0.1 (other stuff too, obviously, but this feels like it could be relevant). My datasource config contains:
hibernate {
    cache.use_second_level_cache = true
    cache.use_query_cache = false
    cache.region.factory_class = 'org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory'
    singleSession = true
    flush.mode = 'manual'
    format_sql = true
}
Grails bug. And a nasty one: everything seems OK until your app starts acting funny in completely unrelated parts of the app.