Is there something out-of-the-box to run a LINQ query in the background, maybe based on PLINQ? I have tried a few things but did not find a proper approach.
I know I can create a background worker to do so, but I am looking for something I can just use, without having to write all the handling myself.
Overall picture: I am trying to keep my WinForms application responsive while reading the data (via LINQ) and to avoid blocking when reading larger amounts of data.
You could spawn a Task<T>, and have it wrap your PLINQ query.
PLINQ isn't about creating asynchronous operations (which is what you want), but rather about concurrent processing within a single (blocking) operation. Instead, you probably want to do something like:
Task<IEnumerable<YourType>> task = Task.Factory.StartNew<IEnumerable<YourType>>(
    () =>
    {
        // Use standard LINQ here. ToList() forces the query to execute on the
        // background thread instead of deferring it until the results are
        // enumerated on the UI thread.
        return myCollection.Where(SomeCriteria).ToList();
    }
);

// When this is completed, do something with the results
task.ContinueWith(t =>
    {
        IEnumerable<YourType> results = t.Result;
        // Use results here (on the UI thread - no Invoke required)
    },
    TaskScheduler.FromCurrentSynchronizationContext());
This may be a question about coroutines in general, but in my Ktor server application (Netty engine, default configuration) I perform several asynchronous calls to a database and an API endpoint, and I want to make sure I am using coroutines efficiently. My questions are as follows:
Is there a tool or method to work out whether my code is using coroutines effectively, or do I just need to spam my endpoint with curl and measure the performance of moving work to another context, e.g. a compute context?
I don't want to start moving tasks/jobs to another context "just in case", but should I treat the default coroutine context in my Route.route() like the Android main thread and perform the minimum amount of work on it?
Here is a rough example of the code that I'm using:
fun Route.route() {
    get("/") {
        // getRemoteText() is nullable, so fall back to an empty body here
        call.respondText(getRemoteText() ?: "")
    }
}

suspend fun getRemoteText(): String? {
    return suspendCoroutine { cont ->
        // thirdPartyLibrary, success and data are placeholders for a real client call
        val document = thirdPartyLibrary.get()
        if (success) {
            cont.resume(data)
        } else {
            cont.resume(null)
        }
    }
}
You could use something like Apache JMeter, but writing a script and spamming your server with curl also seems like a good option to me.
Coroutines are pretty efficient when it comes to context/thread switching, and with Dispatchers.Default and Dispatchers.IO you get a thread pool. There is some documentation around this, but I think you can definitely leverage these dispatchers for heavy operations.
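For example, here is a minimal sketch of pushing CPU-bound work onto Dispatchers.Default from a suspending function (which is what a Ktor route handler body already is). The handle and heavyComputation names, and what they compute, are made up purely for illustration:

import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Hypothetical CPU-bound work standing in for whatever the handler needs to compute.
fun heavyComputation(input: String): String = input.reversed()

suspend fun handle(input: String): String =
    // Hop off the caller's dispatcher onto the shared Default pool for CPU-bound work;
    // Dispatchers.IO would be the choice for blocking calls such as JDBC or file access.
    withContext(Dispatchers.Default) {
        heavyComputation(input)
    }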
There are a few tools for testing endpoints. JMeter is good; there are also command-line tools like wrk, wrk2 and siege.
Of course, context switching has a cost. The routing coroutine is safe for running blocking operations unless you have the shareWorkGroup option set. However, it's usually good to use a separate thread pool, because you can control its size (the maximum number of threads) so that you don't bring your database down.
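To illustrate that last point, here is a minimal sketch of a dedicated, size-limited dispatcher for database work; the pool size of 4 and the dbDispatcher/onDb names are arbitrary choices for the example:

import kotlinx.coroutines.asCoroutineDispatcher
import kotlinx.coroutines.withContext
import java.util.concurrent.Executors

// A dedicated pool for blocking database calls. Its size caps how many queries
// can run concurrently, so the database can't be overwhelmed.
val dbDispatcher = Executors.newFixedThreadPool(4).asCoroutineDispatcher()

suspend fun <T> onDb(block: () -> T): T =
    withContext(dbDispatcher) { block() }

Such a dispatcher should be closed (dbDispatcher.close()) when the application shuts down, since it owns its threads.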
I want to stream result objects captured by a Spring JDBC RowCallbackHandler via a Kotlin Sequence.
The code looks basically like this:
fun findManyObjects(): Sequence<Thing> = sequence {
    val rowHandler = object : RowCallbackHandler {
        override fun processRow(resultSet: ResultSet) {
            val thing = // create from resultSet
            yield(thing) // ERROR! No coroutine scope
        }
    }
    jdbcTemplate.query("select * from ...", rowHandler)
}
But I get the compilation error:
Suspension functions can be called only within coroutine body.
However, exactly this "coroutine body" should exist, because the whole block is wrapped in a sequence builder. But it doesn't seem to work with a nested object.
Minimal example to show that it doesn't compile with a nested object:
// compiles
sequence {
    yield(1)
}

// doesn't compile
sequence {
    object {
        fun doit() {
            yield(1) // Suspension functions can be called only within coroutine body.
        }
    }
}
How can I pass an object from the ResultSet into the Sequence?
Use Flow for asynchronous data streams
The reason you can't call yield inside your RowCallbackHandler object is twofold.
The processRow function isn't a suspending function (and can't be, because it's declared in and called by Java). A suspending function like yield can only be called by another suspending function.
A sequence always ends when the sequence { ... } builder returns. Even if you and I know that the query method will invoke the RowCallbackHandler before returning from the sequence, the Kotlin compiler has no way of knowing that. Yielding sequence values from functions and objects other than the body of the sequence itself is never allowed, because there's no way of knowing where or when they will run.
To solve this problem, we need to introduce a different kind of coroutine: one that can suspend itself while it waits for the RowCallbackHandler to be invoked.
Unfortunately, because we're talking about JDBC here, there may not be much to gain by introducing full-blown coroutines. Under the hood, calls to the database will always be made in a blocking way, removing a lot of the benefit. It might well be simpler not to try and 'stream' results, and just iterate over them in a boring, old-fashioned way. But let's explore the possibilities all the same.
The problem with sequences
Sequences are designed for on-demand computation, and are not asynchronous. They can't wait for other asynchronous operations, such as callbacks. The sequence builder's yield function simply suspends while waiting for the caller to retrieve the next item, and it's the only suspending function a sequence is ever allowed to call. You can demonstrate this if you try to use a simple suspending call like delay inside a sequence. You'll get a compile error letting you know that you're operating in a restricted coroutine scope.
sequence<String> { delay(1000) } // doesn't compile
Without the ability to call suspending functions, there's no way to wait for a callback to be invoked. Recognising this limitation, Kotlin provides an alternative mechanism for streams of on-demand values that do provide data in an asynchronous way. It's called a Flow.
Callback flows
The mechanism for using Flows to provide values from a callback interface is described very nicely by Roman Elizarov in his Medium article Callbacks and Kotlin Flows.
If you did want to use a callback flow, you'd simply replace sequence with callbackFlow, and replace yield with sendBlocking (in newer versions of kotlinx.coroutines, sendBlocking is deprecated in favour of trySendBlocking).
Your code might look something like this:
fun findManyObjects(): Flow<Thing> = callbackFlow {
    val rowHandler = object : RowCallbackHandler {
        override fun processRow(resultSet: ResultSet) {
            val thing = // create from resultSet
            sendBlocking(thing)
        }
    }
    jdbcTemplate.query("select * from ...", rowHandler)
    close() // the query is finished, so there are no more rows
}
A simpler flow
While that's the idiomatic way to stream values provided by a callback, it might not be the simplest approach to this problem. By avoiding callbacks altogether, you can use the much more common flow builder, passing each value to its emit function. But now that you have asynchrony in the form of coroutines, you can't just return a flow and then allow Spring to immediately close the result set. You need to be able to delay the closing of the result set until the flow has actually been consumed. That means peeling back the abstractions provided by RowCallbackHandler or ResultSetExtractor, which expect to process all the results in a blocking way, and instead providing your own implementation.
fun Connection.findManyObjects(): Flow<Thing> = flow {
    prepareStatement("select * from ...").use { statement ->
        statement.executeQuery().use { resultSet ->
            while (resultSet.next()) {
                val thing = // create from resultSet
                emit(thing)
            }
        }
    }
}
Note the use blocks, which will deal with closing the statement and result set. Because we don't reach the end of the use blocks until the while loop has completed and all the values have been emitted, the flow is free to suspend while the result set remains open.
So why use a flow at all?
You might notice that if you do it this way, you can actually replace flow and emit with sequence and yield. So have we come full circle? Well, sort of. The difference is that a flow can only be consumed from a coroutine, whereas with sequence, you can iterate over the resulting values without suspending at all. In this particular case, it's a hard call to make, because JDBC operations are always blocking.
If you use a sequence, the calling thread will block as it waits to receive the data. Values in a sequence are always computed by the thing consuming the sequence, so if the sequence invokes a blocking function, the consumer's thread will block waiting for the value. In a non-coroutine application, that might be okay, but if you're using coroutines, you really want to avoid hiding blocking calls inside innocuous-looking sequences.
If you use a flow, you can at least isolate the blocking calls by having the flow run on a particular dispatcher. For example, you could use the built-in IO dispatcher to perform the JDBC call, then switch back to the default dispatcher for any further processing. If you definitely want to stream values, I think this is a better approach than using a sequence.
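For instance, building on the Connection.findManyObjects() flow above, here is a minimal sketch of isolating the blocking JDBC work on the IO dispatcher with flowOn (the findManyObjectsOnIo name is just for illustration):

import java.sql.Connection
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flowOn

// flowOn moves everything upstream of it (the statement execution and row iteration
// inside findManyObjects) onto the IO dispatcher, while collectors downstream keep
// running on whatever dispatcher they were launched with.
fun Connection.findManyObjectsOnIo(): Flow<Thing> =
    findManyObjects().flowOn(Dispatchers.IO)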
With all this in mind, you'll need to be careful with your use of coroutines and dispatchers if you do choose one of these solutions. If you'd rather not worry about that, there's nothing wrong with using a regular ResultSetExtractor and forgetting about both sequences and flows for now.
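For completeness, that plain approach can be as small as this sketch. It uses a RowMapper rather than a ResultSetExtractor, the findManyObjectsBlocking name is illustrative, and the Thing constructor call and column name are assumptions you would replace with your own mapping:

import org.springframework.jdbc.core.JdbcTemplate
import org.springframework.jdbc.core.RowMapper

fun findManyObjectsBlocking(jdbcTemplate: JdbcTemplate): List<Thing> =
    jdbcTemplate.query(
        "select * from ...",
        // Map each row of the ResultSet to a Thing; the column name is a placeholder.
        RowMapper { rs, _ -> Thing(rs.getString("name")) }
    )

Everything happens while the result set is open, and the whole list comes back from one blocking call, which is exactly the boring, old-fashioned iteration mentioned earlier.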
This is more of an opinion question, as in whether this pattern, which many people use, should be an Rx use case at all.
In apps there is usually an SQL database, which the UI queries as an observable; the observable emits once the query has loaded and again any time the data changes (Room / SQLDelight, etc.).
Reads sound okay. However, is it possible to have "pure" writes to the database?
Writing to the database might look like this:
fun sync() = Completable.fromCallable {
    // do something
    database.writeSomethingSynchronously()
}

SomeUi {
    init {
        database.someQueryObservable()
            .subscribe { show list }
    }
}
Imagine you want to display a progress bar while this Completable is in flight.
What is effectively happening here is a side effect on the database, which means the open database observable will re-emit when the data is written, but still before sync() returns (assuming a single thread for simplicity).
So there is a point in time where the new data is already shown in the UI while the progress bar is still visible (and the timings get worse with multithreading). This is an invalid state.
In the imperative world, sync would provide a completion callback, in which one would reload the query manually and show/hide the progress bar synchronously (and somehow block the database change listener for the duration of the sync writes?).
Is there a way around this at all?
Previously I used the SQLite-net library for all my SQLite database tasks, and it worked well.
But my app has a huge amount of data to insert, and it took a lot of time. So I decided to use the SQLite-WinRT wrapper only where bulk inserts are needed, since it supports preparing statements, binding data and then executing them, which gives faster processing and better performance.
In my app there are lots of CRUD operations that use SQLite-net methods, and I left those as they are, since it is hard to switch completely from the SQLite-net library to the SQLite-WinRT wrapper.
My app has a background task that runs web-service calls and a lot of CRUD operations using only the SQLite-net library.
Whenever I try to bulk insert with the SQLite-WinRT wrapper using prepared statements while the background task is running, the SQLite-net library throws a Busy exception. I know the reason: the background service does a lot of CRUD operations through SQLite-net, so while I am bulk inserting with the SQLite-WinRT wrapper, the Busy exception is thrown because the SQLite database is already busy with the background work going through SQLite-net.
So, my question is how to handle this situation. Please suggest some ideas for handling such cases. I thought of two ideas:
1. Stopping the background service while bulk inserting (in the background there is a series of long tasks like calling web services and working with the SQLite DB, so stopping the background service at once might not be a good idea).
2. Closing all SQLite-net connections (didn't work as expected, though).
Any help would be appreciated. Thanks in advance.
While bulk inserting, I started like this:
string dbPath = "collection.sqlite";
var file = await ApplicationData.Current.LocalFolder.GetFileAsync(dbPath);

var db = new SQLiteWinRT.Database(file);
await db.OpenAsync(SqliteOpenMode.OpenReadWrite);

using (var statement = await db.PrepareStatementAsync(
    "INSERT INTO Forms(ServerFormId,FormFileName,FormStatusId,PriorityId) VALUES(?,?,?,?)"))
{
    await db.ExecuteStatementAsync("BEGIN TRANSACTION");

    statement.Reset();
    statement.BindTextParameterAt(1, "0");
    statement.BindTextParameterAt(2, formName);
    statement.BindTextParameterAt(3, formStatusId);
    statement.BindTextParameterAt(4, priorityId);
    await statement.StepAsync().AsTask().ConfigureAwait(false);
}
await db.ExecuteStatementAsync("COMMIT TRANSACTION");
SQLite-WinRT: https://blogs.msdn.microsoft.com/andy_wigley/2013/11/21/how-to-massively-improve-sqlite-performance-using-sqlwinrt/
SQLite-net: http://www.codeproject.com/Articles/826602/Using-SQLite-as-local-database-with-Universal-Apps
I'm afraid the only option is to use a lock or a semaphore before accessing the database.
The lock mechanism guarantees that only one thread executes the inner code block; other threads wait synchronously.
readonly object sync = new object();

void MyMethod() {
    lock (sync) {
        // ... code that accesses the database
    }
}
A semaphore is similar, but the inner code block can be executed by at most n threads.
Please see more info about SemaphoreSlim on MSDN.
I have a concurrent collection that contains 100K items. The processing of each item in the collection can take as little as 100 ms or as long as 10 seconds. I want to speed things up by parallelizing the processing and having 100 minions do the work simultaneously. I also have to report some specific data to the UI as this processing occurs, not simply a percentage complete.
I want the parallelized sub-tasks to nibble away at the concurrent collection like a school of minnows attacking a piece of bread tossed into a pond. How do I expose the concurrent collection to the parallelized tasks? Can I have a normal loop and simply launch an async task inside the loop and pass it an IProgress? Do I even need the concurrent collection for this?
It has been recommended to me that I use Parallel.ForEach but I don't see how each sub-process established by the degrees of parallelism could report a custom object back to the UI with each item it processes, not only after it has finished processing its share of the 100K items.
The framework already provides the IProgress<T> interface for this purpose, and an implementation in Progress<T>. To report progress, call IProgress<T>.Report with a progress value. The type T can be any type, not just a number.
Each IProgress<T> implementation can work in its own way. Progress<T> raises an event and calls the callback you pass to it when you create it.
Additionally, Progress<T>.Report executes asynchronously. Under the covers, it uses SynchronizationContext.Post to execute its callback and all event handlers on the thread that created the Progress<T> instance.
Assuming you create a progress value class like this:
class ProgressValue
{
    public long Step { get; set; }
    public string Message { get; set; }
}
You could write something like this:
IProgress<ProgressValue> myProgress = new Progress<ProgressValue>(p =>
{
    myProgressBar.Value = (int)p.Step;
});

IList<int> myVeryLargeList = ...;

Parallel.ForEach(myVeryLargeList, (item, state, step) =>
{
    // Do some heavy work
    myProgress.Report(new ProgressValue
    {
        Step = step,
        Message = String.Format("Processed step {0}", step)
    });
});
EDIT
Oops! Progress<T> implements IProgress<T> explicitly. You have to cast it to IProgress<T>, as @Tim noticed.
Fixed the code to explicitly declare myProgress as an IProgress<ProgressValue>.