Is it possible to circumvent the observable delegate in certain cases?
Use case:
val ls: ArrayList<SomeType> by Delegates.observable(arrayListOf()) { _, _, new ->
    if (someCondition) {
        usesList(new)
        // I want to reset ls to arrayListOf(), but without the invocation of the observable delegate.
    }
}
Sure... You have to have kotlin-reflect on your classpath, though. And I would not recommend using this anyway. :)
val delegatedProperties = D::class.java.getDeclaredField("\$\$delegatedProperties").get(d) as Array<KProperty<*>>
(delegatedProperties.find { it.name == "ls" } as KMutableProperty<*>).setter.call(arrayListOf<Any>())
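For context, the snippet assumes roughly this setup; D, d, and ls come from the code above and the question, SomeType is just a stand-in, and ls is declared as var here (an assumption) so the property actually has a setter:

import kotlin.properties.Delegates

class SomeType // stand-in for the question's element type

class D {
    // var (an assumption) so that a setter exists for the reflection call above
    var ls: ArrayList<SomeType> by Delegates.observable(arrayListOf<SomeType>()) { _, _, new ->
        // observer logic from the question goes here
    }
}

val d = D()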
I have this function:
struct Downloader {
static func presentDownloader(
in viewController: UIViewController,
with urls: [URL],
_ completion: @escaping (Bool) -> Void
) {
DispatchQueue.main.async {
let activityViewController = UIActivityViewController(
activityItems: urls,
applicationActivities: nil
)
activityViewController.completionWithItemsHandler = { _, result, _, _ in
completion(result)
}
viewController.present(
activityViewController,
animated: true,
completion: nil
)
}
}
}
It simply creates a UIActivityViewController and forwards the result of its completionWithItemsHandler to the completion block passed into the static func.
I am now trying to get rid of as many @escaping closures as I can in order to adopt the new async/await syntax, but I don't know if I can do anything here.
I started adding an async to my function, and realized that Xcode shows completionWithItemsHandler with an async keyword, but I really have no idea if I can achieve what I want here.
Thank you for your help
Yes, you can do that, but you need to write a little wrapper yourself. It will look something like this:
@MainActor
func askToShareFile(url: URL) async {
let avc = UIActivityViewController(activityItems: [url], applicationActivities: nil)
present(avc, animated: true)
return await withCheckedContinuation { continuation in
avc.completionWithItemsHandler = { activity, completed, returnedItems, activityError in
print("Activity completed: \(completed), selected action = \(activity), items: \(returnedItems) error: \(activityError)")
if completed {
continuation.resume()
} else {
if activity == nil {
// user cancelled share sheet by closing it
continuation.resume()
}
}
}
}
}
However, an important note: from my experimentation as of today (iOS 15.5), the completion handler is NOT called properly when, on the share sheet, the user selects an app that handles our file by copying it (activityType = com.apple.UIKit.activity.RemoteOpenInApplication-ByCopy).
If you make changes, be careful, as you might lose the continuation here, so please test it yourself too.
I'm racking my brain over how to do this in Rx.
The actual use case is mapping LowerLevelEvent(val userId: String) to HigherLevelEvent(val user: User), where the User is provided by an observable, so it can emit n times. Example output:
LowerLevelEvent1(abc) -> HigherLevelEvent1(userAbc(nameVariation1))
LowerLevelEvent2(abc) -> HigherLevelEvent2(userAbc(nameVariation1))
LowerLevelEvent3(abc) -> HigherLevelEvent3(userAbc(nameVariation1))
LowerLevelEvent4(abc) -> HigherLevelEvent4(userAbc(nameVariation1))
                         HigherLevelEvent4(userAbc(nameVariation2))
                         HigherLevelEvent4(userAbc(nameVariation3))
So my naive solution was to use combineLatest, so that while the userId is unchanged the user observable stays subscribed, i.e. it is not resubscribed when a new lower-level event emits with the same userId:
val _lowerLevelEventObservable: Observable<LowerLevelEvent> = lowerLevelEventObservable
.replayingShare()
val _higherLevelEventObservable: Observable<HigherLevelEvent> = Observables
.combineLatest(
_lowerLevelEventObservable,
_lowerLevelEventObservable
.map { it.userId }
.distinctUntilChanged()
.switchMap { userRepository.findByIdObservable(it) }
) { lowerLevelEvent, user -> createHigherLevelInstance... }
However, this has glitch issues, since both sources in combineLatest originate from the same observable.
Then I thought about
lowerLevelObservable
.switchMap { lowerLevelEvent ->
userRepository.findByIdObservable(lowerLevelEvent.userId)
.map { user -> createHigherLevelInstance... }
}
This, however, can break if lowerLevelObservable emits quickly: since the user observable can take some time, a given lowerLevelX event can be skipped, which I cannot have. Also, it resubscribes the user observable on each emit, which is wasteful since the user most likely won't change.
So, maybe concatMap? That has the issue that the user observable doesn't complete, so concatMap wouldn't work.
Anyone have a clue?
Thanks a lot
// Clarification:
Basically, it's mapping A variants (A1, A2, ...) to A' variants (A1', A2', ...) while attaching a queried object to each, where the query is observable, so it might re-emit after the mapping was made, and then AX' needs to be re-emitted with the new query result. But the query is cold and doesn't complete.
For example: A1(1) -> A1'(user1), A2(1) -> A2'(user1), A3(1) -> A3'(user1); now somebody changes user1 somewhere else in the app, so the next emission is A3'(user1').
Based on the comments you have made, the below would work in RxSwift. I have no idea how to translate it to RxJava. Honestly though, I think there is a fundamental misuse of Rx here. Good luck.
How it works: If it's allowed to subscribe it will; otherwise it will add the event to a buffer for later use. It is allowed to subscribe if it currently isn't subscribed to an inner Observable, or if the inner Observable it's currently subscribed to has emitted an element.
WARNING: It doesn't handle completions properly as it stands. I'll leave that to you as an exercise.
func example(lowerLevelEventObservable: Observable<LowerLevelEvent>, userRepository: UserRepository) {
let higherLevelEventObservable = lowerLevelEventObservable
.flatMapAtLeastOnce { event in // RxSwift's switchLatest I think.
Observable.combineLatest(
Observable.just(event),
userRepository.findByIdObservable(event.userId),
resultSelector: { (lowLevelEvent: $0, user: $1) }
)
}
.map { createHigherLevelInstance($0.lowLevelEvent, $0.user) }
// use higherLevelEventObservable
}
extension ObservableType {
func flatMapAtLeastOnce<U>(from fn: @escaping (E) -> Observable<U>) -> Observable<U> {
return Observable.create { observer in
let disposables = CompositeDisposable()
var nexts: [E] = []
var disposeKey: CompositeDisposable.DisposeKey?
var isAllowedToSubscribe = true
let lock = NSRecursiveLock()
func nextSubscription() {
isAllowedToSubscribe = true
if !nexts.isEmpty {
let e = nexts[0]
nexts.remove(at: 0)
subscribeToInner(e)
}
}
func subscribeToInner(_ element: E) {
isAllowedToSubscribe = false
if let key = disposeKey {
disposables.remove(for: key)
}
let disposable = fn(element).subscribe { innerEvent in
lock.lock(); defer { lock.unlock() }
switch innerEvent {
case .next:
observer.on(innerEvent)
nextSubscription()
case .error:
observer.on(innerEvent)
case .completed:
nextSubscription()
}
}
disposeKey = disposables.insert(disposable)
}
let disposable = self.subscribe { event in
lock.lock(); defer { lock.unlock() }
switch event {
case let .next(element):
if isAllowedToSubscribe == true {
subscribeToInner(element)
}
else {
nexts.append(element)
}
case let .error(error):
observer.onError(error)
case .completed:
observer.onCompleted()
}
}
_ = disposables.insert(disposable)
return disposables
}
}
}
This is cobbled together to illustrate the problem I have with the switch function. I have no problem with it printing "Left" and "Right" endlessly.
The point of switch is to swap the enum's value to the other variant. This solution doesn't work, presumably because switch moves t into itself, so it's no longer usable afterwards. Using mutable references caused a whole host of other problems, like lifetime and mismatched-type errors. The documentation has instructions for how to do this with structs, but not enums. The compiler suggested implementing Copy and Clone on the enum, but that did nothing useful.
How is this type of method supposed to be made in Rust?
fn main() {
let mut t = Dir::Left;
loop {
match &t {
&Dir::Left => println!("Left"),
&Dir::Right => println!("Right"),
}
t.switch();
}
}
enum Dir {
Left,
Right,
}
impl Dir {
//this function is the problem here
fn switch(mut self) {
match self {
Dir::Left => self = Dir::Right,
Dir::Right => self = Dir::Left,
};
}
}
Of course I could just make it so
t = t.switch();
and
fn switch(mut self) -> Self {
match self {
Dir::Left => return Dir::Right,
Dir::Right => return Dir::Left,
};
}
But I feel that would be a comparatively clumsy solution, and I would like to avoid it if at all possible.
Your method consumes your data instead of borrowing it. If you borrow it, it works fine:
impl Dir {
fn switch(&mut self) {
*self = match *self {
Dir::Left => Dir::Right,
Dir::Right => Dir::Left,
};
}
}
In Scala, one can easily do a parallel map, forEach, etc, with:
collection.par.map(..)
Is there an equivalent in Kotlin?
The Kotlin standard library has no support for parallel operations. However, since Kotlin uses the standard Java collection classes, you can use the Java 8 stream API to perform parallel operations on Kotlin collections as well.
e.g.
myCollection.parallelStream()
.map { ... }
.filter { ... }
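For example, here is a complete (made-up) pipeline that actually runs; the terminal collect is needed because streams are lazy:

import java.util.stream.Collectors

fun main() {
    val myCollection = listOf("alpha", "beta", "gamma", "delta")
    val result: List<Int> = myCollection.parallelStream()
        .map { it.length }                    // runs on the common ForkJoinPool
        .filter { it > 4 }
        .collect(Collectors.toList())         // terminal operation triggers the work
    println(result)                           // [5, 5, 5]
}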
As of Kotlin 1.1, parallel operations can also be expressed quite elegantly in terms of coroutines. Here is a custom pmap helper function for lists:
fun <A, B> List<A>.pmap(f: suspend (A) -> B): List<B> = runBlocking {
map { async(Dispatchers.Default) { f(it) } }.map { it.await() }
}
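A quick usage sketch; the numbers and the squaring lambda are just for illustration, and this assumes the pmap above with kotlinx-coroutines on the classpath:

fun main() {
    // pmap blocks internally (runBlocking), so it can be called from plain code
    val squares = listOf(1, 2, 3, 4).pmap { it * it } // lambdas run on Dispatchers.Default
    println(squares) // [1, 4, 9, 16]
}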
You can use this extension method:
suspend fun <A, B> Iterable<A>.pmap(f: suspend (A) -> B): List<B> = coroutineScope {
map { async { f(it) } }.awaitAll()
}
See Parallel Map in Kotlin for more info
There is no official support in Kotlin's stdlib yet, but you could define an extension function to mimic par.map:
fun <T, R> Iterable<T>.pmap(
numThreads: Int = Runtime.getRuntime().availableProcessors() - 2,
exec: ExecutorService = Executors.newFixedThreadPool(numThreads),
transform: (T) -> R): List<R> {
// default size is just an inlined version of kotlin.collections.collectionSizeOrDefault
val defaultSize = if (this is Collection<*>) this.size else 10
val destination = Collections.synchronizedList(ArrayList<R>(defaultSize))
for (item in this) {
exec.submit { destination.add(transform(item)) }
}
exec.shutdown()
exec.awaitTermination(1, TimeUnit.DAYS)
return ArrayList<R>(destination)
}
(github source)
Here's a simple usage example
val result = listOf("foo", "bar").pmap { it+"!" }.filter { it.contains("bar") }
If needed, it allows you to tweak threading by providing the number of threads or even a specific java.util.concurrent.ExecutorService. E.g.
listOf("foo", "bar").pmap(4, transform = { it + "!" })
Please note that this approach just parallelizes the map operation and does not affect any downstream bits; e.g. the filter in the first example would run single-threaded. However, in many cases just the data transformation (i.e. map) requires parallelization. Furthermore, it would be straightforward to extend the approach above to other elements of the Kotlin collection API, as sketched below.
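For instance, a hypothetical pfilter could reuse the same executor-based pattern (the name, defaults, and predicate parameter are illustrative, not part of the answer above):

import java.util.Collections
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

fun <T> Iterable<T>.pfilter(
    numThreads: Int = Runtime.getRuntime().availableProcessors(),
    exec: ExecutorService = Executors.newFixedThreadPool(numThreads),
    predicate: (T) -> Boolean
): List<T> {
    val destination = Collections.synchronizedList(ArrayList<T>())
    for (item in this) {
        // evaluate the predicate on the pool and keep matching items
        exec.submit { if (predicate(item)) destination.add(item) }
    }
    exec.shutdown()
    exec.awaitTermination(1, TimeUnit.DAYS)
    return ArrayList(destination)
}

Like the pmap above, this does not guarantee the order of the results.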
As of version 1.2, Kotlin added stream features compliant with JRE 8.
So, iterating over a list in parallel can be done as below:
fun main(args: Array<String>) {
val c = listOf("toto", "tata", "tutu")
c.parallelStream().forEach { println(it) }
}
Kotlin wants to be idiomatic, but not so terse that it becomes hard to understand at first glance.
Parallel computation through coroutines is no exception: they want it to be easy but not implicit behind some pre-built method, so you can branch the computation explicitly when needed.
In your case:
collection.map {
    async { produceWith(it) }
}.forEach {
    consume(it.await())
}
Notice that to call async and await you need to be inside a coroutine scope; you cannot make suspending calls or launch a coroutine from a non-coroutine context. To enter one you can use either:
runBlocking { /* your code here */ }: it will block the current thread until the lambda returns.
GlobalScope.launch { }: it will execute the lambda concurrently; if your main finishes executing while your coroutines have not, bad things will happen, so in that case it is better to use runBlocking (see the sketch below).
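For example, a minimal self-contained sketch of the snippet above inside runBlocking; produceWith and consume are placeholders for your own functions, and Dispatchers.Default is added here (an assumption) so CPU-bound work actually runs in parallel:

import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.async
import kotlinx.coroutines.runBlocking

fun produceWith(n: Int): Int = n * n   // placeholder work
fun consume(n: Int) = println(n)       // placeholder consumer

fun main() = runBlocking {
    val collection = listOf(1, 2, 3, 4)
    collection.map {
        // start all computations concurrently inside this scope
        async(Dispatchers.Default) { produceWith(it) }
    }.forEach {
        // await each result in order and hand it to the consumer
        consume(it.await())
    }
}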
Hope it helps :)
At the present moment no. The official Kotlin comparison to Scala mentions:
Things that may be added to Kotlin later:
Parallel collections
This solution assumes that your project is using coroutines:
implementation( "org.jetbrains.kotlinx:kotlinx-coroutines-core:1.3.2")
The functions called parallelTransform don't retain the order of elements and return a Flow<R>, while the function parallelMap retains the order and returns a List<R>.
Create a threadpool for multiple invocations:
val numberOfCores = Runtime.getRuntime().availableProcessors()
val executorDispatcher: ExecutorCoroutineDispatcher =
Executors.newFixedThreadPool(numberOfCores ).asCoroutineDispatcher()
use that dispatcher (and call close() when it's no longer needed):
inline fun <T, R> Iterable<T>.parallelTransform(
    dispatcher: ExecutorCoroutineDispatcher,
    crossinline transform: (T) -> R
): Flow<R> = channelFlow {
    val items: Iterable<T> = this@parallelTransform
    val channelFlowScope: ProducerScope<R> = this@channelFlow
    launch(dispatcher) {
        items.forEach { item ->
            launch {
                channelFlowScope.send(transform(item))
            }
        }
    }
}
If threadpool reuse is of no concern (threadpools aren't cheap), you can use this version, which creates and closes its own dispatcher:
inline fun <T, R> Iterable<T>.parallelTransform(
    numberOfThreads: Int,
    crossinline transform: (T) -> R
): Flow<R> = channelFlow {
    val items: Iterable<T> = this@parallelTransform
    val channelFlowScope: ProducerScope<R> = this@channelFlow
    Executors.newFixedThreadPool(numberOfThreads).asCoroutineDispatcher().use { dispatcher ->
        launch(dispatcher) {
            items.forEach { item ->
                launch {
                    channelFlowScope.send(transform(item))
                }
            }
        }.join() // wait for all senders to finish before the dispatcher is closed
    }
}
if you need a version that retains the order of elements:
inline fun <T, R> Iterable<T>.parallelMap(
    dispatcher: ExecutorCoroutineDispatcher,
    crossinline transform: (T) -> R
): List<R> = runBlocking {
    val items: Iterable<T> = this@parallelMap
    val result = ConcurrentSkipListMap<Int, R>()
    launch(dispatcher) {
        items.withIndex().forEach { (index, item) ->
            launch {
                result[index] = transform(item)
            }
        }
    }.join() // wait for all transforms to finish before reading the results
    // ConcurrentSkipListMap is a SortedMap,
    // so the values will be in the right order
    result.values.toList()
}
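A possible usage sketch (the collection and the squaring transform are made up for illustration), reusing the executorDispatcher created above and closing it afterwards:

import kotlinx.coroutines.flow.toList
import kotlinx.coroutines.runBlocking

fun main() {
    // parallelMap blocks internally (runBlocking), so it can be called from plain code
    val ordered: List<Int> = (1..10).toList().parallelMap(executorDispatcher) { it * it }
    println(ordered)

    // parallelTransform returns a Flow, so collect it from a coroutine
    runBlocking {
        val unordered: List<Int> = (1..10).toList()
            .parallelTransform(executorDispatcher) { it * it }
            .toList() // order is not guaranteed here
        println(unordered)
    }

    executorDispatcher.close() // release the thread pool when it's no longer needed
}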
I found this:
implementation 'com.github.cvb941:kotlin-parallel-operations:1.3'
details:
https://github.com/cvb941/kotlin-parallel-operations
I've come up with a couple of extension functions:
1. A suspend extension function on the Iterable<T> type, which processes the items in parallel and returns the result of processing each item. By default it uses the Dispatchers.IO dispatcher to offload blocking tasks to a shared pool of threads. It must be called from a coroutine (including a coroutine with the Dispatchers.Main dispatcher) or from another suspend function.
suspend fun <T, R> Iterable<T>.processInParallel(
dispatcher: CoroutineDispatcher = Dispatchers.IO,
processBlock: suspend (v: T) -> R,
): List<R> = coroutineScope { // or supervisorScope
map {
async(dispatcher) { processBlock(it) }
}.awaitAll()
}
Example of calling from a coroutine:
val collection = listOf("A", "B", "C", "D", "E")
someCoroutineScope.launch {
val results = collection.processInParallel {
process(it)
}
// use processing results
}
where someCoroutineScope is an instance of CoroutineScope.
2. A launch-and-forget extension function on CoroutineScope, which doesn't return any result. It also uses the Dispatchers.IO dispatcher by default. It can be called on a CoroutineScope or from another coroutine.
fun <T> CoroutineScope.processInParallelAndForget(
iterable: Iterable<T>,
dispatcher: CoroutineDispatcher = Dispatchers.IO,
processBlock: suspend (v: T) -> Unit
) = iterable.forEach {
launch(dispatcher) { processBlock(it) }
}
Example of calling:
someCoroutineScope.processInParallelAndForget(collection) {
process(it)
}
// OR from another coroutine:
someCoroutineScope.launch {
processInParallelAndForget(collection) {
process(it)
}
}
2a. A launch-and-forget extension function on Iterable<T>. It's almost the same as the previous one, but the receiver type is different, and the CoroutineScope must be passed as an argument to the function.
fun <T> Iterable<T>.processInParallelAndForget(
scope: CoroutineScope,
dispatcher: CoroutineDispatcher = Dispatchers.IO,
processBlock: suspend (v: T) -> Unit
) = forEach {
scope.launch(dispatcher) { processBlock(it) }
}
Calling:
collection.processInParallelAndForget(someCoroutineScope) {
process(it)
}
// OR from another coroutine:
someScope.launch {
collection.processInParallelAndForget(this) {
process(it)
}
}
You can mimic the Scala API by using extension properties and inline classes. Using the coroutine solution from @Sharon's answer, you can write it like this:
val <A> Iterable<A>.par get() = ParallelizedIterable(this)
@JvmInline
value class ParallelizedIterable<A>(val iter: Iterable<A>) {
suspend fun <B> map(f: suspend (A) -> B): List<B> = coroutineScope {
iter.map { async { f(it) } }.awaitAll()
}
}
With this, your code can change from
anIterable.map { it.value }
to
anIterable.par.map { it.value }
You can also change the entry point as you like instead of using an extension property, e.g.
fun <A> Iterable<A>.parallel() = ParallelizedIterable(this)
anIterable.parallel().map { it.value }
You can also use another parallel solution and implement the rest of the iterable methods inside ParallelizedIterable while keeping the same method names for the operations.
The drawback is that this implementation only parallelizes the one operation after it; to parallelize every subsequent operation as well, you would need to modify ParallelizedIterable further so it returns its own type instead of going back to a plain List, as sketched below.
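A rough sketch of that idea, reusing the coroutine-based map from above; the filter and toList members are illustrative additions, not part of the answer so far:

import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope

@JvmInline
value class ParallelizedIterable<A>(val iter: Iterable<A>) {
    // return ParallelizedIterable again so further operations stay parallel
    suspend fun <B> map(f: suspend (A) -> B): ParallelizedIterable<B> = coroutineScope {
        ParallelizedIterable(iter.map { async { f(it) } }.awaitAll())
    }

    suspend fun filter(p: suspend (A) -> Boolean): ParallelizedIterable<A> = coroutineScope {
        ParallelizedIterable(
            iter.map { async { it to p(it) } }.awaitAll()
                .filter { it.second }
                .map { it.first }
        )
    }

    // leave the parallel "world" explicitly, similar to Scala's .seq
    fun toList(): List<A> = iter.toList()
}

With that change, anIterable.par.map { ... }.filter { ... }.toList() keeps each stage parallel until toList().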
I'm using PromiseKit 3.0 to help chain Alamofire callbacks in a clean way. The objective is to start with a network call that promises to return an array of URLs.
Then I'm looking to execute network calls on as many of those URLs as needed to find the next link I'm looking for. As soon as this link is found, I can pass it to the next step.
This part is where I'm stuck.
I can pick an arbitrary index in the array that I know has what I want, but I can't figure out the looping to keep it going until the right information is returned.
I tried learning from this Obj-C example, but I couldn't get it working in Swift.
https://stackoverflow.com/a/30693077/1079379
Here's a more tangible example of what I've done.
Network.sharedInstance.makeFirstPromise(.GET, url: NSURL(string: fullSourceLink)! )
.then { (idArray) -> Promise<AnyObject> in
let ids = idArray as! [String]
//how do i do that in swift? (from the example SO answer)
//PMKPromise *p = [PMKPromise promiseWithValue: nil]; // create empty promise
//only thing i could do was feed it the first value
var p:Promise<AnyObject> = Network.sharedInstance.makePromiseRequestHostLink(.POST, id: ids[0])
//var to hold my eventual promise value, doesn't really work unless i set it to something first
var goodValue:Promise<AnyObject>
for item in ids {
//use continue to offset the promise from before the loop started
continue
//hard part
p = p.then{ returnValue -> Promise<AnyObject> in
//need a way to check if what i get is what i wanted then we can break the loop and move on
if returnValue = "whatIwant" {
goodvalue = returnValue
break
//or else we try again with the next on the list
}else {
return Network.sharedInstance.makeLoopingPromise(.POST, id: item)
}
}
}
return goodValue
}.then { (finalLink) -> Void in
//do stuff with finalLink
}
Can someone show me how to structure this properly, please?
Is nesting promises like that an anti-pattern to avoid? If so, what is the best approach?
I have finally figured this out with a combination of your post and the link you posted. It works, but I'll be glad if anyone has input on a proper solution.
func download(arrayOfObjects: [Object]) -> Promise<AnyObject> {
    // This stopped the compiler from complaining
    var promise: Promise<AnyObject> = Promise<AnyObject>("emptyPromise")
    for object in arrayOfObjects {
        promise = promise.then { _ in
            return Promise { fulfill, reject in
                Service.getData(stuff: object.stuff, completion: { success, data in
                    if success {
                        print("Got the data")
                    }
                    fulfill(success)
                })
            }
        }
    }
    return promise
}
The only thing I'm not showing in this example is retaining the received data, but I'm assuming you can do that with the results array you have now.
The key to figuring out my particular issue was using the when function: it keeps going until all the calls you pass in are finished. The map makes it easier to look at (and to think about in my head):
}.then { (idArray) -> Void in
    when(idArray.map({ Network.sharedInstance.makePromiseRequest(.POST, params: ["thing": $0]) })).then { link -> Promise<String> in
        return Promise { fulfill, reject in
            let stringLink: [String] = link as! [String]
            for entry in stringLink {
                if entry != "" {
                    fulfill(entry)
                    break
                }
            }
        }
    }.then {
    }
}