Using operation queues with the Combine framework [closed]

With the arrival of the Combine framework, is there still a need to use operation queues? For example, Apple uses operation queues almost everywhere in the WWDC app. So if we use SwiftUI with Combine (asynchronous programming), will there still be a need for operation queues?

Combine is just another asynchronous pattern, but it doesn't supplant operation queues (or dispatch queues). Just as GCD and operation queues happily coexist in our code bases, the same is true of Combine.
GCD is great for easy-to-write, yet still highly performant, code that dispatches tasks to various queues. So if you have something that might risk blocking the main thread, GCD makes it really easy to dispatch that to a background thread and then dispatch a completion block back to the main thread. It also handles timers on background threads, data synchronization, highly optimized parallelized code, etc.
Operation queues are great for higher-level tasks (especially those that are, themselves, asynchronous). You can take these pieces of work, wrap them up in discrete objects (for nice separation of responsibilities) and the operation queues manage execution, cancelation, and constrained concurrency, quite elegantly.
Combine shines at writing concise, declarative, composable, asynchronous event handling code. It excels at writing code that outlines how, for example, one’s UI should reflect some event (network task, notification, even UI updates).
This is obviously an oversimplification, but those are a few of the strengths of each framework. There is definitely overlap among the three, but each has its place.

Related

How is the implementation of Rust's channels functionally different from Go's implementation of channels? [closed]

I specifically intend to use the channel functionality of either language in developing a scalable web service. At present I am unclear about which one would be easier to implement, but also about which one would better fit the intended design, help maintain uptime, require minimal overhead, etc.
I understand that the Go implementation uses CSP methodology, though I'm unclear exactly what the Rust implementation is based on and whether it is even analogous to the Go version.
Is there any similarity, or are they too different to compare?
Are there use-cases where both implementations would operate mostly the same?
There is no such thing as the Rust channel.
Whereas in Go channels are a language concept provided by the Go runtime, in Rust channels can be implemented in a library, and therefore there are as many channel implementations as there are libraries, each with different goals and trade-offs:
There is one MPSC (Multi-Producer, Single-Consumer) channel in the standard library.
There are MPMC (Multi-Producer, Multi-Consumer) channels in the crossbeam ecosystem and in the async-std crate [1].
All of those implementations offer different interfaces, capabilities, and performance trade-offs.
[1] Not an official crate, simply a port of std functionality to async.
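
For contrast, here is a minimal Go sketch (the counts and names are arbitrary) showing the other side of the comparison: a single built-in chan is MPMC out of the box, with multiple producer and consumer goroutines sharing it and no library involved:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	ch := make(chan int) // one built-in channel, MPMC with no extra machinery
	var wg sync.WaitGroup

	// Two producers sending on the same channel.
	for p := 0; p < 2; p++ {
		go func(id int) {
			for i := 0; i < 3; i++ {
				ch <- id*10 + i
			}
		}(p)
	}

	// Two consumers receiving from the same channel.
	wg.Add(2)
	for c := 0; c < 2; c++ {
		go func() {
			defer wg.Done()
			for i := 0; i < 3; i++ {
				fmt.Println(<-ch) // values interleave across producers
			}
		}()
	}
	wg.Wait()
}
```

In Rust, by contrast, the standard library's Receiver cannot be cloned or shared across threads, so getting this shape requires an MPMC implementation such as the ones in crossbeam.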

How is distributed memory parallelism handled in Rust? [closed]

How is distributed memory parallelism handled in Rust? By that, I mean language constructs, libraries, or other features to handle computing on something like a cluster, akin to what MPI provides for C, but not necessarily using the same primitives or methodology. In the Rustonomicon, I see a discussion of threads and concurrency, but I don't see any discussion of parallelizing across multiple computers.
To the best of my knowledge, there isn't really anything built into the language for distributed computing (which is understandable, since that's arguably not really the language's major focus, or at least wasn't back in the day). I don't believe there's any particularly popular crate for distributed computing either. Actix is probably the only actor crate that has achieved any traction, and it supports HTTP, but I don't think it is targeted at HPC/supercomputer setups. You would also definitely want to check out Tokio, which seems to be pretty much the library for asynchronous programming in Rust and is specifically targeted at network IO operations.
At the present point in time, if you're looking to replicate MPI, my guess is that your best bet is to use FFI to a C-based MPI library. It appears that there have been a handful of attempts to create MPI bindings for Rust, but I'm not sure any of them are particularly complete.

Key reasons for golang to make concurrency easier [closed]

Is my understanding below roughly right?
Go can mostly detect deadlocks at compile time.
Go can use chan to minimize race conditions because only a single sender or receiver goroutine can access any specific chan at a time.
I wouldn't say that's accurate. On the first point, there are no compile-time guarantees about deadlocking; if you use a mutex poorly, you will deadlock, and no compiler can prevent that. You can test for race conditions easily, but that's different.
On the second point, the channel serializes your asynchronous operations, but I don't think it works the way you state. A bunch of goroutines can be writing to and reading from the same channel. It's just a queue to put data in; no coordination is guaranteed beyond that. You won't panic due to multiple goroutines reading from or writing to it at the same time, but if that's happening, Go isn't doing anything to make your program work well; you have to coordinate the goroutines yourself using channels.
No: the first is completely wrong, and the second is at best stated unclearly.
According to this tutorial, Go can catch some deadlocks. I haven't gone through the whole tutorial, though:
http://guzalexander.com/2013/12/06/golang-channels-tutorial.html
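
To make the compile-time vs. run-time distinction concrete, here is a minimal sketch: the Go compiler accepts this program without complaint, but the runtime aborts it once every goroutine is blocked:

```go
package main

func main() {
	ch := make(chan int)
	// Nothing ever sends on ch, so this receive blocks forever.
	// This compiles fine; at run time the program dies with
	// "fatal error: all goroutines are asleep - deadlock!".
	<-ch
}
```

Note that this detector only catches global deadlocks, where every goroutine is blocked. Two goroutines deadlocked against each other while others keep running go undetected, which is why there are no guarantees in the general case.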

Difference between Spark Streaming and Storm [closed]

I am doing some analytics on live Twitter streaming data. I have heard about Spark Streaming. I want to know which is best for analytics on live streaming data, as my data arrives very fast from the source.
I recommend this presentation on the subject:
http://fr.slideshare.net/ptgoetz/apache-storm-vs-spark-streaming
Apache Storm is a true streaming architecture: events are handled one by one, and if you want to group them you have to design a topology for that purpose. It is the most powerful option in terms of latency and design flexibility, but it is of course complex, and you have to design exactly what you want.
Apache Spark, on the other hand, is a micro-batching architecture: it works like Hadoop, but executes every x seconds, producing micro-batches of data over a defined time window. Because it resembles a batch solution, it is simpler, and it can be enough if you don't need latency under a few seconds.
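
To illustrate just the micro-batching idea (not Spark's actual internals), here is a toy Go sketch that buffers events and flushes them on a fixed time window; the window length and names are made up:

```go
package main

import (
	"fmt"
	"time"
)

// microBatch drains events into batches on a fixed time window,
// which is the essential shape of micro-batch stream processing.
func microBatch(events <-chan string, window time.Duration, out chan<- []string) {
	ticker := time.NewTicker(window)
	defer ticker.Stop()
	var batch []string
	for {
		select {
		case ev, ok := <-events:
			if !ok { // input exhausted: flush and stop
				out <- batch
				close(out)
				return
			}
			batch = append(batch, ev) // buffer the event
		case <-ticker.C:
			out <- batch // emit the window's batch
			batch = nil
		}
	}
}

func main() {
	events := make(chan string)
	out := make(chan []string)
	go microBatch(events, 2*time.Second, out)
	go func() {
		for _, t := range []string{"tweet-1", "tweet-2", "tweet-3"} {
			events <- t
			time.Sleep(800 * time.Millisecond)
		}
		close(events)
	}()
	for batch := range out {
		fmt.Println("batch:", batch)
	}
}
```

A true streaming system like Storm would instead hand each event to the processing code the moment it arrives, with no window in between.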
Apart from the really nice presentation linked by @zenbeni, I would like to add a few specific points based on first-hand experience with both Storm and Spark Streaming, especially about your use case (Twitter data).
Twitter itself uses Storm for many parts of their realtime stream processing pipeline. So if the type of processing you want to do is similar, Storm is a good choice.
Storm's multi-language support is great, but it is hard to pass errors around. For example, if you are calling Python code from a Java bolt and an exception happens in your Python bolt, it's not easy to propagate that exception back to the Java code.
If your analysis is based on a single tweet only, Storm will likely serve you better. However, if you need aggregate or iterative analytics, you will have to micro-batch in Storm as well, which essentially means storing state in a bunch of your bolts.
Finally, one often needs to do both stream and batch processing. Spark shines when you need to mix stream processing with batch, interactive, and iterative processing. In fact, it's not clear to me how you would do iterative processing in Storm at all.

Blocking IO vs non-blocking IO; looking for good articles [closed]

Once upon a time I bumped into the Introduction to Indy article, and I haven't been able to stop thinking about blocking vs. non-blocking IO ever since.
I'm looking for some good articles describing the pros and cons of blocking IO and non-blocking IO, and how to design your application in each case to get natural, easy-to-understand, and easy-to-maintain code.
I would like to understand the big picture...
Well blocking IO means that a given thread cannot do anything more until the IO is fully received (in the case of sockets this wait could be a long time).
Non-blocking IO means an IO request is queued straight away and the function returns. The actual IO is then processed at some later point by the kernel.
For blocking IO, you either need to accept that you are going to wait for every IO request, or you need to fire off a thread per request (which will get very complicated very quickly).
For non-blocking IO you can send off multiple requests, but you need to bear in mind that the data will not be available until some "later" point. Checking that the data has actually arrived is probably the most complicated part.
In 99% of applications you will not need to care that your IO blocks. Sometimes however you need the extra performance of allowing yourself to initiate an IO request and then do something else before coming back and, hopefully, finding that the IO request has completed.
Anyway, just my tuppence.
Edit: to answer how to design an application that handles blocking IO while keeping good performance: coroutines could be a good fit.
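
As a sketch of that design: Go's goroutines are effectively such coroutines. The runtime multiplexes them over non-blocking sockets internally, so each connection can be written in the easy, linear, blocking style while the process as a whole behaves like a non-blocking server. A minimal echo server (the port is arbitrary):

```go
package main

import (
	"bufio"
	"log"
	"net"
)

// handle does blocking reads, but only this goroutine waits;
// the rest of the program keeps running.
func handle(conn net.Conn) {
	defer conn.Close()
	scanner := bufio.NewScanner(conn)
	for scanner.Scan() { // blocks until a full line arrives
		conn.Write(append(scanner.Bytes(), '\n')) // echo it back
	}
}

func main() {
	ln, err := net.Listen("tcp", ":9000") // port chosen arbitrarily
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept() // blocks until a client connects
		if err != nil {
			log.Fatal(err)
		}
		go handle(conn) // one cheap goroutine per connection
	}
}
```

This gives you the "linear programming, easier to code" side of the trade-off below while the runtime quietly provides the non-blocking performance.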
The positives and negatives are pretty clear cut:
Blocking - Linear programming, easier to code, less control.
Non-blocking - Parallel programming, more difficult to code, more control.
