How is the implementation of Rust's channels functionally different from Go's implementation of channels? [closed]

I intend to use the channel functionality of one of these languages to develop a scalable web service. At present I am unclear which one would be easier to work with, but also which one would better fit the intended design, help maintain uptime, require minimal overhead, and so on.
I understand that the Go implementation follows the CSP model, though I'm unclear exactly what the Rust implementation is based on and whether it is even analogous to the Go version.
Is there any similarity or are they too different to compare to each other?
Are there use-cases where both implementations would operate mostly the same?

There is no such thing as the Rust channel.
Whereas in Go channels are a language concept provided by the Go runtime, in Rust channels are implemented in libraries, so there are as many channel implementations as there are libraries, each with different goals and trade-offs:
There is one MPSC (Multi-Producer, Single-Consumer) channel in the standard library (std::sync::mpsc).
There are MPMC (Multi-Producer, Multi-Consumer) channels in the crossbeam ecosystem and in the async-std crate¹.
All of those implementations offer different interfaces, capabilities, and performance trade-offs.
¹ Not an official crate, simply a port of std functionality to async.
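
To make the contrast concrete, here is a minimal, illustrative Go sketch (all names are mine) showing that a single Go channel is naturally both multi-producer and multi-consumer, the combination that Rust's standard-library channel deliberately restricts to a single consumer:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	jobs := make(chan int) // one channel, shared by everyone
	var producers, consumers sync.WaitGroup

	// Multiple producers send on the same channel.
	for p := 0; p < 3; p++ {
		producers.Add(1)
		go func(id int) {
			defer producers.Done()
			for i := 0; i < 5; i++ {
				jobs <- id*100 + i
			}
		}(p)
	}

	// Multiple consumers receive from the same channel.
	for c := 0; c < 2; c++ {
		consumers.Add(1)
		go func(id int) {
			defer consumers.Done()
			for j := range jobs {
				fmt.Printf("consumer %d got %d\n", id, j)
			}
		}(c)
	}

	producers.Wait() // all producers are done...
	close(jobs)      // ...so it is safe to close the channel
	consumers.Wait()
}
```

Reproducing this topology in Rust with std::sync::mpsc would mean cloning the Sender for each producer but funneling everything through one Receiver; with a crossbeam channel, the Receiver can be cloned as well, matching the Go shape more closely.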

Related

How is distributed memory parallelism handled in Rust? [closed]

How is distributed memory parallelism handled in Rust? By that, I mean language constructs, libraries, or other features for computing on something like a cluster, akin to what MPI provides for C, but not necessarily using the same primitives or methodology. In the Rustonomicon, I see a discussion of threads and concurrency, but I don't see any discussion of parallelizing across multiple computers.
To the best of my knowledge, there isn't really anything built into the language for distributed computing (which is understandable, since that's arguably not the language's major focus, or at least wasn't back in the day). I don't believe there's any particularly popular crate for distributed computing either. Actix is probably the only actor crate that has achieved any traction, and it supports HTTP, but I don't think it is targeted at HPC/supercomputer setups. You would also definitely want to check out Tokio, which seems to be pretty much the library for asynchronous programming in Rust, and is specifically targeted towards network IO operations.
At the present point in time, if you're looking to replicate MPI, my guess is that your best bet is to use FFI to a C-based MPI library. There have been a handful of attempts to create MPI bindings for Rust, but I'm not sure that any of them are particularly complete.

Parallel computing: from theory to practice [closed]

I have studied how to optimize algorithms for multiprocessor systems. Now I would like to understand, in broad terms, how these algorithms can be turned into code.
I know that there are MPI-based libraries that help make software portable to different types of systems, but it is precisely the word "portable" that confuses me: how can a program automatically adapt to an arbitrary number of processors at runtime, given that this is an option of mpirun? How does the software decide on the proper topology (mesh, hypercube, tree, ring, etc.)? Can the programmer specify the preferred topology through MPI?
You start the application with a fixed number of processes, so it cannot automatically adapt to an arbitrary number of processors at runtime; mpirun fixes that number at launch.
You can tune your software to the topology of your cluster. This is really advanced and certainly not portable: it only makes sense if you have a fixed cluster and are striving for the last bit of performance. MPI does, however, let you declare a virtual topology (for example with MPI_Cart_create for a Cartesian mesh or torus), which the implementation may use as a hint when mapping processes onto the hardware.
Best regards, Georg

Go: design justification for the lack of a contains method [closed]

While browsing for a contains method, I came across the following Q&A:
contains-method-for-a-slice
It is said time and again in this Q&A that the method is really trivial to implement. What I don't understand is: if it is so easy to implement, and seeing how DRY is a popular software principle and most modern languages implement said method, what sort of design reasoning could be behind the exclusion of such a simple method?
The triviality of the implementation depends on the scope of the implementation. It is trivial to implement when you know how to compare each value. Application code usually knows how to compare the types used in that application. But it is not trivial to implement in the general case for arbitrary types, and that is the situation for the language and standard library.
Figuring out whether a slice contains a certain value is an O(n) operation, where n is the length of the slice. This would not change if the language provided a function to do it. If your code relies on frequently checking whether a slice contains a certain value, you should re-evaluate your choice of data structure; a map is usually better in these kinds of cases. Why should the standard library include functions that encourage you to use the wrong data structure for the task at hand?
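
To make both halves of that answer concrete, here is a minimal Go sketch (function and variable names are illustrative): the trivial O(n) helper that application code can write for its own element type, next to the map-as-set that is usually the better structure for repeated membership tests:

```go
package main

import "fmt"

// contains is the "trivial" helper: application code knows the
// element type, so a linear scan is a three-line loop.
func contains(haystack []string, needle string) bool {
	for _, s := range haystack {
		if s == needle {
			return true
		}
	}
	return false
}

func main() {
	names := []string{"ann", "bob", "eve"}
	fmt.Println(contains(names, "bob")) // true, but O(n) per lookup

	// For frequent membership checks, a map used as a set gives
	// O(1) average lookups instead.
	set := make(map[string]struct{}, len(names))
	for _, s := range names {
		set[s] = struct{}{}
	}
	_, ok := set["eve"]
	fmt.Println(ok) // true
}
```

(For what it's worth, once generics solved the "arbitrary types" problem this answer describes, the standard library did gain a generic slices.Contains in Go 1.21.)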

Primecoin: Node.js vs Haskell applicability [closed]

I was reading about primecoin when it linked me to Cunningham chains. Now that I know what a Cunningham chain is, and since I couldn't find an implementation in a language I like, I want to implement it myself. Should I use Node.js for it? I was thinking of using Haskell, but then I'd have to think too much. I think Node.js will work better since it has better numerical support, and I can make a Node.js website that uses socket.io to offload my prime computation to the background of clients using my website (essentially pay2view).
For example, one reason I thought Haskell is suited for this is that you can make a lazy function that streams out the values of each chain. It also runs on bare metal with no browser, but I'm not sure that's much of an advantage.
Computing Cunningham chains effectively requires Bignums.
Node.js uses V8, which can efficiently represent 31-bit signed integers; that isn't nearly big enough for Cunningham chains.
Haskell has architecture-native integers and supports efficient Bignum calculation through GMP.
V8 does not yet have efficient Bignum support.
You are likely to get better performance from a Haskell implementation, particularly if you avoid using Strings entirely.
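
Whichever language you pick, the per-chain computation itself is small; what matters is arbitrary-precision arithmetic. Here is a minimal sketch using Go's math/big, purely to illustrate the shape of the bignum workload (a Cunningham chain of the first kind, tested with the probabilistic ProbablyPrime):

```go
package main

import (
	"fmt"
	"math/big"
)

// chainLength returns the length of the Cunningham chain of the
// first kind starting at p: p, 2p+1, 2(2p+1)+1, ... while prime.
func chainLength(p *big.Int) int {
	one := big.NewInt(1)
	n := new(big.Int).Set(p)
	length := 0
	// ProbablyPrime(20) runs a Miller-Rabin test; the error
	// probability is at most 4^-20, fine for exploration.
	for n.ProbablyPrime(20) {
		length++
		n.Mul(n, big.NewInt(2)) // n = 2n
		n.Add(n, one)           // n = 2n + 1
	}
	return length
}

func main() {
	// 2, 5, 11, 23, 47 are all prime; 95 = 5*19 is not,
	// so the chain starting at 2 has length 5.
	fmt.Println(chainLength(big.NewInt(2))) // 5
}
```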

What is the name of this software design behavior? [closed]

When software has a set of features where some functionality is provided by multiple implementations, and the software automatically decides which one to use. For instance:
An image editor has image effects, and some of its effects, like Blur and Median, are provided with both CPU and GPU implementations; these are not directly exposed to the user as options, but rather the software decides which one to use based on the user's hardware.
Or, in another case, the software chooses which sorting algorithm to use based on what it knows about the items to sort.
I guess this only happens with performance-related features.
But what's the name of this feature/idea when software has this workflow?
Is it called transparent execution? Or context sensitive? I seem to recall a term used to describe this behavior.
EDIT: By the way, I am also interested in hearing the marketing term for this, as in "ProgramX supports transparent execution".
This is the strategy pattern.
You have multiple interchangeable implementations of the same operation, where the difference is the algorithm, and one of them is selected at runtime. This is a classic case of the strategy pattern.
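
A minimal sketch of that idea in Go (types and names are hypothetical), where the program selects a Blur strategy from detected hardware and the caller only ever sees the interface:

```go
package main

import "fmt"

// Blurrer is the strategy interface: one operation, many implementations.
type Blurrer interface {
	Blur(image []byte) []byte
}

type cpuBlur struct{}

func (cpuBlur) Blur(img []byte) []byte {
	fmt.Println("blurring on the CPU")
	return img // real filtering elided
}

type gpuBlur struct{}

func (gpuBlur) Blur(img []byte) []byte {
	fmt.Println("blurring on the GPU")
	return img // real filtering elided
}

// pickBlurrer selects a strategy from the environment; the caller
// never sees the choice, only the Blurrer interface.
func pickBlurrer(hasGPU bool) Blurrer {
	if hasGPU {
		return gpuBlur{}
	}
	return cpuBlur{}
}

func main() {
	b := pickBlurrer(false) // pretend hardware detection said "no GPU"
	b.Blur([]byte{1, 2, 3})
}
```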
Sounds like the facade design pattern, from the GoF book, page 185:
"Provide a unified interface to a set of interfaces in a subsystem. Facade defines a higher-level interface that makes the subsystem easier to use."
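
For contrast with the strategy sketch above, here is a minimal, hypothetical facade in Go: the subsystem pieces stay separate, but callers get one simplified entry point:

```go
package main

import "fmt"

// Subsystem pieces a caller would otherwise wire together by hand.
type decoder struct{}

func (decoder) decode(path string) []byte { return []byte(path) } // stub

type filter struct{}

func (filter) sharpen(img []byte) []byte { return img } // stub

type encoder struct{}

func (encoder) encode(img []byte) string { return fmt.Sprintf("%d bytes", len(img)) }

// ImageFacade is the unified interface over the subsystem:
// one call instead of three.
type ImageFacade struct {
	d decoder
	f filter
	e encoder
}

func (fa ImageFacade) SharpenFile(path string) string {
	return fa.e.encode(fa.f.sharpen(fa.d.decode(path)))
}

func main() {
	fmt.Println(ImageFacade{}.SharpenFile("cat.png"))
}
```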
