How is distributed memory parallelism handled in Rust? [closed]

How is distributed memory parallelism handled in Rust? By that, I mean language constructs, libraries, or other features for computing on something like a cluster, akin to what MPI provides for C, but not necessarily using the same primitives or methodology. In the Rustonomicon I see a discussion of threads and concurrency, but no discussion of parallelizing across multiple computers.

To the best of my knowledge, there isn't really anything built into the language for distributed computing (which is understandable, since that's arguably not the language's major focus, or at least wasn't early on). I don't believe there's any particularly popular crate for distributed computing either. Actix is probably the only actor crate that has achieved real traction, and it supports HTTP, but I don't think it is targeted at HPC/supercomputer setups. You would also definitely want to check out Tokio, which seems to be pretty much the library for asynchronous programming in Rust and is specifically targeted at network I/O operations.
At present, if you're looking to replicate MPI, my guess is that your best bet is to use FFI to a C-based MPI library. There have been a handful of attempts to create MPI bindings for Rust, but I'm not sure any of them are particularly complete.
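For instance, the community-maintained mpi crate (rsmpi) wraps a system MPI library via FFI. Assuming a working MPI installation, a minimal sketch looks roughly like this:

```rust
// Minimal sketch using the (unofficial) `mpi` crate (rsmpi), which binds
// to a system MPI implementation via FFI. Run with e.g. `mpirun -n 4 ./app`.
use mpi::traits::*;

fn main() {
    // Initialize the MPI runtime; fails if no MPI library is available.
    let universe = mpi::initialize().unwrap();
    let world = universe.world();
    println!(
        "Hello from rank {} of {} processes",
        world.rank(),
        world.size()
    );
}
```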

Related

Are there any computer viruses that affect GPUs? [closed]

Recent developments in GPUs (the past few generations) allow them to be programmed; languages like CUDA, OpenCL, and OpenACC are specific to this hardware. In addition, certain games allow programming shaders, which take part in rendering images in the graphics pipeline. Just as code intended for a CPU can cause unintended execution resulting in a vulnerability, I wonder whether a game or other code intended for a GPU can result in a vulnerability.
The benefit a hacker would get from targeting the GPU is "free" computing power without having to deal with the energy cost. The only practical scenario here is crypto-miner viruses; see this article for example. I don't know the details of how they operate, but the idea is to use the GPU to mine crypto-currencies in the background, since GPUs are much more efficient than CPUs at this. These viruses can cause substantial energy consumption if they go unnoticed.
Regarding an application running on the GPU causing or exploiting a vulnerability, the use cases here are rather limited, since security-relevant data is usually not processed on GPUs.
At most, you could deliberately crash the graphics driver and thereby sabotage other programs.
There are already plenty of security mechanisms prohibiting reading other processes' VRAM and the like, but there is always some way around them.

How is the implementation of Rust's channels functionally different from Go's? [closed]

I specifically intend to use the channel functionality of one of these languages in developing a scalable web service. I am unclear at present about which one would be easier to implement, and also which would better fit the intended design, help maintain uptime, require minimal overhead, etc.
I understand that the Go implementation follows the CSP methodology, though I'm unclear exactly what the Rust implementation is based on and whether it is even analogous to the Go version.
Is there any similarity or are they too different to compare to each other?
Are there use-cases where both implementations would operate mostly the same?
There is no such thing as the Rust channel.
Whereas in Go channels are a language concept provided by the Go run-time, in Rust channels can be implemented in a library, and therefore there are as many channels implementations as there are libraries, each with different goals and trade-offs:
There is one MPSC (Multi-Producer, Single-Consumer) channel in the standard library.
There are MPMC (Multi-Producer, Multi-Consumer) channels in the crossbeam ecosystem and in the async-std crate¹.
All of those implementations offer different interfaces, capabilities, and performance trade-offs.
¹ Not an official crate, simply a port of std functionality to async.
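As a minimal illustration of the std channel (the crossbeam and async-std channels have a similar send/receive shape but additionally allow cloning the receiver):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Multi-producer: the Sender can be cloned, the Receiver cannot.
    let (tx, rx) = mpsc::channel();

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let tx = tx.clone();
            thread::spawn(move || tx.send(i).unwrap())
        })
        .collect();

    // Drop the original sender so the receive loop terminates
    // once all producer threads have finished.
    drop(tx);
    for h in handles {
        h.join().unwrap();
    }

    // The Receiver iterates until every Sender has been dropped.
    for msg in rx {
        println!("received {}", msg);
    }
}
```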

Parallel computing: from theory to practice [closed]

I studied how to optimize algorithms for multiprocessor systems. Now I would like to understand, in broad terms, how these algorithms can be turned into code.
I know that there are MPI-based libraries that help make software portable across different types of systems, but it is exactly the word "portable" that confuses me: how can a program automatically adapt to an arbitrary number of processors at runtime, given that this is an option of mpirun? How can the software decide on the proper topology (mesh, hypercube, tree, ring, etc.)? Can the programmer specify the preferred topology through MPI?
You start the application with a fixed number of processes. The program therefore cannot automatically adapt to an arbitrary number of processors at runtime; instead, it queries the process count at startup (MPI_Comm_size) and divides its work accordingly, which is what makes the same binary portable across differently sized runs (see the sketch below).
You can tune your software to the topology of your cluster. This is really advanced and certainly not portable; it only makes sense if you have a fixed cluster and are striving for the last bit of performance. (MPI does let you declare a preferred virtual topology, e.g. via MPI_Cart_create, but how well it maps onto the physical network is up to the implementation.)
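For instance, assuming the unofficial Rust mpi crate (the same pattern holds for MPI_Comm_size/MPI_Comm_rank in C), a portable work split looks roughly like this:

```rust
use mpi::traits::*;

fn main() {
    let universe = mpi::initialize().unwrap();
    let world = universe.world();
    let size = world.size() as usize; // set by `mpirun -n <N>`, fixed for the run
    let rank = world.rank() as usize;

    // Portable work division: each rank takes a contiguous slice,
    // whatever the process count happens to be at launch.
    let total_items = 1_000_000usize;
    let chunk = (total_items + size - 1) / size;
    let start = rank * chunk;
    let end = (start + chunk).min(total_items);
    println!("rank {rank}: processing items {start}..{end}");
}
```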
Best regards, Georg

Can GPUs be used for general programming? [closed]

It seems that for special tasks a GPU can be 10x or more powerful than a CPU.
Can we make this power more accessible and utilise it for common programming?
Like having a cheap server easily handle millions of connections? Or on-the-fly database analytics? Map/reduce/Hadoop/Storm-like stuff with 10x the throughput? Etc.?
Is there any movement in that direction? Any new programming languages or programming paradigms that will utilise it?
CUDA and OpenCL are good implementations of GPU programming.
GPU programming uses shaders to process input buffers and almost instantly generate result buffers. Shaders are small algorithmic units, mostly working with float values, each carrying its own data context (input buffers and constants) used to produce results. Each shader is isolated from the others during a task, but you can chain them if required.
GPU programming won't be good at handling HTTP requests, since that is mostly a complex sequential process, but it is great for processing, for example, a photo or a neural network.
As soon as you can chunk your data into tiny parallel units, then yes, it can help. The CPU will remain better for complex sequential tasks.
Colonel Thirty Two links to a long and interesting answer about this if you want more information: https://superuser.com/questions/308771/why-are-we-still-using-cpus-instead-of-gpus
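To make "chunk your data into tiny parallel units" concrete, here is the same data-parallel shape sketched on the CPU with Rust's rayon crate (the workload is made up for illustration); on a GPU, each closure invocation would correspond to one shader or kernel thread:

```rust
use rayon::prelude::*;

fn main() {
    // A brightness adjustment over independent pixels: each element is
    // computed in isolation, which is exactly the shape GPUs excel at.
    let pixels: Vec<f32> = vec![0.25; 1 << 20];
    let brightened: Vec<f32> = pixels
        .par_iter()
        .map(|p| (p * 1.2).min(1.0))
        .collect();
    println!("processed {} pixels", brightened.len());
}
```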

Primecoin: Node.JS vs Haskell applicability [closed]

I was reading about Primecoin when it linked me to Cunningham chains. Now that I know what a Cunningham chain is, and since I couldn't find an implementation in a good language, I need to implement one myself. Should I use Node.JS for it? I was thinking of using Haskell, but then I'd have to think too much. I think Node.JS will work better since it has better numerical support, and I can make a Node.JS website that uses socket.io to offload my prime computation to the background of clients using my website (essentially pay2view).
For example, one reason I thought Haskell is suited for this is that you can write a lazy function that streams out the values of each chain. It also runs on bare metal with no browser, but I'm not sure that's much of an advantage.
Computing Cunningham chains effectively requires bignums.
Node.js uses V8, which can only represent 31-bit signed integers efficiently. That isn't nearly big enough for Cunningham chains.
Haskell has architecture-native integers and supports efficient bignum arithmetic through GMP.
V8 does not yet have efficient bignum support.
You are likely to get better performance from a Haskell implementation, particularly if you avoid using Strings entirely.
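To make the 31-bit limitation concrete: in a chain of the first kind, each term is 2p + 1, so the bit length grows by one per link, and even a small starting prime overflows fixed-width integers within a few dozen steps. A quick sketch of that growth (in Rust with the num-bigint crate, purely as a language-neutral illustration; a Haskell implementation would simply use the GMP-backed Integer):

```rust
// Illustrative only: shows the growth of chain terms, not a prime search.
// Primality of terms beyond the known chain is not checked here.
use num_bigint::BigUint;

fn main() {
    // Chain of the first kind: each next term is 2p + 1, so the bit
    // length grows by one per link. 89 starts a known length-6 chain
    // (89, 179, 359, 719, 1439, 2879).
    let mut p = BigUint::from(89u32);
    for i in 0..40u32 {
        println!("link {i}: {p} ({} bits)", p.bits());
        p = &p * 2u32 + 1u32;
    }
}
```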

Resources