Primecoin: Node.js vs Haskell applicability [closed]

I was reading about Primecoin, which led me to Cunningham chains. Now that I know what a Cunningham chain is, and since I couldn't find an implementation in a good language, I want to implement it myself. Should I use Node.js for it? I was thinking of using Haskell, but then I'd have to think too much. I think Node.js will work better since it has better numerical support, and I can make a Node.js website that uses socket.io to offload my prime computation to the background of clients using my website (essentially pay2view).
For example, one reason I thought Haskell was suited for this is that you can write a lazy function that streams out the values of each chain. It also runs on bare metal with no browser, but I'm not sure that's much of an advantage.

Computing Cunningham chains effectively requires bignums.
Node.js uses V8, which can efficiently represent only 31-bit signed integers. That isn't nearly big enough for Cunningham chains.
Haskell has architecture-native integers (Int) and supports efficient bignum arithmetic through GMP (Integer).
V8 does not yet have efficient bignum support.
You are likely to get better performance from a Haskell implementation, particularly if you avoid using Strings entirely.
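To illustrate the point about laziness and GMP-backed Integer, here is a minimal Haskell sketch that lazily streams Cunningham chains of the first kind (p, 2p+1, 4p+3, ...). The function names are my own, and the naive trial-division primality test is only there to keep the example self-contained; a real implementation would swap in something like Miller-Rabin:

    -- A minimal sketch, not a tuned implementation. Integer is
    -- GMP-backed in GHC, so the chain values can grow without bound.

    -- Naive trial division, good enough for a demo. The Double-based
    -- square root is imprecise for huge inputs; fine for a sketch.
    isPrime :: Integer -> Bool
    isPrime n
      | n < 2     = False
      | n < 4     = True
      | even n    = False
      | otherwise = all (\d -> n `mod` d /= 0) [3, 5 .. isqrt n]
      where
        isqrt m = floor (sqrt (fromIntegral m :: Double))

    -- The chain starting at p, extended while each 2p+1 stays prime.
    chainFrom :: Integer -> [Integer]
    chainFrom = takeWhile isPrime . iterate (\p -> 2 * p + 1)

    -- Lazily stream every chain of length >= k, as the question suggests.
    chainsOfLength :: Int -> [[Integer]]
    chainsOfLength k =
      [ c | p <- filter isPrime [2 ..], let c = chainFrom p, length c >= k ]

    main :: IO ()
    main = mapM_ print (take 3 (chainsOfLength 4))

Note that this also emits sub-chains (the chain starting at 5 is a suffix of the one starting at 2); filtering those out, and replacing the primality test, is where the real work lies.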

Related

How is distributed memory parallelism handled in Rust? [closed]

How is distributed-memory parallelism handled in Rust? By that, I mean language constructs, libraries, or other features for computing on something like a cluster, akin to what MPI provides for C, but not necessarily using the same primitives or methodology. In the Rustonomicon, I see a discussion of threads and concurrency, but I don't see any discussion of parallelizing across multiple computers.
To the best of my knowledge, there isn't really anything built into the language for distributed computing (which is understandable, since that's arguably not the language's major focus, or at least wasn't back in the day). I don't believe there's any particularly popular crate for distributed computing either. Actix is probably the only actor crate that has achieved any traction, and it supports HTTP, but I don't think it is targeted at HPC/supercomputer setups. You would also definitely want to check out Tokio, which seems to be pretty much the library for asynchronous programming in Rust, and is specifically targeted at network I/O operations.
At present, if you're looking to replicate MPI, my guess would be that your best bet is to use FFI to a C-based MPI library. There have been a handful of attempts to create MPI bindings for Rust, but I'm not sure any of them are particularly complete.

How do Hardware Description Languages differ from General Purpose languages at the low level? [closed]

Question:
How do hardware description languages (HDLs) differ from general-purpose languages such as Python, Java, etc.? In particular, what is the primary trade-off that makes general-purpose languages sub-optimal for FPGAs compared to VHDL and Verilog?
Context:
I'm a programmer but definitely work at a high level of abstraction: JavaScript, tinkering with APIs, etc. My low-level knowledge is very limited, but I am playing around with an FPGA and have some novice questions that I cannot solve with Google or wikis.
Considering I am a novice, please do not vote harshly against this post. Just state your suggestions for the question and I will happily revise! :)
Example:
For example, why isn't everyone just coding FPGAs and ASICs with Python or C# instead of Verilog or VHDL? I understand that there are some Python libraries, but I have read that they are limited in their viable use cases. I would greatly appreciate someone shedding some light on why HDLs are necessary and beneficial, and why general-purpose languages are not optimal in comparison for these scenarios.
Thanks in advance!
This is a broad, opinion-based question, but I think there is a short answer. In some sense, they are all programming languages, i.e., text descriptions that get compiled into a set of machine instructions to be executed on a host machine (software).
But an HDL is also a text description that gets compiled into a set of instructions for building another machine (hardware).
Technically, any programming language could be used to describe hardware (SystemC in C++, for example), but Verilog and VHDL were specifically developed to model and simulate hardware most efficiently.
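As an aside on that last point, there are also projects like Clash that compile a restricted, structural subset of Haskell to VHDL/Verilog. A minimal sketch (assuming the Clash prelude; this is the classic registered-counter example) shows how such a description names wires and state elements rather than a sequence of steps:

    {-# LANGUAGE DataKinds #-}
    import Clash.Prelude

    -- An 8-bit counter: a register initialised to 0 whose input is the
    -- current output plus one. This describes structure (a register and
    -- an adder), not instructions executed over time.
    counter :: HiddenClockResetEnable dom => Signal dom (Unsigned 8)
    counter = register 0 (counter + 1)

    -- The entry point Clash synthesises to VHDL/Verilog.
    topEntity
      :: Clock System -> Reset System -> Enable System
      -> Signal System (Unsigned 8)
    topEntity = exposeClockResetEnable counter

The trade-off the question asks about is visible here: only a subset of the language that maps onto registers, wires, and combinational logic is synthesizable, which is exactly the restriction dedicated HDLs are built around.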

Functional programming style vs performance in Ruby [closed]

I love functional programming and I love Ruby as well. If I can code an algorithm in a functional style rather than an imperative style, I do it. I try not to update or reuse variables, I avoid "bang!" methods, and I use "map", "reduce", and similar functions instead of "each" or dangerous loops, etc. Basically, I try to follow the rules of this article.
The problem is that the functional solution is usually much slower than the imperative one. In this article there are clear and scary examples of that, with code running up to 15-20 times slower in some cases. After reading it and doing some benchmarks, I am afraid to keep using the functional style, at least in Ruby.
On the other hand, I feel more comfortable writing code in a functional style because it is smart and clean, it tends to produce fewer bugs, and I think it is more "correct", especially nowadays when we can use concurrency and parallelism for better performance.
So I am very confused about which style to use in Ruby. Any wise recommendation would be appreciated.

Common knowledge on Haskell performance [closed]

Hello Haskellers out there!
I have the feeling that questions on performance arise more and more often, and that knowledge of which functions/algorithms/libraries are fast and stable is sparse.
There are of course libraries like Criterion, which let you take measurements yourself, and there is the profiler, which can be invoked with
> ghc -O2 --make program.hs -prof -auto-all
> ./program +RTS -s
as excellently explained by Don Stewart in Tools for analyzing performance of a Haskell program
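For completeness, a minimal Criterion benchmark looks something like this (a sketch; the compared functions are just placeholders):

    import Criterion.Main
    import Data.Char (toUpper)
    import qualified Data.Text as T

    -- A minimal Criterion benchmark comparing String and Text for the
    -- same job. nf forces the full result, so laziness doesn't make
    -- the String version look artificially fast.
    main :: IO ()
    main = defaultMain
      [ bgroup "toUpper"
          [ bench "String" $ nf (map toUpper) (replicate 10000 'a')
          , bench "Text"   $ nf T.toUpper    (T.replicate 10000 "a")
          ]
      ]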
I know that:
the use of read and show is usually a bottleneck (for the read function in the case of numbers, the Numeric package brings a performance speedup)
there are the Sequence, Array, Vector, and Map libraries, which are often a better fit for a problem than lists or nested lists
Text and ByteString are a better option than String
I recently saw that even the use of the standard number generator slows down a program significantly, and that mwc-random is a lot faster
also, the answers to Python faster than compiled Haskell? revealed that the standard sorting algorithm is definitely improvable
the usage of Int rather than Integer, BangPatterns, and strict folds often yields an increase in performance (see the first sketch after this list)
there are conduit and pipes to "stricten" IO (which I have to admit I haven't used yet; the second sketch below shows the idea)
type signatures are generally an improvement
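On the Int/BangPatterns/strict-folds point, here is a small sketch of the classic failure mode: a lazy foldl builds a long chain of thunks before forcing any of them, while a strict accumulator (or Data.List.foldl') forces each step and runs in constant space.

    {-# LANGUAGE BangPatterns #-}

    -- foldl (+) 0 allocates ~n thunks and can blow the stack.
    sumLazy :: [Int] -> Int
    sumLazy = foldl (+) 0

    -- A strict accumulator forces each partial sum immediately.
    sumStrict :: [Int] -> Int
    sumStrict = go 0
      where
        go !acc []       = acc
        go !acc (x : xs) = go (acc + x) xs

    main :: IO ()
    main = print (sumStrict [1 .. 10000000])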
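And for conduit, a sketch of constant-memory file streaming (assuming the Conduit module from recent versions of the conduit package; the combinator names have shifted between versions):

    import Conduit
    import qualified Data.Text as T

    -- Uppercases a file without ever holding the whole contents in
    -- memory: each chunk flows through the pipeline and is freed.
    main :: IO ()
    main = runConduitRes $
         sourceFile "input.txt"
      .| decodeUtf8C
      .| mapC T.toUpper
      .| encodeUtf8C
      .| sinkFile "output.txt"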
What are other common pitfalls and bottlenecks one tends to hit?
How does one solve those?
The topics that come to my mind are:
functions
data structures
algorithms
LANGUAGE extensions (for GHC)?
compiler options?

Any benchmarks for parser generators? [closed]

Has anyone seen a good comparison of parser generators' performance?
I'm particularly interested in:
1) recursive ascent parser generators for LALR(1) grammars;
2) parser generators which produce C/C++ based parsers.
Are you interested in how fast the parser generators themselves run? That depends on the type of parsing-engine technology it supports and on the care of the guy who implemented the parser generator. See this answer for some numbers on LALR/GLR parser generators for real languages: https://stackoverflow.com/a/14151966/120163 IMHO, this isn't very important; parser generators are mostly a lot faster than the guy using them.
If the question is how fast the generated parsers are, you get different answers. LALR parsers can be implemented with a few machine instructions per GOTO transition (using directly indexed GOTO tables) and a few per reduction. That's pretty hard to beat.
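To make "a few machine instructions per transition" concrete, the inner loop of a table-driven LALR parser is essentially an array lookup. A hypothetical Haskell sketch (the table itself would be emitted by the generator from the grammar):

    import Data.Array

    -- The actions an LALR ACTION table can prescribe for a
    -- (state, lookahead-token) pair.
    data Action = Shift Int | Reduce Int | Accept | Error
      deriving Show

    -- One parser step: a constant-time, directly indexed table lookup,
    -- which is why generated LALR parsers are hard to beat on speed.
    step :: Array (Int, Int) Action -> Int -> Int -> Action
    step actionTable state token = actionTable ! (state, token)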
