When to use declarative programming over imperative programming

As far as I know, the main difference between declarative and imperative programming is that in declarative programming you specify what the problem is, while in imperative programming you state exactly how to solve it.
However, it is not entirely clear to me when to use one over the other. Suppose you are given a certain problem to solve: based on which properties do you decide to tackle it declaratively (e.g. using Prolog) or imperatively (e.g. using Java)? For what kinds of problems would you prefer one over the other?

Imperative programming is closer to what the actual machine performs. It is a quite low-level form of programming, and the more complex your application grows, the harder it becomes to grasp all the details at such a low level. On the plus side, being close to the machine, you can write quite performant code if you are good at that.
Declarative programming is more abstract and higher level: With comparatively little code, you can express quite sophisticated relationships in a way that can be more easily seen to be correct.
To see an important difference, compare, for example, pure Prolog with Java: Suppose I take away one of the rules in a Prolog program. I know a priori that this can make the program at most more specific: Some things that held previously may now no longer hold.
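As a minimal sketch, consider this hypothetical toy knowledge base (the facts and predicate names are invented for illustration):

```prolog
% Hypothetical toy knowledge base: flight connections between cities.
connection(vienna, london).
connection(london, new_york).

% X can reach Y directly, or via some intermediate stop.
reachable(X, Y) :- connection(X, Y).
reachable(X, Z) :- connection(X, Y), reachable(Y, Z).
```

If you delete the second reachable/2 clause, the query ?- reachable(vienna, new_york). stops succeeding, but no query that previously failed can suddenly start succeeding: the program has only become more specific. Note that this monotonicity holds for pure Prolog, i.e. programs without extra-logical constructs such as cut or negation as failure.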
On the other hand, suppose I take away a statement in a Java program: Nothing much can be said about the effect in general. The program may even no longer compile.
So, changes in an imperative program can have effects that are hard to foresee, and they are extremely hard to reason about, because there are few guarantees and invariants, and many things are implicit in the program's global state. This makes imperative programming very error-prone.

Related

How is distributed memory parallelism handled in Rust?

How is distributed memory parallelism handled in Rust? By that, I mean language constructs, libraries, or other features for computing on something like a cluster, akin to what MPI provides for C, but not necessarily using the same primitives or methodology. In the Rustonomicon, I see a discussion of threads and concurrency, but I don't see any discussion of parallelizing across multiple computers.
To the best of my knowledge, there isn't really anything built into the language for distributed computing (which is understandable, since that's arguably not really the language's major focus, or at least wasn't back in the day). I don't believe there is any particularly popular crate for distributed computing either. Actix is probably the only actor crate that has achieved any traction, and it supports HTTP, but I don't think it is targeted at HPC/supercomputer setups. You would also definitely want to check out Tokio, which seems to be pretty much the library for asynchronous programming in Rust and is specifically targeted at network IO operations.
At the present point in time, if you're looking to replicate MPI, my guess is that your best bet is to use FFI to a C-based MPI library. There have been a handful of attempts to create Rust bindings to MPI, but I'm not sure that any of them are particularly complete.
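For illustration, here is a minimal sketch of what that might look like with the `mpi` crate (the rsmpi project), which binds to a system MPI implementation via FFI; the exact API surface is an assumption on my part, so check the crate's documentation before relying on it:

```rust
// Hypothetical sketch using the `mpi` crate (rsmpi), which wraps a C MPI
// library via FFI. Run under an MPI launcher, e.g.:
//   mpirun -n 2 ./target/release/hello_mpi
use mpi::traits::*;

fn main() {
    // Initialize the MPI environment; it is finalized when `universe` drops.
    let universe = mpi::initialize().expect("MPI failed to initialize");
    let world = universe.world();
    let rank = world.rank();
    let size = world.size();
    println!("Hello from rank {} of {}", rank, size);

    // Simple point-to-point message between two ranks.
    if size >= 2 {
        if rank == 0 {
            world.process_at_rank(1).send(&42i32);
        } else if rank == 1 {
            let (msg, _status) = world.process_at_rank(0).receive::<i32>();
            println!("rank 1 received {}", msg);
        }
    }
}
```

The programming model is the same SPMD style as C MPI: every rank runs the same binary and branches on its rank.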

How do Hardware Description Languages differ from General Purpose languages at the low level?

Question:
How do hardware description languages (HDLs) differ from general-purpose languages such as Python, Java, etc.? In particular, what is the primary trade-off that makes general-purpose languages sub-optimal for FPGAs when compared to VHDL and Verilog?
Context:
I'm a programmer, but I definitely work at a high level of abstraction: JavaScript, tinkering with APIs, etc. My low-level knowledge is very limited, but I am playing around with an FPGA and have some novice questions that I cannot answer with Google or wikis.
Considering I am a novice, please do not vote harshly against this post. Just state your suggestions for the question and I will happily revise! :)
Example:
For example, why isn't everyone just coding FPGAs and ASICs with Python or C# instead of Verilog or VHDL? I understand that there are some Python libraries, but I have read that they are limited in their viable use cases. I would greatly appreciate someone shedding some light on why HDLs are necessary and beneficial, and why general-purpose languages are not optimal in comparison for these scenarios.
Thanks in advance!
This is a broad, opinionated question, but I think there is a short answer. In some sense, they are all programming languages, i.e. text descriptions that get compiled into a set of machine instructions to be executed on a host machine (software).
But an HDL is also a text description that gets compiled into a set of machine instructions to build another machine (hardware).
Technically, any programming language could be used to describe hardware (SystemC in C++, for example), but Verilog and VHDL were specifically developed to model and simulate hardware most efficiently.
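A tiny example makes the distinction concrete. The Verilog module below is a hypothetical 2-to-1 multiplexer: it is not a list of instructions executed in order, but a description of a circuit whose gates all exist and operate in parallel once synthesized:

```verilog
// 2-to-1 multiplexer: when sel is 1, y follows b; otherwise y follows a.
module mux2 (
    input  wire a,
    input  wire b,
    input  wire sel,
    output wire y
);
    // A continuous assignment: this describes hardware that is always
    // "running", not a statement that executes once and moves on.
    assign y = sel ? b : a;
endmodule
```

A general-purpose language assumes a processor stepping through instructions; an HDL must instead express structure, concurrency, and timing, which is why synthesis tools accept only a restricted, hardware-mappable subset even of Verilog and VHDL themselves.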

Advanced Rudimentary Computing?

Let's say that my definition of 'rudimentary programming' refers to the fundamental tools employed for a computer to perform a task.
Considering programming rudiments, the learning spectrum usually looks something like this:
Variables, data types and variable memory
Arrays/Lists and their manipulation
Looping and conditionals
Functions
Classes
Multithreading/processing
Streams (hard-disk and web)
My question is, have I missed any of the major rudiments? Is there a 'next' to the spectrum that still eludes me?
I think you missed the most important one: algorithms. That means understanding their complexity, knowing when to use them and why, and, more importantly, how to implement them.
I'm pretty sure you already know a lot about algorithms, but if you think your tool knowledge (i.e. the programming languages) is good enough, you should start focusing more on algorithms.
A great book to start with is Introduction to Algorithms by Thomas H. Cormen.
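As a minimal illustration of why complexity matters (a made-up example in Python, not taken from the book), the same membership test on sorted data can be done in O(n) or O(log n):

```python
import bisect

data = list(range(1_000_000))  # large sorted input

def linear_contains(xs, target):
    # O(n): may examine every element.
    for x in xs:
        if x == target:
            return True
    return False

def binary_contains(xs, target):
    # O(log n): halves the search space each step (requires sorted input).
    i = bisect.bisect_left(xs, target)
    return i < len(xs) and xs[i] == target

# Same answer, but on a million items the linear version can do ~1,000,000
# comparisons while the binary version does about 20.
assert linear_contains(data, 999_999) == binary_contains(data, 999_999)
```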

Functional programming style vs performance in Ruby

I love functional programming and I love Ruby as well. If I can code an algorithm in a functional style rather than in an imperative style, I do it. I avoid updating or reusing variables as much as possible, avoid "bang!" methods, and use "map", "reduce", and similar functions instead of "each" or dangerous loops, etc. Basically I try to follow the rules of this article.
The problem is that the functional solution is usually much slower than the imperative one. This article gives clear and scary examples of that, up to 15-20 times slower in some cases. After reading it and doing some benchmarks, I am afraid to keep using the functional style, at least in Ruby.
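To make the trade-off concrete, here is a small sketch (not a benchmark) of the same computation in both styles:

```ruby
nums = (1..1_000_000)

# Functional style: no mutation, but map builds a full intermediate array.
functional = nums.map { |n| n * n }.reduce(:+)

# Imperative style: a mutable accumulator, no intermediate array;
# this is typically the faster of the two.
imperative = 0
nums.each { |n| imperative += n * n }

# A middle ground that stays expression-oriented yet avoids the
# intermediate array (Enumerable#sum with a block, Ruby >= 2.4):
middle = nums.sum { |n| n * n }
```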
On the other hand, I feel more comfortable writing code in a functional style because it is smart and clean, it tends to produce fewer bugs, and I think it is more "correct", especially nowadays when we can use concurrency and parallelism for better performance.
So I am very confused about which style to use in Ruby. Any wise recommendation will be appreciated.

Should I focus on code quality while rapid prototyping?

When you are rapidly prototyping features, should you really worry about code quality and optimization?
Looking back at the number of times a "prototype" ended up becoming the product, the answer would be yes.
Don't forget that you are not only prototyping the feature, you are also prototyping the design.
Yes to quality. No to optimization. This question should be community wiki.
I would focus on clarity.
If quality and optimisation are requirements for the prototype then yes. If not, then no. Just because you are doing rapid prototyping you don't abandon standard operating procedures like programming to a specification, using source code control, testing, etc. It is, perhaps, relatively unusual for high performance to be a requirement for a rapidly developed prototype, but that's another matter.
Yes. Focus on quality, clarity, and simplicity, AND on comments to explain what it's doing and why (don't bother with the how unless it's really complicated; that is what the code is for).
Just about all work we do here starts out as a "what if?", and if it works, we continue with it.
We write comments that describe what should happen, before we write the code, then write the code to match the comments. Writing the comments first forces you to think about how you will structure it all. We've found that it prevents a lot of FALSE assumptions and actually makes development faster.
It also makes reusing the code when you come back to it so much easier - you don't need to read the code and understand it, just read the comments. Don't go for the nonsense of self-documenting code; all that does is self-document the bugs, since you've got nothing to check the code against to see whether it matches the comments/documentation.
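For instance, a hypothetical sketch of the comments-first workflow (the task, names, and log format are invented; only the workflow is the point):

```python
# Comments written first, describing what should happen; code filled in after.
def failures_by_client(log_path):
    # Read the log file line by line.
    # Keep only lines that record a server error (status code "500").
    # Count errors per client address (assumed to be the first field).
    # Return (client, count) pairs, most failures first.
    counts = {}
    with open(log_path) as f:
        for line in f:
            fields = line.split()
            if "500" in fields:  # assumption about the log layout
                client = fields[0]
                counts[client] = counts.get(client, 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])
```

Each comment line existed before the code beneath it, so checking that the code matches the comments becomes a mechanical review step.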
You can worry about optimization later - see this description of a huge win I got by changing from MFC CMaps to STL when working on a hobby project parsing some Apache log files. This was done after I had the initial concept working and only when it became apparent there was a problem with performance.
