How to optimize F# programs generally [closed] - performance

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 11 years ago.
I have an interpreter for a Lisp-style language in F#, and have just entered the optimization phase. Simple tests of the evaluator reveal that it needs substantial optimization. However, I don't have a general background in F# performance or optimization.
Are there any good general resources on optimizing F# programs? Tips on maintaining cache coherence and on surprising performance pitfalls in the primitives would be particularly useful. A cursory web search hasn't revealed much.
Thank you!

Performance optimization tricks are all myths. As @svick said, there is no better way than profiling your program, identifying hot spots, and optimizing them using concrete benchmarks.
Since you asked, here is some information floating around:
A bunch of good answers about F# performance: C# / F# Performance comparison
Using inline for performance optimization: Use of `inline` in F#
Using structs instead of records for better performance: http://theburningmonk.com/2011/10/fsharp-performance-test-structs-vs-records/
Array-oriented programming for performance-critical code: http://sharp-gamedev.blogspot.com/2010/03/thoughts-about-f-and-xbox-games.html
Concrete case studies of performance optimization:
FSharp runs my algorithm slower than Python
F# seems slower than other languages... what can I do to speed it up?

Related

Execution time, high- vs. low-level programming language [closed]

Closed 9 years ago.
When talking about execution time in high- vs. low-level programming languages, I often hear that low-level languages perform a bit better than high-level ones.
Of course, a low-level language can perform worse than a high-level one, depending on the programmer and the algorithms. But if we compare the minimal code needed to perform a given task, will there usually be differences in execution time due to the abstraction level of the languages?
Also, does anyone know any good books on this kind of topic?
First off, low-level vs. high-level is not a well-defined language attribute. It tends to be used to refer to the accessibility of primitive machine capabilities, to the abstraction facilities the language provides, and also to the abstraction level of the available libraries. And while these aspects are arguably correlated, they are not dependably so -- none of them implies another.
Lack of access to machine primitives naturally removes flexibility for general-purpose performance programming, but languages without such access can achieve high performance for particular domains by using libraries and/or runtimes that are specialized for that domain (e.g., NumPy and Matlab performance with linear algebra).
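To make the NumPy point concrete, here is a small sketch (the `dot_python` helper is purely illustrative): the same dot product computed by an interpreted loop and by NumPy's `dot`, which dispatches to optimized native (BLAS) code. The results match; only the machinery underneath differs.

```python
import numpy as np

def dot_python(xs, ys):
    """Pure-Python dot product: every multiply-add runs in the interpreter."""
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

xs = [float(i) for i in range(1000)]
ys = [float(i) for i in range(1000)]
a = np.array(xs)
b = np.array(ys)

# Same result, but np.dot executes in specialized native code.
assert abs(dot_python(xs, ys) - float(np.dot(a, b))) < 1e-6
```

On inputs of any real size, the NumPy call is typically orders of magnitude faster, which is the sense in which a language without machine-level access can still be fast in its specialized domain.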
Poor abstraction facilities make design, development, use, reuse, and maintenance harder. This doesn't necessarily impact potential performance directly -- but practically speaking, efforts put into mechanics aren't being used to improve performance. Likewise, lack of high-level libraries doesn't necessarily impact performance directly, but poorly-built re-implementations of essential facilities can have the same effect.
So, to answer your question: in no aspect does language "level" generally determine performance. But, there is always a "but"...
Additionally, there are costs to "wide-spectrum" languages such as C++, which combine access to primitive capabilities with good abstraction facilities and extensive libraries: complexity and cognitive load, which (IMHO) are the drivers of the aforementioned correlation between the different aspects of language "level".

VHDL for scientific computing [closed]

Closed 10 years ago.
I was wondering if folks use VHDL/FPGAs in scientific computing.
An example scenario I was thinking of:
Construct an arbitrary precision floating point adder
Configure an FPGA board to then add such numbers
So I was looking for references (example code) where VHDL/FPGAs have been used in scientific computing.
Thanks in advance.
There are several vendors who build heterogeneous computing systems using FPGAs. I doubt you'll find complete source code for such systems.
SRC Computing
Convey Computer
Mitrionics. A reseller of other systems.
Novo-G. An academic project.
Look into radio astronomy. With arrays such as the VLA and ALMA, the massively parallel correlator is the part that could be considered most important. These typically use FPGAs but could use custom-designed chips for extreme performance at higher cost.
Some fine reading:
https://science.nrao.edu/facilities/cdl/digital-signal-processing
http://web.njit.edu/~gary/728/Lecture8.html

On average, how efficient is a scripting engine? [closed]

Closed 11 years ago.
When dealing with a scripting engine, I'd expect it to be a fraction slower than code compiled to native machine code. What sort of efficiency numbers are there for the major scripting languages (if any)?
Or is this a futile question?
Thanks.
Go to http://shootout.alioth.debian.org/ for actual numbers.
As you can see, languages that are usually compiled (e.g., C, C++) destroy interpreted languages in terms of performance (both running time and memory).
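As a rough illustration of where that gap comes from, you can see per-operation interpreter overhead even within a single language by timing an interpreted loop against the same algorithm implemented in C as a builtin. This is a sketch, not a rigorous benchmark:

```python
import timeit

data = list(range(10_000))

def sum_interpreted(xs):
    # Each iteration executes several bytecode instructions in the interpreter.
    total = 0
    for x in xs:
        total += x
    return total

loop_time = timeit.timeit(lambda: sum_interpreted(data), number=200)
builtin_time = timeit.timeit(lambda: sum(data), number=200)  # C-implemented builtin

# Same algorithm, same answer; only the execution machinery differs.
assert sum_interpreted(data) == sum(data)
```

On CPython the builtin is typically several times faster for the same work, which is the interpreter tax the shootout numbers reflect at a larger scale.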
But the question is odd.
Any scripting language can be compiled to native code (i.e., assembly), and vice versa (e.g., HipHop, a PHP-to-C++ compiler).
And language aside, some compilers are much better than others because they know how to optimize the code to run faster natively. And they also differ between single-core and multi-core systems.
So if I may take a guess: if you're choosing a language based on performance (especially... ESPECIALLY if you're talking about scripting languages), you're probably making a mistake. Many considerations beyond performance influence the choice of a programming language for a project.
If I guessed wrong, sorry!

Which single software quality aspect you always strive to achieve? [closed]

Closed 12 years ago.
Is it performance, scalability, maintainability, usability, or something else? What do you always strive to achieve when creating good software, and why?
I always put maintainability above everything else. It's OK if the software isn't optimized or lacks a great user interface - it has to be maintainable. I'm sure each of us has something important to say here. The whole idea is to gather as many perspectives as possible on improving software development.
There's a false premise here: that you want to optimize only one single aspect.
You need to strike a balance, even if that means none of the aspects is perfectly optimised.
For example, your suggestion of striving for maintainability is futile if the usability suffers so much that no-one wants to use your product.
(It could even be interpreted as a little bit selfish, putting your priorities for an easier life over those of the customer.)
Similarly, it's frustrating to see people striving to get the fastest possible performance out of a component when there is little customer need for it, especially when they are hurting maintainability or missing the opportunity to improve security in the process.
It has to do what the customer wants it to do.
It doesn't matter how fast, how efficient, how maintainable, or how testable a piece of software is: if it doesn't do what the customer wants, it's of no use to them.
Good usability for the end user, and some elegance in the code for the fellow developers who might have to work on the same project.
Readability.
If code is readable, it's easier to understand! Things like performance optimizations can come later, if profiling shows they're needed.
I think all the other 'goals' you mention can be built on top, provided you have a readable - and therefore understandable - codebase.

Most useful parallel programming algorithm? [closed]

Closed 12 years ago.
I recently asked a question about parallel programming algorithms, which was closed quite fast due to my poor ability to communicate my intent:
https://stackoverflow.com/questions/2407631/what-is-the-most-useful-parallel-programming-algorithm-closed
I had also recently asked another question, specifically:
Is MapReduce just a generalisation of another programming principle?
That other question was specifically about MapReduce, asking whether MapReduce is a more specific version of some other concept in parallel programming. This question (about useful parallel programming algorithms) is about the whole range of algorithms for parallel programming. You will have to excuse me, though, as I am quite new to parallel programming; maybe MapReduce, or something more general than MapReduce, is the "only" parallel programming construct available, in which case I apologize for my ignorance.
There are probably two "main" parallel programming constructs.
Map/Reduce is one. At a high, ultra-generic level, it's just parallel divide-and-conquer. Send out the individual bits to parallel handlers, and combine the results when they arrive.
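That divide-and-conquer shape can be sketched in a few lines. Here it is in Python with a thread pool; the sum-of-squares task and the chunk sizes are just placeholders:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def mapper(chunk):
    # Map step: each worker independently processes one piece of the input.
    return sum(x * x for x in chunk)

def combine(a, b):
    # Reduce step: merge partial results as they arrive.
    return a + b

chunks = [range(0, 250), range(250, 500), range(500, 750), range(750, 1000)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = pool.map(mapper, chunks)    # fan out to parallel handlers
    result = reduce(combine, partials, 0)  # combine the results

assert result == sum(x * x for x in range(1000))
```

The shape, not the task, is the point: split, map in parallel, reduce.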
The other main parallel programming construct is the pipeline... pieces of work go through a series of stages, each of which can be run in a parallel thread.
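A minimal sketch of the pipeline shape, again in Python: two stages, each in its own thread, connected by queues, with a sentinel marking the end of the stream. While one item is being transformed in stage two, the next can already be parsed in stage one.

```python
import queue
import threading

raw = queue.Queue()
parsed = queue.Queue()
done = queue.Queue()
SENTINEL = object()  # end-of-stream marker passed down the pipeline

def parse_stage():
    # Stage 1: parse strings into integers.
    while (item := raw.get()) is not SENTINEL:
        parsed.put(int(item))
    parsed.put(SENTINEL)

def square_stage():
    # Stage 2: transform each parsed value.
    while (item := parsed.get()) is not SENTINEL:
        done.put(item * item)
    done.put(SENTINEL)

threads = [threading.Thread(target=parse_stage),
           threading.Thread(target=square_stage)]
for t in threads:
    t.start()
for s in ["1", "2", "3"]:
    raw.put(s)
raw.put(SENTINEL)
for t in threads:
    t.join()

results = []
while (item := done.get()) is not SENTINEL:
    results.append(item)

assert results == [1, 4, 9]
```

Each stage only ever talks to its neighboring queues, which is what makes the stages independently parallelizable.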
I think that just about any parallelization algorithm is going to boil down to one of those two. I could be wrong, of course.
