It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 9 years ago.
When talking about execution time, I often hear that low-level programming languages perform a bit better than high-level ones.
Of course, a low-level program can perform worse than a high-level one depending on the programmer and the algorithms. But if we use the minimal code needed for each task, will there usually be differences in execution time due to the abstraction level of the language?
Also, does anyone know any good books on this kind of topic?
First off, low-level vs. high-level is not a well-defined language attribute. It tends to be used to refer to the accessibility of primitive machine capabilities, to the abstraction facilities the language provides, and again to describe the specific abstraction level of available libraries. And while these aspects are arguably correlated, they are not dependably so -- none of them is required for another.
Lack of access to machine primitives naturally removes flexibility for general-purpose performance programming, but languages without such access can achieve high performance for particular domains by using libraries and/or runtimes that are specialized for that domain (e.g., NumPy and Matlab performance with linear algebra).
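To make the NumPy point above concrete, here is a minimal sketch (the specific numbers and function names are illustrative, not from the original answer): the same dot product written as an interpreted Python loop and as a call that NumPy delegates to compiled BLAS code. The high-level version gives up no performance in this domain because the inner loop never runs in the interpreter.

```python
# Illustration: NumPy delegates the inner loops of linear algebra to
# optimized native code, so a high-level language can reach low-level
# performance within that domain.
import numpy as np

def dot_pure_python(a, b):
    """Dot product written directly in Python; every step is interpreted."""
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

a = np.arange(1_000_000, dtype=np.float64)
b = np.ones(1_000_000)

# Same mathematics, but the loop runs in compiled BLAS code:
fast = float(a @ b)
slow = dot_pure_python(a, b)
print(fast == slow)  # same result; the native version is far cheaper
```

All the values and partial sums here are exact integers below 2^53, so both versions produce the identical float result; only the cost differs.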
Poor abstraction facilities make design, development, use, reuse, and maintenance harder. This doesn't necessarily impact potential performance directly -- but practically speaking, efforts put into mechanics aren't being used to improve performance. Likewise, lack of high-level libraries doesn't necessarily impact performance directly, but poorly-built re-implementations of essential facilities can have the same effect.
So, to answer your question: in no aspect does language "level" generally determine performance. But, there is always a "but"...
Additionally, there are costs to "wide-spectrum" languages such as C++, which combine access to primitive capabilities with good abstraction facilities and extensive libraries: complexity and cognitive load, which (IMHO) are the drivers of the aforementioned correlation between the different aspects of language "level".
Closed 9 years ago.
Why do we need to organize code? What are the objectives of organizing code? Organizing code is a time-consuming process until it becomes a habit. I am trying to estimate the costs and benefits of organizing programming code.
Imagine you are at the library; none of the books at the library are organized. If your work depends on finding references in books, you will waste a lot of time searching for the books. This may be a quick process if you have only a few hundred books, but when you have thousands or tens of thousands of books, you will need to ensure the books stay organized in order to efficiently locate them. You could also say "Organizing books is a time consuming process", but the end result is that it saves you time when/if they are kept organized.
The same thing happens as software becomes more complex. People won't want to add poorly organized code to a well-organized codebase, and programs that are complex and poorly organized (or not organized at all) are hard to use and maintain.
One of the biggest problems if you are faced with organizing a codebase is that it's very monotonous and time consuming -- it's easy to (unknowingly) introduce changes which result in bugs; these changes should receive significant testing (but it's not likely that a disorganized codebase has high test coverage). Disorganized programs which are reused and/or have long lifetimes usually require significantly more maintenance time over the life of the program.
If you're just banging out a proof of concept that is 100 lines and will remain independent of all other programs, you don't have to obsess over the organization of that program.
Organized code becomes much easier to maintain and extend over time than code that is placed wildly about. That's why programmers take so much care to name variables/methods/etc. well, keep methods short and specific, and so on. I would recommend reading Clean Code by Robert Martin.
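A hypothetical before/after in the spirit of that advice (the function names here are invented for illustration): short, well-named functions make intent obvious at the call site, which is most of what "organized" means at small scale.

```python
# Before: one opaque block; the meaning is buried in the loop body.
def f(xs):
    r = []
    for x in xs:
        if x % 2 == 0:
            r.append(x * x)
    return r

# After: each step named, each function doing one thing.
def is_even(n):
    return n % 2 == 0

def square(n):
    return n * n

def squares_of_evens(numbers):
    return [square(n) for n in numbers if is_even(n)]

print(squares_of_evens([1, 2, 3, 4]))  # [4, 16]
```

Both versions behave identically; the organized one is the one a maintainer can read, test, and extend without reverse-engineering the loop.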
Closed 10 years ago.
I'm interested in designing a scheme flavor for doing audio synthesis, but I'm quite concerned with doing proper garbage collection when catering to the low latencies required for audio. I was wondering if someone in the field might be able to point me towards a garbage collection algorithm that might be suitable for this sort of environment. I was looking at realtime garbage collection, which would seem to make sense, as I'd like to bound the amount of time that the garbage collector takes so I don't get pauses in the audio... though perhaps a collector that's just "fast enough" and distributes its work well would be good enough? I'm not at all worried about multithreading/multiprocessing, and I'm definitely not worried about wasting tons of space in search of these goals. I'm after predictable, simple, and fast.
Thanks!
In a single-process setting on Unix-like OSes, I heard of an amusing approach. (It was experimentally implemented for Nickle, but I don't know if it got merged to master.)
It uses a simple mark-sweep collector, but here's the trick: When you want to run a mark phase, fork(). The child process runs the marker, and sends a list of objects to free over a pipe back to the parent, which can incrementally free them at leisure.
This works because the child is operating in a copy-on-write snapshot of the parent's memory state, maintained with reasonable efficiency by the operating system's memory manager with help from a hardware MMU. Once an object becomes unreachable, it won't become referenced again, so marking from an old snapshot always gives a conservative estimate of objects that can be freed.
edit: Best reference I can find for this work is the Summer of Code proposal for it: http://web.cecs.pdx.edu/~juenglin/revamping.html
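A minimal sketch of the fork trick in Python, purely for illustration (the original work targeted Nickle; the dict-of-ids heap model and the function names here are assumptions of this sketch, not the real implementation). It requires a Unix-like OS for `os.fork`. The child marks against the copy-on-write snapshot and reports unreachable object ids over a pipe; the parent frees them at leisure.

```python
import os

def mark_reachable(heap, roots):
    """Mark phase: return the set of ids reachable from roots.
    The heap is modeled as a dict of object id -> list of referenced ids."""
    seen = set()
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj in seen:
            continue
        seen.add(obj)
        stack.extend(heap.get(obj, []))
    return seen

def collect(heap, roots):
    """Fork a child to mark against a copy-on-write snapshot; the child
    reports unreachable ids back over a pipe, and the parent frees them."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                       # child: runs the marker on the snapshot
        os.close(r)
        live = mark_reachable(heap, roots)
        dead = [i for i in heap if i not in live]
        with os.fdopen(w, "w") as f:
            f.write(",".join(map(str, dead)))
        os._exit(0)
    os.close(w)                        # parent: the mutator keeps running
    with os.fdopen(r) as f:
        data = f.read()                # in reality, read and free incrementally
    os.waitpid(pid, 0)
    for i in filter(None, data.split(",")):
        heap.pop(int(i), None)         # "free" each unreachable object
    return heap

# Toy heap: 1 -> 2 -> 3 is live; 4 <-> 5 is an unreachable cycle.
heap = {1: [2], 2: [3], 3: [], 4: [5], 5: [4]}
collect(heap, roots={1})
print(sorted(heap))  # [1, 2, 3]
```

Because the child sees a frozen snapshot, anything it finds unreachable can only stay unreachable in the parent, so the result is always a safe (conservative) list to free.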
Closed 11 years ago.
I have an interpreter for a lisp-style language in F#, and have just gotten into the optimization phase. Simple tests of the evaluator reveal that I need to optimize it in an extreme manner. However, I don't have a general background in F# performance or optimization.
Are there any good general resources on F# program optimization? Particularly useful would be tips on keeping cache coherency and on surprising performance issues with primitives. A cursory search hasn't revealed much on the internet.
Thank you!
Generic performance optimization tricks are mostly myths. As @svick said, there is no better way than profiling your program, identifying hot spots, and optimizing them using concrete benchmarks.
Since you asked, here is some information floating around:
A bunch of good answers about F# performance: C# / F# Performance comparison
Using inline for performance optimization: Use of `inline` in F#
Using structs instead of records for better performance: http://theburningmonk.com/2011/10/fsharp-performance-test-structs-vs-records/
Array-oriented programming for performance-critical code: http://sharp-gamedev.blogspot.com/2010/03/thoughts-about-f-and-xbox-games.html
Concrete case studies of performance optimization:
FSharp runs my algorithm slower than Python
F# seems slower than other languages... what can I do to speed it up?
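The profile-first workflow recommended above is language-agnostic; here is a minimal sketch of it in Python (an F# user would reach for a .NET profiler instead, and the `hot`/`cold` functions are invented for illustration). The point is the loop: profile, find the dominant entry, optimize that, and confirm with a concrete benchmark.

```python
import cProfile
import io
import pstats

def hot(n):
    # Deliberately quadratic: the profiler should flag this function.
    return sum(i * j for i in range(n) for j in range(n))

def cold(n):
    return sum(range(n))

def program():
    return hot(300) + cold(300)

pr = cProfile.Profile()
pr.enable()
program()
pr.disable()

out = io.StringIO()
pstats.Stats(pr, stream=out).sort_stats("cumulative").print_stats(5)
# The report's top entries point at hot(); that is where optimization
# effort pays off, regardless of the language involved.
print("hot" in out.getvalue())
```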
Closed 11 years ago.
When dealing with a scripting engine, I'd expect it to run somewhat slower than code compiled to assembly. What sort of efficiency numbers are there for the major scripting languages (if any)?
Or is this a futile question?
Thanks.
Go to http://shootout.alioth.debian.org/ for actual numbers.
As you can see, languages that are usually compiled (e.g., C and C++) destroy interpreted languages in terms of performance (both running time and memory).
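You can observe a small-scale version of that gap without leaving one language. A rough sketch, assuming CPython: the same summation written as an interpreted Python loop and as a call to the built-in `sum()`, whose loop runs in compiled C.

```python
import timeit

def interpreted_sum(n):
    total = 0
    for i in range(n):   # each iteration pays full interpreter overhead
        total += i
    return total

n = 1_000_000
t_loop = timeit.timeit(lambda: interpreted_sum(n), number=5)
t_builtin = timeit.timeit(lambda: sum(range(n)), number=5)

# Same result either way; the native loop is typically several times faster.
print(interpreted_sum(n) == sum(range(n)))
print(t_builtin < t_loop)
```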
But the question is odd.
Any scripting language can be made compilable into native code, and vice versa (e.g., HipHop, a PHP-to-C++ compiler).
And language aside, some compilers are much better than others because they know how to optimize the code to run faster natively. And they also differ between single-core and multi-core systems.
So if I can take a guess... if you're making a decision on what language to use based on performance (especially... ESPECIALLY if you're talking about scripting languages), you're probably making a mistake. There are many considerations beyond performance that impact the selection of a programming language for a project.
If I guessed wrong, sorry!
Closed 12 years ago.
Is it performance, scalability, maintainability, usability, or something else? What do you always strive to achieve while creating good software, and why?
I always prefer maintainability above everything else. It's OK if the software isn't optimized or doesn't have a great user interface; it has to be maintainable. I'm sure each of us has something important to say here. The whole idea is to gather as many perspectives as possible for improvement in software development.
There's a false premise here: that you want to optimize only one single aspect.
You need to strike a balance, even if that means none of the aspects is perfectly optimised.
For example, your suggestion of striving for maintainability is futile if the usability suffers so much that no-one wants to use your product.
(It could even be interpreted as a little bit selfish, putting your priorities for an easier life over those of the customer.)
Similarly, it's frustrating to see people striving for the fastest possible performance in a component when there is little customer need for it, especially when they are hurting maintainability or missing the opportunity to improve security.
It has to do what the customer wants it to do
It doesn't matter how fast, efficient, maintainable, or testable a piece of software is; if it doesn't do what the customer wants, it's of no use to them.
Good usability for the end user, and some elegance in the code for the fellow developers who might have to work on the same project.
Readability.
If code is readable, it's easier to understand! Things like performance optimizations can come later, if required, after profiling your code.
I think all the other "goals" you mention can be built on top, provided you have a readable - and therefore understandable - codebase.