Answers to similar questions only go as far as making the distinction between Apps and Systems Hungarian. Is there any comprehensive list of Apps Hungarian prefixes that could be used to keep the rest of my code consistent with Windows API identifiers?
Hungarian notation is the practice of adding prefixes to the names of variables to give additional information about the variable. The C++ Core Guidelines discourage prefix notation (for example, Hungarian notation). Internally, the Windows team no longer uses it, but its use remains in samples and documentation.
The notation was conceived to let you know a variable's type at a glance, but modern development environments have largely removed the need for this. Whether to use Hungarian notation in your own code is ultimately up to you.
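For illustration, here is a small sample of the widely documented prefixes as they might appear in Windows C++ code; this is only a sketch, not the comprehensive list, which the references below provide:

```cpp
#include <windows.h>

// A small sample of widely documented Hungarian prefixes (illustrative,
// not exhaustive; see the references below for the fuller lists):
bool   fDone;       // f    = flag / boolean
int    cItems;      // c    = count
char   chInitial;   // ch   = character
char*  szName;      // sz   = zero-terminated string
DWORD  dwFlags;     // dw   = DWORD (32-bit unsigned)
WORD   wVersion;    // w    = WORD (16-bit unsigned)
HWND   hwndMain;    // h    = handle (hwnd = window handle)
int*   pnCount;     // p    = pointer (here to n = integer)
LPCSTR lpszTitle;   // lp   = long pointer, sz = zero-terminated string
```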
For a more comprehensive list of Apps Hungarian prefixes, I suggest you refer to: Coding Style Conventions, Windows Coding Conventions and Hungarian Notation.
For many programming languages there are style guides available, e.g. PEP8 for Python, this Matlab style guide, or the style guides by Google. For Modelica I found the conventions described in the User's Guide, but is there something more comprehensive available? And, ideally, a tool that helps with re-formatting, indentation, etc.?
The guidelines in the Modelica User's Guide are the only ones I am aware of. The topic has been discussed several times at the design meetings, and I've written one paper that discusses it but doesn't really propose concrete guidelines.
Part of the issue is that while the Modelica Association might have its guidelines (as you've seen), they don't represent any particular business's or industry's guidelines, which might be different. In other words, I could envision many different guidelines floating around, tailored to specific types of models or specific industry conventions. But the Modelica ones are the only ones I am specifically aware of (although it would not surprise me if large organizations using Modelica had their own formal style guidelines).
I'm learning Scheme and until now have been using Guile. I'm really just learning as a way to teach myself a functional programming language, but I'd like to publish an open source project of some sort to reinforce the study - not sure what yet... I'm a web developer, so probably something webby.
It's becoming apparent that publishing Scheme code isn't very easy to do, with all these different implementations and no real standards beyond the core of the language itself (R5RS). For example, I'm almost certainly going to need to do basic I/O on disk and over a TCP socket, along with string manipulation such as scanning/regex, which seems not to be covered by R5RS, unless I'm missing it in the document. It seems like Scheme is more of a "concept" than a practical language... is this a fair assessment? Perhaps I should look to something like Haskell if I want to learn a functional programming language that lends itself more to use in open source projects?
In reality, how much pain do the differing Scheme implementations pose when you want to publish an open source project? I don't really fancy having to maintain five different versions of basic functions like string manipulation across the various mainstream implementations (Chicken, Guile, MIT, Racket). How many people actually write Scheme for cross-implementation compatibility, as opposed to being tightly coupled to the library functions that only exist in their own implementation?
I have read http://www.ccs.neu.edu/home/dorai/scmxlate/scheme-boston/talk.html, which doesn't fill me with confidence ;)
EDIT | Let's re-define "standard" as "common".
I believe that in Scheme, portability is a fool's errand, since Scheme implementations are more different than they are similar, and there is no single implementation that other implementations try to emulate (unlike Python and Ruby, for example).
Thus, portability in Scheme is analogous to using software rendering for writing games "because it's in the common subset between OpenGL and DirectX". In other words, it's a lowest common denominator—it can be done, but you lose access to many features that the implementation offers.
For this reason, while SRFIs generally have a portable reference implementation (where practical), some of them are accompanied by notes that a quality Scheme implementation should tailor the library to use implementation-specific features in order to function optimally.
A prime example is case-lambda (SRFI 16); it can be implemented portably, and the reference implementation demonstrates it, but it's definitely less optimal than a built-in case-lambda, since you have to implement function dispatch in "user" code.
Another example is stream-constant from SRFI 41. The reference implementation uses an O(n) simulation of circular lists for portability, but any decent implementation should adapt that function to use real circular lists so that it's O(1).†
The list goes on. Many useful things in Scheme are not portable—SRFIs help make more features portable, but there's no way that SRFIs can cover everything. If you want to get useful work done efficiently, chances are pretty good you will have to use non-portable features. The best you can do, I think, is to write a façade to encapsulate those features that aren't already covered by SRFIs.
† There is actually now a way to implement stream-constant in an O(1) fashion without using circular lists at all. Portable and fast for the win!
Difficult question.
Most people decide to be pragmatic. If portability between implementations is important, they write the bulk of the program in standard Scheme and isolate the non-standard parts in (smallish) libraries. There have been various approaches to how exactly to do this. One recent effort is SnowFort.
http://snow.iro.umontreal.ca/
An older effort is SLIB.
http://people.csail.mit.edu/jaffer/SLIB
If you look for - or ask about - libraries for regular expressions and lexers/parsers, you'll quickly find some.
Since the philosophy of R5RS is to include only those language features that all implementors agree on, the standard is small - but also very stable.
However for "real world" programming R5RS might not be the best fit.
Therefore R6RS (and R7RS?) include more "real world" libraries.
That said, if you only need portability because it seems to be the Right Thing, then reconsider carefully whether you really want to put the effort in.
I would simply write my program on the implementation I know best, then port it afterwards if necessary. This often turns out to be easier than expected.
I write a blog that uses Scheme as its implementation language. Because I don't want to alienate users of any particular implementation of Scheme, I write in a restricted dialect of Scheme that is based on R5RS plus syntax-case macros plus my Standard Prelude. I don't find that overly restrictive for the kind of algorithmic programs that I write, but your needs may be different. If you look at the various exercises on the blog, you will see that I wrote my own regular-expression matcher, that I've done a fair amount of string manipulation, and that I've snatched files from the internet by shelling out to wget (I use Chez Scheme -- users have to provide their own non-portable shell mechanism if they use anything else); I've even done some limited graphics work by writing ANSI terminal sequences.
I'll disagree just a little bit with Jens. Instead of porting afterwards, I find it easier to build in portability from the beginning. I didn't use to think that way, but my experience over the last three years shows that it works.
It's worth pointing out that modern Scheme implementations are themselves fairly portable; you can often port whole programs to new environments simply by bringing the appropriate Scheme along. That doesn't help library programmers much, though, and that's where R7RS-small, the latest Scheme definition, comes in. It's not widely implemented yet, but it provides a larger common core than R5RS.
Since Excel Solver is quite slow to run thousands of optimizations (the reason being that it uses the spreadsheet as its interface), I'm trying to implement a similar (problem-specific) solver in C++ (with Visual Studio 2010, on a Win 7 64-bit platform). I would include the DLL via a Declare statement in VBA and already have experience doing this, so that is not the problem.
My problem is minimizing the sum of squared errors between empirical data and a target function which is non-linear but smooth, subject to non-negativity (X >= 0) or even strict positivity constraints (e.g. X >= 0.00000001), with X denoting the decision variable.
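In symbols (notation mine, not the poster's), with $y_i$ the empirical observations at sample points $t_i$ and $f$ the smooth target function, the problem is:

$$\min_{X \ge 0} \; \sum_{i=1}^{n} \bigl(f(t_i; X) - y_i\bigr)^2$$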
I'm looking for a robust, proven implementation. It may be part of an established library.
For example, I've already looked into what ALGLIB has in store (see http://www.alglib.net/optimization/) and it seems only one of their algorithms accepts bound constraints. I don't know what it's worth, though; that's why I'm trying to gather some opinions.
Or, on another note, would it be advisable to augment ALGLIB's Levenberg-Marquardt algorithm with such basic constraints, for example by rejecting every intermediate solution that does not satisfy my constraints? (I guess that won't do it, but it's still worth asking.)
There are modifications of the Levenberg-Marquardt method that add support for inequality constraints. I know about one library that implements such an algorithm:
levmar (GPL).
If you would like to modify an existing algorithm, rejecting bad solutions won't do; the optimization will likely get stuck. But you can make a variable substitution, e.g. to ensure that X > 0.1 you can use t^2 + 0.1 instead of X.
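A minimal sketch of that substitution, assuming a generic least-squares optimizer that calls a user-supplied residual function (the model below is a made-up stand-in for the actual target function):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative target function; stands in for the real non-linear model.
static double model(double X, double t) { return std::exp(-X * t); }

// Residuals as a generic optimizer would see them: the optimizer varies
// the unconstrained variable u, while the model only ever sees
// X = u*u + 0.1, which satisfies X > 0.1 for every real u.
static void residuals(double u,
                      const std::vector<double>& t,  // sample points
                      const std::vector<double>& y,  // empirical data
                      std::vector<double>& r)        // residuals (output)
{
    const double X = u * u + 0.1;                    // the substitution
    for (std::size_t i = 0; i < t.size(); ++i)
        r[i] = model(X, t[i]) - y[i];
}
```

One caveat worth knowing: the derivative of u*u vanishes at u = 0, so a starting point away from zero helps the optimizer.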
I use this method as a workaround for the lack of built-in box constraints in my program. Here is a quote from Data fitting in the chemical sciences by Peter Gans that describes it better:
https://github.com/wojdyr/fityk/wiki/InequalityConstraints
We find OPTIF9 and UNCMIN to be the standard methods of choice. You should be able to link them in a library and call them from C++, if you don't want to bother compiling Fortran.
A way to put limits on the search space is to transform the parameters, such as by a logit function.
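A quick sketch of that transform (helper names are mine): the optimizer searches over an unbounded variable, and the logistic function (the inverse of the logit) maps it back into the bounded interval:

```cpp
#include <cmath>

// Maps an unconstrained optimizer variable u onto the open interval
// (lo, hi) via the logistic function, so the search space is all reals
// while the model parameter stays strictly inside its bounds.
double to_bounded(double u, double lo, double hi)
{
    return lo + (hi - lo) / (1.0 + std::exp(-u));
}

// The logit itself: the inverse mapping, handy for converting an
// initial guess in (lo, hi) into the optimizer's unbounded space.
double to_unbounded(double x, double lo, double hi)
{
    return std::log((x - lo) / (hi - x));
}
```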
Have you looked into the Microsoft Solver Foundation? The express edition is free and comes with a .NET 4.0 DLL. I found it fairly easy to use. On the other hand, I don't know how large a problem you are talking about: there are some limitations on the number of variables in the express edition.
I would like to ask which formal system would be the most interesting to implement from scratch or reverse-engineer.
I've looked through some existing open-source projects of logical/declarative programming systems. I've decided to build something similar in my free time, or at least to grasp the general ideas of implementation.
It would be great if such a system provided most of the expressive power and conciseness of modern academic investigations into logic and its relation to computational models.
What would you recommend studying, at least at the conceptual level? For example, Lambda-Prolog is interesting particularly because it allows for higher-order relations, but AFAIK it is based on intuitionistic logic and therefore lacks the excluded-middle principle; that's generally a disadvantage for me.
I would also welcome any suggestions about modern logical programming systems which are less popular but more expressive/powerful.
Prolog was the first language that changed my point of view on programming. But later I found it not as high-level as I'd like.
Curry - I've tried only the Münster CC, and found it somewhat inconvenient. Actually, at this point, I decided to stop ignoring Haskell.
Mercury has many things I wanted to see in Prolog. I have high expectations for its ability to distinguish the modes of rules; programs written in Mercury should let the compiler do a lot of optimizations (I guess).
Twelf.
It generalizes Lambda-Prolog significantly, and it's a logical framework and a metalogical framework as well as a logic programming language. If you need a language with a heavy focus on logic as well as computation, it's the best I know of.
If I were to try to extend a logic-based system, I'd choose Prolog Cafe, as it is small, open-source, standards-compliant, and can be easily integrated into Java-based systems.
For the final project in a programming languages course I took, we had to embed a Prolog evaluator in Scheme using continuations and macros. The end result was that you could freely mix Scheme and Prolog code, and even pass arbitrary predicates written in Scheme to the Prolog engine.
It was a very instructive exercise. The first 12 lines of code (the and and or forms) literally took about 6 hours to write and get correct. That was pretty much the search logic, written very concisely using continuations. The rest followed a bit more easily. Then once I added the unification algorithm, it all just worked.
First of all, is this only possible for algorithms that have no side effects?
Secondly, where could I learn about this process? Any good books, articles, etc.?
Coq is a proof assistant that produces correct OCaml output. It's pretty complicated, though. I never got around to looking at it, but my coworker started and then stopped using it after two months; it was mostly because he wanted to get things done quicker, but if you need to verify an algorithm, this might be a good idea.
Here is a course that uses Coq and talks about proving algorithms.
And here is a tutorial about writing academic papers in Coq.
It's generally a lot easier to verify/prove correctness when no side effects are involved, but it's not an absolute requirement.
You might want to look at some of the documentation for a formal specification language like Z. A formal specification isn't a proof itself, but is often the basis for one.
I think that verifying the correctness of an algorithm would be validating its conformance with a specification. There is a branch of theoretical Computer Science called Formal Methods which may be what you are looking for if you need to get as close to proof as you can. From wikipedia,
Formal methods are a particular kind of mathematically based techniques for the specification, development and verification of software and hardware systems.
You will be able to find many learning resources and tools from the multitude of links on the linked Wikipedia page and from the Formal Methods wiki.
Usually proofs of correctness are very specific to the algorithm at hand.
However, there are several well-known tricks that are used and re-used. For example, with iterative algorithms you can use loop invariants (and with recursive algorithms, induction).
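A toy illustration of a loop invariant (example mine, not from the answer): the invariant holds before the loop, is preserved by every iteration, and together with the exit condition implies the postcondition.

```cpp
#include <cassert>

// Sums 0 + 1 + ... + n, with the loop invariant spelled out.
int sum_to(int n)
{
    assert(n >= 0);  // precondition
    int i = 0, s = 0;
    // Invariant: s == 0 + 1 + ... + (i - 1) == i * (i - 1) / 2
    while (i <= n) {
        s += i;      // re-establishes the invariant for i + 1
        ++i;
    }
    // On exit i == n + 1, so the invariant gives s == n * (n + 1) / 2.
    return s;
}
```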
Another common trick is reducing the original problem to a problem for which your algorithm's proof of correctness is easier to show, then either generalizing the easier problem or showing that the easier problem's solution can be translated into a solution to the original problem. Here is a description.
If you have a particular algorithm in mind, you may do better asking how to construct a proof for that algorithm rather than asking for a general answer.
Buy this book: http://www.amazon.com/Science-Programming-Monographs-Computer/dp/0387964800
The Gries book, The Science of Programming, is great stuff. Patient, thorough, complete.
Logic in Computer Science, by Huth and Ryan, gives a reasonably readable overview of modern systems for verifying systems. Once upon a time people talked about proving programs correct - with programming languages which may or may not have side effects. The impression I get from this book and elsewhere is that real applications are different - for instance proving that a protocol is correct, or that a chip's floating point unit can divide correctly, or that a lock-free routine for manipulating linked lists is correct.
ACM Computing Surveys Vol 41 Issue 4 (October 2009) is a special issue on software verification. It looks like you can get to at least one of the papers without an ACM account by searching for "Formal Methods: Practice and Experience".
The tool Frama-C, for which Elazar suggests a demo video in the comments, gives you a specification language, ACSL, for writing function contracts and various analyzers for verifying that a C function satisfies its contract and safety properties such as the absence of run-time errors.
An extended tutorial, ACSL by Example, shows examples of actual C algorithms being specified and verified, and separates the side-effect-free functions from the effectful ones (the side-effect-free ones are considered easier and come first in the tutorial). This document is also interesting in that it was not written by the designers of the tools it describes, so it gives a fresher and more didactic look at these techniques.
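To give a flavor of what such a contract looks like, here is the toy summation from earlier annotated in ACSL (my own example, not one from the tutorial); Frama-C's WP plugin can then attempt to prove that the C code satisfies it:

```c
/* The requires bound keeps n * (n + 1) / 2 within int range. */
/*@ requires 0 <= n <= 10000;
    assigns \nothing;
    ensures \result == n * (n + 1) / 2;
*/
int sum_to(int n)
{
    int s = 0;
    /*@ loop invariant 0 <= i <= n + 1;
        loop invariant s == i * (i - 1) / 2;
        loop assigns i, s;
        loop variant n - i;
    */
    for (int i = 0; i <= n; i++)
        s += i;
    return s;
}
```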
If you are familiar with LISP then you should definitely check out ACL2: http://www.cs.utexas.edu/~moore/acl2/acl2-doc.html
Dijkstra's A Discipline of Programming and his EWDs lay the foundation for formal verification as a science in programming. A simpler work is Wirth's Systematic Programming, which begins with a simple approach to using verification. Wirth uses pre-ISO Pascal for the language; Dijkstra uses an Algol-68-like formalism called Guarded Command Language (GCL). Formal verification has matured since Dijkstra and Hoare, but these older texts may still be a good starting point.
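The central device in Dijkstra's approach is the weakest-precondition transformer; for assignment it is just substitution (the standard definition, not a quotation from either book):

$$wp(x := E,\ P) = P[x := E], \qquad \text{e.g.}\quad wp(x := x + 1,\ x > 0) = (x + 1 > 0) = (x > -1).$$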
The PVS tool, developed at SRI (originally the Stanford Research Institute), is a specification and verification system. I worked with it and found it very useful for theorem proving.
With regard to (1), you will probably have to create a model of the algorithm that "captures" its side effects in program variables intended to model that state.
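A minimal sketch of that modeling idea (entirely my own illustration): a function that mutates hidden state is rewritten so the state becomes an explicit input and output, turning correctness into a pure relation between arguments and results.

```cpp
#include <utility>

// Effectful version: hard to specify, because each call reads and
// writes a hidden global.
int g_counter = 0;
int next_id_impure() { return ++g_counter; }

// Modeled version: the state is threaded through explicitly, so the
// specification becomes, e.g.:
//   ensures result.first == state_in + 1 && result.second == state_in + 1
std::pair<int /*id*/, int /*state_out*/> next_id_pure(int state_in)
{
    const int state_out = state_in + 1;
    return {state_out, state_out};
}
```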