According to the GNU make documentation, $< refers to the first prerequisite:
$<
The name of the first prerequisite. If the target got its recipe from an implicit rule, this will be the first prerequisite added by the implicit rule (see Implicit Rules).
But, if I have the following:
a: b
a: c
	@echo first prerequisite is: $<
This prints first prerequisite is: c. While that makes sense (it would raise too many sharp sticks if b were considered the first prerequisite), I can't find any documentation to support it, and I'm wondering whether I can rely on this being consistent across other make implementations. (The POSIX standard also does not seem to expand on this.)
POSIX doesn't require that $< is available at all in explicit rules and there are versions of make that don't make it available. So using this is not portable in the first place.
From POSIX:
In an inference rule, the $< macro shall evaluate to the filename whose existence allowed the inference rule to be chosen for the target ... The meaning of the $< macro shall be otherwise unspecified.
Emphasis added. "Inference rule" is the same thing as GNU make's implicit rules (technically it's just suffix rules since POSIX doesn't define pattern rules).
As far as GNU make goes, $< is always the first prerequisite in the rule (by which I mean, the rule that contains the recipe). This is definitely guaranteed and I think there should be text alluding to that in the GNU make manual, but I didn't go look.
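A minimal sketch of that reading (assuming GNU make and the behaviour described above): move the recipe to the other rule and $< follows it.
# same prerequisites, but the recipe is attached to the "a: b" rule instead
a: c
a: b
	@echo first prerequisite is: $<
If the above holds, this prints first prerequisite is: b, because b is the first prerequisite of the rule that carries the recipe.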
I am taking a Compiler Design course in my undergraduate studies. As a part of the learning process, I'd have to develop the compiler for a language.
Can a compiler be written for Bash?
Would it be more difficult than designing a compiler for a regular programming language like C/C++, and thus outright inconceivable, at least for a newbie?
Can a compiler be written for Bash?
Yes. (Existence proof - shc.)
If yes, how?
That's the hard part.
POSIX shell languages are very different to typical programming languages because of the effects of things like backticks, variable substitution, quoting, and so on.
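To illustrate with a made-up snippet: much of what a shell line means is only decided at run time, which is what makes compiling it ahead of time awkward.
cmd="ls"
opts="-l -a"
$cmd $opts                # which program runs, and how its arguments split, is decided at run time
echo "`date` in $HOME"    # backticks and variable substitution are expanded when the line executes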
You could ignore this and implement a "bash like" language, either leaving out the difficult features, or treating them in a way that doesn't conform to POSIX behavior.
Then ... there is the problem of how to generate something that is executable. Again, that is possible (see above), but if your aim is to be faster than a regular shell then you need to do things like emulating the behavior of common Linux commands in the compiled code. That is a huge task.
I'm not saying this is a bad project, but you will need to do a lot of work, including:
finding, reading and (fully) understanding the POSIX shell specs
researching how to implement a parser that deals with POSIX idiosyncrasies
figuring out which Linux commands need to be implemented directly, and
figuring out how to deal with the ones that you don't; e.g. all the complexity of pipelines.
In Ruby, is there a preference for which level of parentheses to elide, or does it depend on the situation (in which case, what guidelines should be followed)? Sources are appreciated.
For example, is either
do_something do_something_else(...)
or
do_something(do_something_else ...)
better than the other?
You want a rule to decide when to omit parentheses and when not, and that rule should be based on the method. (It is cumbersome to base the rule on the context, e.g., always omit the innermost parentheses, or always omit the outermost parentheses, etc.)
There are methods that are usually used only at the outermost level (i.e., they do not become an argument of another method call), whereas few or no methods appear only at the innermost level. Typical examples of the former are DSL methods (methods that are conventionally used without parentheses, like puts and p, can be considered part of the DSL provided by Ruby itself).
Once you decide to base the rule on what the method is, it follows naturally that you would be omitting the outermost parentheses that appear with particular methods.
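For instance (the method names here are made up), the outer DSL-style call goes bare while the ordinary inner call keeps its parentheses:
puts format_name(user)
assert valid_password?(input)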
This is a primarily opinion-based question, but the Ruby Style Guide is a good (best?) reference when style-related questions arise.
Assuming it should be consistent with the rest of the asserts in your tests, and looking at the way asserts are used in Rails tests (i.e. with no parentheses), it would be apt to use
assert method(param1, param2, etc)
Is it because we want to conserve memory space?
Scott's answer is correct, but I want to pitch in with another perspective.
Scheme culture cares a great deal about functional programming. Functional code should not care about ordering of operations. This is, in fact, why an expression like (foo (bar) (baz (qux))) does not say anything about which order those functions will be run, except that:
qux will be run before baz is run
both bar and baz will be run before foo is run
In particular, qux can be run before or after bar, and even a sequence like qux → bar → baz → foo is valid.
For a similar reason, let should be used by default; it signals to the reader that the usual functional assumptions apply, that the bindings can be evaluated in any order. let* should only be used to alert the reader to the unusual behaviour of having bindings depend on previous ones in the same let* form.
let*, in exchange for being more capable, has to serialize the terms being defined (since each can depend on the previous ones), whereas the terms in a let can be set up in any order (and even in parallel, should the architecture allow for that).
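A minimal sketch of the difference (the bindings here are made up):
; let: the bindings are independent and may be evaluated in any order
(let ((x (compute-x))
      (y (compute-y)))
  (+ x y))
; let*: later bindings may refer to earlier ones, so evaluation is serialized
(let* ((x (compute-x))
       (y (* 2 x)))   ; y depends on x, which a plain let would not allow
  (+ x y))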
What is considered the more appropriate style for writing conditionals?
if (1) {
    puts("Hello")
}
or
if (1) puts("Hello")
Opinions on similar aspects of coding style are welcome too.
It all depends on your preference, which is why we rarely see people code in exactly the same style. Moreover, it depends on which programming language you're using. IMHO, the important things in coding are readability and comments, so that when your boss asks other people to help with or develop your code, they will spend the least amount of their time understanding it.
If you ask specifically about your example above, I would prefer the first one, because in my opinion, imagining the whole codebase, it gives better readability. However, some people may argue that it costs time to type those braces over and over.
As per the PSR standards, the body of any control structure must always be enclosed in braces.
The body of each structure MUST be enclosed by braces. This standardizes how the structures look, and reduces the likelihood of introducing errors as new lines get added to the body.
(from the official website)
Please have a look under the control structures section: http://www.php-fig.org/psr/psr-2/
Robert C. Martin offers, in the first chapter of his book 'Clean Code', several definitions of 'clean code' from different well-known software experts. How do you define clean code?
Easy to understand.
Easy to modify.
Easy to test.
Works correctly (Kent Beck's suggestion - very right).
These are the things that are important to me.
Code I'm not afraid to modify.
Code that doesn't require any comments to be easily understood.
Code which reads as close to a human language as possible. I mean this at all levels: from the syntax used, naming conventions, and alignment, all the way to the algorithms used, the quality of comments, and how code is distributed between modules.
The simplest example, regarding naming conventions:
if (filename.contains("blah"))
versus
if (S_OK == strFN.find(0, "blah"))
Part of it depends on the environment/APIs used, but most of it is, of course, the responsibility of the developer.
Point-free Haskell code. (Not really, though.)
Code in which the different modules or classes have clearly defined contracts is a good start.
Code which doesn't break in multiple places when you make a single, seemingly insignificant change. It is also easy to follow the control path of the program.
Reusable code is also important. So what matters is not only the quality of the code, but also where you put it.
For example, business logic placed directly in a controller is code that cannot be reused.