How can I interleave two lists in a makefile?

In one place in a makefile I need
a.ml a.mli b.ml b.mli c.ml c.mli etc.
In another place, I need
a.mli b.mli c.mli etc.
Without duplication, can I define two separate, equal-length lists (one of the .ml files and one of the .mli files), and then define another list to be the interleaving of the two?
In fact, since there is always a .ml file and a corresponding .mli, can I generate all of this from just a list of filenames with no extensions (i.e. a b c etc.)?

There are several ways to do this. This is probably the most general:
LIST := a b c
MLLIST := $(addsuffix .ml,$(LIST))     # a.ml b.ml c.ml
MLILIST := $(addsuffix .mli,$(LIST))   # a.mli b.mli c.mli
both = $(1).ml $(1).mli                # expands one stem into its .ml/.mli pair
BOTHLIST := $(foreach x,$(LIST),$(call both,$(x)))   # a.ml a.mli b.ml b.mli c.ml c.mli
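A quick way to verify the result is $(info), which prints while the makefile is being read:
$(info $(BOTHLIST))   # prints: a.ml a.mli b.ml b.mli c.ml c.mli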

If you get a copy of the GNU Make Standard Library (http://gmsl.sourceforge.net/), you can use its pairmap function, which I believe does what you're looking for.
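A minimal sketch of what that might look like, assuming GMSL is installed where include can find it (join-pair is a hypothetical helper name, not part of GMSL):
include gmsl
join-pair = $(1) $(2)   # called once per pair of elements
BOTHLIST := $(call pairmap,join-pair,$(MLLIST),$(MLILIST))   # a.ml a.mli b.ml b.mli c.ml c.mli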

Related

Count elements of a list using a pattern

I have some questions; can anybody show me how to solve them?
1) How do I count the elements of a list at a specific level, independently of the sort of expression?
That is, the total number of elements across the subsets.
For example, {{1,2,3,4,5},{5,6,7,8,9}} should yield 10 when counting the elements inside the level-1 sublists.
I've been trying to do this with Count[], but it doesn't work: I can count the elements separately by specifying a concrete pattern, but I can't use a multi-pattern (see below), and if I specify _ it also counts the lists above the target level.
2) How can I apply a test like NumberQ or EvenQ to the contents of a list used as a function argument (i.e. the list must contain specific Mathematica expressions), and how can I create a multi-pattern like f[x_List?NumberQ or EvenQ, y_List?NumberQ or EvenQ]?
Thanks.
The first question has already been answered in comments.
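For reference, one common approach (not necessarily the one from the comments) is to count with an explicit level specification, or to flatten one level and take the length:
list = {{1, 2, 3, 4, 5}, {5, 6, 7, 8, 9}};
Count[list, _, {2}]        (* counts only the elements inside the sublists: 10 *)
Length[Flatten[list, 1]]   (* equivalent here: 10 *)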
For the second question, the -Q test functions are not automatically threaded over lists
and combined (as there are several ways to combine tests on a list).
You have to do it explicitly.
Here is one way to do it:
f[x_List, y_List] := Join[x, y] /; (And @@ Map[EvenQ, x] && And @@ Map[EvenQ, y])
This syntax defines how to compute f if the condition on the right is satisfied; f will remain unevaluated if at least one test does not yield True.
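For example:
In[1]:= f[{2, 4}, {6, 8}]
Out[1]= {2, 4, 6, 8}
In[2]:= f[{1, 2}, {6, 8}]
Out[2]= f[{1, 2}, {6, 8}]  (* left unevaluated: 1 is not even *)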
If the kind of test you need comes up often, you can define auxiliary functions:
(* test if a list contains only even numbers *)
evenlistQ[x_List] := And @@ Map[EvenQ, x]
(* test if a list contains only items satisfying both testA and testB *)
AandBlistQ[x_List] := And @@ Map[testA[#] && testB[#] &, x]
(* test if a list contains only items satisfying at least one of testA or testB *)
AorBlistQ[x_List] := And @@ Map[testA[#] || testB[#] &, x]
(* test if a list either completely satisfies testA or completely satisfies testB *)
AlistOrBlistQ[x_List] := (And @@ Map[testA, x] || And @@ Map[testB, x])
Then you could write
f[ x_List?evenlistQ, y_List?evenlistQ] := Join[x,y]
which would evaluate only if both arguments are lists satisfying your requirements, and would be left unevaluated otherwise.
This last form is equivalent to
f[ x_List, y_List] := Join[x,y] /; (evenlistQ[x]&& evenlistQ[y])
which allows more flexibility in verifying the constraints.
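For instance, the /; form can also express constraints relating the two arguments to each other, which per-argument ?-tests cannot (a sketch reusing evenlistQ from above; g is a hypothetical name):
g[x_List, y_List] := Join[x, y] /; (Length[x] == Length[y] && evenlistQ[x] && evenlistQ[y])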

Using make target name in generated prerequisite

All I am wanting to do is something along the lines of...
some_file.out : ..... ver_$(basename $@).ver
.....
The $@ does not expand as expected in the rule header. Inside the body of the rule, all uses of ver_$(basename $@).ver expand as desired. How would I modify this to make it work as desired?
You have ellipses eliding too many important parts of your example to provide a full solution. However, one option is to use static pattern rules, like this:
some_file.out : %.out : ver_%.ver
...
If that isn't sufficient you can use secondary expansion, but that's a bigger hammer.
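For completeness, here is a minimal sketch of the secondary-expansion approach (the doubled $$ is needed so that $@ survives the first expansion; the prerequisite then expands to ver_some_file.ver):
.SECONDEXPANSION:
some_file.out : ver_$$(basename $$@).ver
...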

Difference between ending a `define` with or without an equal mark in a Makefile?

In a makefile, what's the difference between writing
define VAR =
...
...
endef
and writing
define VAR
...
...
endef
Notice that the latter is missing the = on the define line. Both are accepted by GNU make, but they don't appear to exhibit the same behavior (I find that the latter does what I want).
When should you use which form?
There is no functional difference between them: both create recursive, multi-line variable values.
There is ONE difference, though: the former version (with the equals sign) was introduced in GNU make 3.82 (released in 2010). If you're still using a version of GNU make older than that, then this statement:
define FOO =
bar
endef
creates a variable named FOO =, not FOO (which is probably why it doesn't appear to work for you).
The ability to add an assignment operator here is really so you can use the other operators, such as :=, ?=, and +=, with multi-line variables, which you previously could not do.
But the default, if no operator is specified, is and always has been to create a normal recursive variable.
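To illustrate the difference the operator makes (a sketch; requires GNU make 3.82+ for the := form, and X is a placeholder variable):
X := one
define RECURSIVE
value is $(X)
endef
define SIMPLE :=
value is $(X)
endef
X := two
$(info $(RECURSIVE))   # prints "value is two": expanded each time it is used
$(info $(SIMPLE))      # prints "value is one": expanded once, at definition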

Fixing Combinatorica redefinition of Element

My code relies on a version of Element which works like MemberQ, but when I load Combinatorica, Element gets redefined to work like Part. What is the easiest way to fix this conflict? Specifically, what is the syntax to remove Combinatorica's definition from DownValues? Here's what I get for DownValues[Element]:
{HoldPattern[Combinatorica`Private`a_List \[Element] {Combinatorica`Private`index___}] :>
   Combinatorica`Private`a[[Combinatorica`Private`index]],
 HoldPattern[Private`x_ \[Element] Private`list_List] :>
   MemberQ[Private`list, Private`x]}
If your goal is to prevent Combinatorica from installing the definition in the first place, you can achieve this result by loading the package for the first time thus:
Block[{Element}, Needs["Combinatorica`"]]
However, this will almost certainly make any Combinatorica features that depend upon the definition fail (which may or may not be of concern in your particular application).
You can do several things. Let us introduce a convenience function
ClearAll[redef];
SetAttributes[redef, HoldRest];
redef[f_, code_] := (Unprotect[f]; code; Protect[f])
If you are sure about the order of definitions, you can do something like
redef[Element, DownValues[Element] = Rest[DownValues[Element]]] (* drops the first rule, which above is Combinatorica's *)
If you want to delete definitions based on the context, you can do something like this:
redef[Element, DownValues[Element] =
   DeleteCases[DownValues[Element],
     rule_ /; Cases[rule,
         x_Symbol /; (StringSplit[Context[x], "`"][[1]] === "Combinatorica"),
         Infinity, Heads -> True] =!= {}]]
You can also use a softer way - reorder definitions rather than delete:
redef[Element, DownValues[Element] = RotateRight[DownValues[Element]]]
There are many other ways of dealing with this problem. Another one (which I already recommended) is to use UpValues, if this is suitable. The last one I want to mention here is to make a kind of custom dynamic scoping construct based on Block, and wrap it around your code. I personally find it the safest variant in case you want strictly your definition to apply, because it does not care about the order in which the various definitions could have been created: it removes all of them and adds just yours.

It is also safer in that, outside those places where you want your definitions to apply (by "places" I mean parts of the evaluation stack), other definitions will still apply, so this seems to be the least intrusive way. Here is how it may look:
elementDef[] := Element[x_, list_List] := MemberQ[list, x];
ClearAll[elemExec];
SetAttributes[elemExec, HoldAll];
elemExec[code_] := Block[{Element}, elementDef[]; code];
Example of use:
In[10]:= elemExec[Element[1,{1,2,3}]]
Out[10]= True
Edit:
If you need to automate the use of Block, here is an example package to show one way this can be done:
BeginPackage["Test`"]
var;
f1;
f2;
Begin["`Private`"];
(* Implementations of your functions *)
var = 1;
f1[x_, y_List] := If[Element[x, y], x^2];
f2[x_, y_List] := If[Element[x, y], x^3];
elementDef[] := Element[x_, list_List] := MemberQ[list, x];
(* The following part of the package is defined at the start and you don't
touch it any more, when adding new functions to the package *)
mainContext = StringReplace[Context[], x__ ~~ "Private`" :> x];
SetAttributes[elemExec, HoldAll];
elemExec[code_] := Block[{Element}, elementDef[]; code];
postprocessDefs[context_String] :=
Map[
ToExpression[#, StandardForm,
Function[sym,DownValues[sym] =
DownValues[sym] /.
Verbatim[RuleDelayed][lhs_,rhs_] :> (lhs :> elemExec[rhs])]] &,
Select[Names[context <> "*"], ToExpression[#, StandardForm, DownValues] =!= {} &]];
postprocessDefs[mainContext];
End[]
EndPackage[]
You can load the package and look at the DownValues for f1 and f2, for example:
In[17]:= DownValues[f1]
Out[17]= {HoldPattern[f1[Test`Private`x_,Test`Private`y_List]]:>
Test`Private`elemExec[If[Test`Private`x\[Element]Test`Private`y,Test`Private`x^2]]}
The same scheme will also work for functions not in the same package. In fact, you could separate the bottom part (the code-processing package) into a package of its own, import it into any other package where you want to inject Block into your functions' definitions, and then just call something like postprocessDefs[mainContext], as above. You could make the function that creates the definitions inside Block (elementDef here) an extra parameter to a generalized version of elemExec, which would make this approach more modular and reusable.
If you want to be more selective about the functions where you want to inject Block, this can also be done in various ways. In fact, the whole Block-injection scheme can be made cleaner then, but it will require slightly more care when implementing each function, while the above approach is completely automatic. I can post the code which will illustrate this, if needed.
One more thing: for the less intrusive nature of this method you pay a price - dynamic scoping (Block) is usually harder to control than lexically-scoped constructs. So, you must know exactly the parts of the evaluation stack where you want it to apply. For example, I would hesitate to inject Block into the definition of a higher-order function that takes other functions as parameters, since those functions may come from code that assumes other definitions (for example, Combinatorica functions relying on the overloaded Element). This is not a big problem, it just requires care.
The bottom line seems to be: avoid overloading built-ins if at all possible. In this case you faced the definitions clash yourself, but it would be even worse if the one who faces the problem is a user of your package (maybe yourself a few months later) who wants to combine your package with another one (which happens to overload the same system functions as yours). Of course, it also depends on who the users of your package will be - only yourself, or potentially others as well. But in terms of design, and in the long term, you may be better off assuming the latter scenario from the start.
To remove Combinatorica's definition, use Unset or the equivalent =. form. You can grab the pattern to unset from the DownValues output you show in the question:
Unprotect[Element];
Element[a_List, {index___}] =.
Protect[Element];
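You can then check that only the MemberQ-style rule remains (assuming the DownValues shown in the question):
DownValues[Element]
(* {HoldPattern[Private`x_ \[Element] Private`list_List] :> MemberQ[Private`list, Private`x]} *)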
The worry would be, of course, that Combinatorica depends internally on this ill-conceived redefinition, but you have reason to believe this is not the case, as the Information output from the redefined Element says:
The use of the function Element in Combinatorica is now obsolete, though the function call Element[a, p] still gives the pth element of nested list a, where p is a list of indices.
HTH
I propose an entirely different approach than removing Element from DownValues. Simply use the full name of the Element function.
So, if the original is
System`Element[]
the default is now
Combinatorica`Element[]
because of loading the Combinatorica Package.
Just explicitly use
System`Element[]
wherever you need it. Of course check that System is the correct Context using the Context function:
Context[Element]
This approach ensures several things:
The Combinatorica package will still work in your notebook, even if the package is updated in the future
You won't have to redefine the Element function, as some have suggested
You can use the Combinatorica`Element function when needed
The only downside is having to explicitly write it every time.

When using exuberant-ctags, what options do you use?

Using exuberant-ctags 5.8 for gcc 4.4.3 c89
I have just started using exuberant-ctags and I am wondering what options you add.
Here is the list of kinds; I am wondering whether adding too many could be overkill.
$ ctags --list-kinds=c
c classes
d macro definitions
e enumerators (values inside an enumeration)
f function definitions
g enumeration names
l local variables [off]
m class, struct, and union members
n namespaces
p function prototypes [off]
s structure names
t typedefs
u union names
v variable definitions
x external and forward variable declarations [off]
I was going to use the following:
ctags -e --c-kinds=+defgpstux -R
I am just wondering: is that overkill?
c classes No -- I don't have any classes as this is c
d macro definitions YES -- I have many macros
e enumerators (values inside an enumeration) YES
f function definitions YES
g enumeration names YES
l local variables [off] NO
m class, struct, and union members NO
n namespaces NO
p function prototypes [off] YES
s structure names YES -- is there any difference from m?
t typedefs YES
u union names YES
v variable definitions NO
x external and forward variable declarations [off] YES
I wouldn't say it is overkill; I would turn on m though (struct and union member searching is very good).
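Concretely, that just means adding m to the kinds string from the question (everything else unchanged):
ctags -e --c-kinds=+defgmpstux -R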
Ctags in general is good if you are working from the command line or with an editor that supports it (gvim, for example). If you really want advanced features, I'd recommend going for a good IDE; there are some things you simply can't do directly with ctags (such as call hierarchy, or refactoring) which a good IDE with good C/C++ indexing support will give you.
I don't think any of these are overkill; however, you might want to investigate CScope to 'take it to the next level'. It seems like you might be squeezing the maximum you'll be able to get out of ctags, and that's where CScope picks up.
