I'm working on fixing/redoing the makefile(s) for a legacy system, and I keep banging my head against some things. This is a huge software system, consisting of numerous executables, shared objects, and libraries, using C, Fortran, and a couple of other compilers (such as Motif's UIL compiler). I'm trying to avoid recursive make and would prefer "one makefile to build them all" rather than the existing "one makefile per executable/.so/.a" approach. (We call each executable, shared object, library, et al. a "task," so permit me to use that term as I go forward.)
The system is scattered across tons of different source subdirectories, with includes usually in one of 6 main directories, though not always.
What would be the best way to deal with the plethora of source & target directories? I would prefer including a Task.mk file for each task that would provide the task-specific details, but target-specific variables don't give me enough control. For example, they don't allow me to change the source & target directories easily, at least from what I've tried.
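Roughly what I was hoping each Task.mk could look like (all of the names below are invented, just to show the shape):
# Task.mk for one task; the top-level makefile would do something like: include */Task.mk
# (everything here is a guess at the information each task needs to supply)
TASK   := server1
SRCDIR := src/server1
OBJDIR := build/server1
SRCS   := main.c comms.c io.f
LIBS   := -lXm -lXt -lX11
The top-level makefile would then generate the usual compile and link rules from those settings, but I haven't found a clean way to give each task its own SRCDIR/OBJDIR using only target-specific variables.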
Some things I cannot (i.e., am not allowed to) do include:
Change the structure of the project. There's too much that would need to be changed everywhere to pull that off.
Use a different make. The official configuration (which our customer has guaranteed, so we don't need to deal with unknown configurations) uses gmake 3.81, period.
Related
At work we are using Docker and docker-compose. Our developers need to start many containers locally and import a large database; there are many services that need to run together for development to be successful and easy.
So we essentially define reusable functions as make targets to make the code easier to maintain. Is there another way to define and reuse many shell commands that is better than make?
For us, due to network limitations, running Docker locally is the only option.
We managed to solve this challenge and make our developers' lives easier by abstracting complex shell commands away behind multiple make targets, and in order to organize the numerous targets that control our Docker infrastructure and containers, we decided to split them among many files with a .mk extension.
There are multiple make targets, around 40 of them; some are low level, and some are meant to be called by developers to do certain tasks, for example:
make launch_app
make import_db_script
make build_docker_images
But lately things are starting to become a little slow. With make targets calling other make targets internally, each make invocation takes a significant amount of time, since every lower-level make call has to go through all of the defined .mk files and do some calculations (as running make -d shows), so it starts to add up to a considerable overhead.
Is there any way to manage a set of complex shell commands using something other than make, while still keeping it easy for our developers to call?
Thanks in advance.
Well, you could always just write your shell commands in a shell script instead of a makefile. Using shell functions, shell variables, etc. it can be managed. You don't give examples of how complex your use of make constructs is.
StackOverflow is not really a place to ask open-ended questions like "what's the best XYZ". So instead I'll treat this question as, "how can I speed up my makefiles".
To me it sounds like you just have poorly written makefiles. Again, you don't show any examples, but it sounds like your rules are invoking lots of sub-makes (e.g., your rule recipes run $(MAKE), etc.). That means lots of processes invoked, lots of makefiles parsed, etc. Why don't you just have a single instance of make and use prerequisites, instead of sub-makes, to run other targets? You can still split the makefiles up into separate files, then use include ... to gather them all into a single instance of make.
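For example (the target names below are invented, since we haven't seen your makefiles), instead of a recipe that re-invokes make for each step:
# hypothetical example of the sub-make pattern described above
launch_app:
	$(MAKE) build_docker_images
	$(MAKE) import_db_script
	docker-compose up -d
you can let a single instance of make do all of the work with include and prerequisites:
# top-level Makefile: read every split-out .mk file once, into one instance of make
include $(wildcard *.mk)

# launch_app now depends on the other targets; make runs them itself instead of
# spawning sub-makes that each have to re-parse all of the .mk files
.PHONY: launch_app
launch_app: build_docker_images import_db_script
	docker-compose up -d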
Also, if you don't need to rebuild the makefiles themselves you should be sure to disable the built-in rules that might try to do that. In fact, if you are just using make to run docker stuff you can disable all the built-in rules and speed things up a good bit. Just add this to your makefile:
MAKEFLAGS += -r
(see Options Summary for details of this option).
ETA
You don't say what version of GNU make you're using, or what operating system you're running on. You don't show any examples of the recipes you're using so we can see how they are structured.
The problem is that your issue, "things are slow", is not actionable, or even defined. As an example, the software I work on every day has 41 makefiles containing 22,500 lines (generated by CMake, which means they are not as efficient as they could be: they are generic makefiles and don't use GNU make features). The time it takes for my build to run when there is nothing to actually do (so basically the entire time is taken by parsing the makefiles) is 0.35 seconds.
In your comments you suggest you have 10 makefiles and 50 variables... I can't imagine how any detectable slowness could be caused by this size of makefile. I'm not surprised, given this information, that -r didn't make much difference.
So, there must be something about your particular makefiles which is causing the slowness: the slowness is not inherent in make. Obviously we cannot just guess what that might be. You will have to investigate this.
Use time make launch_app. How long does that take?
Now use time make -n launch_app. This will read all makefiles but not actually run any commands. How long does that take?
If make -n takes no discernible time then the issue is not with make, but rather with the recipes you've written, and switching to a different tool to run those same recipes won't help.
If make -n takes a noticeable amount of time then something in your makefiles is slow. You should examine them for uses of $(shell ...) and possibly $(wildcard ...); those are where the slowness will happen. You can add $(info ...) statements around them to get output before and after they run: maybe they're running lots of times unexpectedly.
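For example (the git command here is only a stand-in for whatever your $(shell ...) calls actually do), you can bracket a suspect call and watch when the two messages appear as make reads your files:
$(info starting expensive shell call)
# the git command is just an example of an expensive $(shell ...) call
GIT_SHA := $(shell git rev-parse --short HEAD)
$(info finished expensive shell call)
A long pause between the two messages, or the messages appearing more often than you expect (for instance because the file containing them is included more than once), points you at the culprit.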
Without specific examples of things that are slow, there's nothing else we can do to help.
I am new to COBOL and to the AS/400 IBM iSeries world, and I am interested in the best practices used in this community.
How do people perform batch compilation of all COBOL members, or multiple members at once?
I see that AS/400 source files are organized in libraries, file objects, and members. How do you detect which file members are COBOL source code to be compiled? Can you get the member type (which should be CBL in this case) via some qshell command?
Thanks,
Akku
Common PDM manual method:
Probably the simplest and most widely used method would be to use PDM (Program Development Manager) in a "green-screen" (5250-emulation) session. This allows you to manually select every program you wish to compile. It may not be quite the answer you were looking for, but it may be the most widely used, due to its simple approach and because it leaves decisions in the developer's hands.
People commonly use the Start PDM command STRPDM, which provides a menu where you can select an option to work with lists of Libraries, Objects (Files), or Members. (Personally, I prefer to use the corresponding commands directly, WRKLIBPDM, WRKOBJPDM, or WRKMBRPDM.) At each of these levels you can filter the list by pressing F17 (shift F5).
F18 (shift F6) allows you to set the option to Compile in Batch. This means that each individual compile will be submitted to a job queue, to compile in its own job. You can also specify which job description you would like to use, which will determine which job queue the jobs are placed on. Some job queues may be single threaded, while others might run multiple jobs at once. You can custom-define your own PDM options with F16.
If you choose to start at the library level, you can enter option 12 next to each library whose objects (source files) you wish to work with.
At the object level, you would want to view only objects of type *FILE with attribute 'PF-SRC' (or conceivably 'PF38-SRC'). You can then enter option 12 beside any source file whose members you wish to work with.
At the member level, you might want to filter to type *CBL* because (depending on how things have been done on your system) COBOL members could be CBL, CBLLE, SQLCBL, SQLCBLE, or even System/38 or /36 variants. Type option 14 (or a custom-defined option) next to each member you wish to compile. You can repeat an option down the list with F13 (shift F1).
This method uses manual selection, and does not automatically select ALL of your COBOL programs to be compiled. But it does allow you to submit large numbers of compiles at a time, and it relies on programmer discretion to determine which members to select and what options to use.
Many (if not most) developers on IBM i are generally not very familiar with qshell. Most of us write automation scripts in CL. A few renegades like myself may also use REXX, but sadly this is rare. It's not too often that we would want to re-compile all programs in the system. Generally we only compile programs that we are working with, or select only those affected by some file change.
Compiling everything might not be a simple problem. Remember some libraries or source files might simply be archival copies of old source, which you might not really want to compile, or that might not compile successfully anymore. You would want to distinguish which members are COBOL copybooks, rather than programs. With ILE, you would want to distinguish which members should be compiled as programs, modules, or service programs. You may need to compile modules before compiling programs that bind with them. Those modules might not necessarily have been written in COBOL, or COBOL modules might be bound into ILE programs in other languages, perhaps CL or RPG.
So how would a full system recompile be automated in a CL program? You could get a list of all source files on the system with DSPOBJD *ALL/*ALL *FILE OUTPUT(*OUTFILE) OUTFILE( ___ ). The output file contains a file attribute column to distinguish which objects are source files. Your CL program could read this, and for each source file, it could generate a file of detailed member information with DSPFD &lib/&file TYPE(*MBR) OUTPUT(*OUTFILE) OUTFILE( ___ ). That file contains member type information, which could help you determine which members are COBOL. From there you could use RTVOBJD to figure out whether it was a program, module, and/or service program.
You may also need to know the options for how individual programs, modules, or service programs were compiled. I often solve this by creating a source file, which I generally call BUILD, with a member for each object that needs special handling. This member could be CL, but I often use REXX. In fact I might be tempted to do the whole thing in REXX for its power as a dynamic interpreted language. But that's just me.
I know that when linking to multiple static libraries or object files, the order matters (dependent libraries should be listed before their dependencies). I want to know whether, when creating a library file with ar, this same rule applies and the order within the library matters, or whether within the same .a file it doesn't make a difference.
I am packing 200+ object files with a complicated dependency graph, and doing
ar rcs mylib.a objs/*.o
is considerably easier than listing them in the correct order.
I am using gcc, if it makes a difference.
The order within the library used to matter, a long time ago.
It no longer matters on any UNIX system from the last ~15-20 years. From man ranlib:
An archive with such an index speeds up linking to the library
and allows routines in the library to call each other without
regard to their placement in the archive.
Most non-ancient UNIX systems either produce the __.SYMDEF (which contains the above index) automatically while building the archive library, or build it in memory at link time.
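If you want to see the index for yourself (assuming GNU binutils, and reusing the file names from your question):
# the 's' modifier asks ar to build the symbol index when creating the archive
ar rcs mylib.a objs/*.o
# print the archive index; an "Archive index:" listing means the linker can find
# any symbol regardless of where its member sits inside mylib.a
nm --print-armap mylib.a | head
# ranlib mylib.a would (re)create the index if it were missing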
I have worked with OpenCL on a couple of projects, but have always written the kernel as one (sometimes rather large) function. Now I am working on a more complex project and would like to share functions across several kernels.
But the examples I can find all show the kernel as a single file (very few even call secondary functions). It seems like it should be possible to use multiple files - clCreateProgramWithSource() accepts multiple strings (and combines them, I assume) - although pyopencl's Program() takes only a single source.
So I would like to hear from anyone with experience doing this:
Are there any problems associated with multiple source files?
Is the best workaround for pyopencl to simply concatenate files?
Is there any way to compile a library of functions (instead of passing in the library source with each kernel, even if not all are used)?
If it's necessary to pass in the library source every time, are unused functions discarded (no overhead)?
Any other best practices/suggestions?
Thanks.
I don't think OpenCL has a concept of multiple source files in a program - a program is one compilation unit. You can, however, use #include and pull in headers or other .cl files at compile time.
You can have multiple kernels in an OpenCL program - so, after one compilation, you can invoke any of the set of kernels compiled.
Any code not used - functions, or anything statically known to be unreachable - can be assumed to be eliminated during compilation, at some minor cost to compile time.
In OpenCL 1.2 you can link different object files together.
I compiled my code using the make utility and got the binaries.
I compiled the code again with a few changes in the makefile (-j inserted at some points) and got a slight difference in the binaries. The difference was reported by "Beyond Compare". To check further, I compiled the code again without my changes to the makefile and found that the binaries still differ.
Why is it that the same code compiled at different times results in slightly different (in size and content) binaries? How should I check whether the changes I have made are legitimate and the binaries are logically the same?
Do ask me for any further explanation.
You haven't said what you're building (C, C++ etc) but I wouldn't be surprised if it's a timestamp.
You could find out the format for the binary type you're building (which will depend on your operating system) and see whether it makes sense for there to be a timestamp in the place which is changing.
It's probably easiest to do this on a tiny sample program which will produce a very small binary, to make it easier to work out what everything means.
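For example (the paths below are just placeholders for your two builds), you can dump every section of both binaries and diff the result; an embedded timestamp usually shows up as a small, human-readable difference:
# 'old/myprog' and 'new/myprog' stand in for your two build outputs
objdump -s old/myprog > /tmp/old.dump
objdump -s new/myprog > /tmp/new.dump
diff /tmp/old.dump /tmp/new.dump
The first line of each dump names the input file, so expect that line to differ as well.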
ELF object files contain a timestamp for when they are compiled. Thus, you can expect a different object file each time you compile (on Linux or Solaris). You may find the same with other object file formats too.