What are the other uses of the "make" command?

A sysadmin teacher once told me that I should learn to use "make" because I could use it for a lot of things other than just triggering compilations.
I never got the chance to talk with him longer about it. Do you have any good examples?
As a bonus: isn't this tool deprecated, and what are the modern alternatives (for compilation and other purposes)?

One excellent thing make can be used for besides compilation is LaTeX. If you're doing any serious work with LaTeX, you'll find make very handy, because .tex files need to be re-interpreted several times when you use BibTeX or a table of contents.
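A minimal sketch of such a Makefile (paper.tex and paper.bib are hypothetical names; it assumes pdflatex and bibtex are installed, and recipes must be indented with a TAB):

paper.pdf: paper.tex paper.bib
	pdflatex paper
	bibtex paper
	pdflatex paper   # pick up the bibliography
	pdflatex paper   # fix cross-references and the table of contents

clean:
	rm -f paper.pdf *.aux *.bbl *.blg *.log *.toc

.PHONY: clean

(Tools like latexmk now automate exactly this rerun-until-stable loop.)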
Make is definitely not deprecated. Although there are different ways of doing the same thing (batch files on Windows, shell scripts on Linux) make works the best, IMHO.

Make can be used to execute any commands you want to execute. It is best used for activities that require dependency checking, but there is no reason you couldn't use make to check your e-mail, reboot your servers, make backups, or anything else.
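As a toy sketch of such an "admin chores" Makefile (the paths and commands are made up for illustration; it assumes rsync and fetchmail exist on the machine, and recipes must be indented with a TAB):

# run as: make backup, make mailcheck, ...
backup:
	rsync -a /home/ /mnt/backup/home/

mailcheck:
	fetchmail || true   # fetchmail exits non-zero when there is no new mail

.PHONY: backup mailcheck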
Ant, NAnt, and msbuild are supposedly the modern alternatives, but plain-old-make is still used extensively in environments that don't use Java or .NET.

isn't this tool deprecated
What?! No, not even slightly. I'm on Linux so I realise I'm not an average person, but I use it almost daily. I'm sure there are thousands of Linux devs who do use it daily.

I remember seeing an article on Slashdot a few years ago describing a technique for optimising the Linux boot sequence using make.
edit:
Here's an article from IBM explaining the principle.
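The principle is easy to sketch: if every init task is a make target that depends only on what it actually needs, make -j is free to start independent tasks in parallel instead of strictly one after another. A toy example (the service names and init-script paths are invented):

# Run as: make -j boot
boot: cron sshd
network: ; /etc/init.d/network start
cron: network ; /etc/init.d/cron start
sshd: network ; /etc/init.d/sshd start
.PHONY: boot network cron sshd

Here cron and sshd both wait for network, but can then start in parallel.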

Make performs a topological sort: given a bunch of things, and a set of requirements that one thing come before another, it finds an order for all of the things such that all of the requirements are met. Building things (programs, documents, distribution tarballs, etc.) is one common use for topological sorting, but there are others.

You can create a Makefile with one entry for every server in your data center, including the dependencies between servers (NFS, NIS, DNS, etc.), and make can tell you the order in which to turn your computers on after a power outage, or the order to turn them off in before one. You can use it to figure out the order in which to start services on a single server. You can use it to figure out what order to put your clothes on in the morning. Any problem where you need to find an order for a bunch of things or tasks that satisfies a set of requirements of the form "A goes before B" is a potential candidate for being solved with make.
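As a sketch of the data-center example (host names invented), each machine becomes a phony target that depends on the machines it needs; make -n all then prints a valid power-on order without running anything, and reversing it gives a shutdown order:

all: webserver mailserver
dnsserver: ; echo power on dnsserver
nisserver: dnsserver ; echo power on nisserver
nfsserver: nisserver ; echo power on nfsserver
webserver: nfsserver ; echo power on webserver
mailserver: dnsserver ; echo power on mailserver
.PHONY: all dnsserver nisserver nfsserver webserver mailserver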

The most random use I've ever seen is make being used in place of bash for init scripts on BCCD. It actually worked decently, once you got over the WTF moment.
Think of make as shell scripts with added oomph.

Well, I'm sure that the UNIX tool "make" is still used a lot, even if it's waning in the .NET world. And while more people may be using MSBuild, Ant, NAnt, and other tools these days, they are essentially just "make" with a different file syntax. The basic concept is the same.
Make tools are handy for anything where an input file is processed into an output file. Write your reports in MS Word, but distribute them as PDFs? Use make to generate the PDFs.
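For instance, with a pattern rule (assuming LibreOffice is installed and on the PATH; the document names are made up, and recipes must be indented with a TAB):

DOCS = report1.docx report2.docx
PDFS = $(DOCS:.docx=.pdf)

all: $(PDFS)

# LibreOffice writes each PDF next to its source file
%.pdf: %.docx
	libreoffice --headless --convert-to pdf --outdir . $<

.PHONY: all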

Another use: rebuilding generated configuration files, triggered from crontab if needed.
I have examples for Postfix maps and for Squid external tables.
Example for /etc/postfix/Makefile (recipes must be indented with a TAB):

POSTMAP=/usr/sbin/postmap
POSTFIX=/usr/sbin/postfix
HASHES=transport access virtual canonical relocated annoying_senders
BTREES=clients_welcome
# ${HASHES:=.db} appends .db to every word in HASHES
HASHES_DB=${HASHES:=.db}
BTREES_DB=${BTREES:=.db}

all: ${BTREES_DB} ${HASHES_DB} aliases.db
	echo \= Done

# static pattern rules: each foo.db is rebuilt from foo when foo changes
${HASHES_DB}: %.db: %
	echo . Rebuilding $< hash...
	${POSTMAP} $<

${BTREES_DB}: %.db: %
	echo . Rebuilding $< btree...
	${POSTMAP} $<

aliases.db: aliases
	echo . Rebuilding aliases...
	/usr/bin/newaliases

...and so on.
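To pick up configuration changes automatically, a crontab entry can simply run make, which rebuilds only the maps whose source files have changed (the schedule here is arbitrary):

# /etc/crontab: rebuild postfix maps every 10 minutes, quietly
*/10 * * * * root make -s -C /etc/postfix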

Related

Run several 'make' commands at the same time?

(make noob here)
I did the following:
Configured a C++ project with CMake.
In one terminal tab, ran make to start building the whole project.
Got bored of waiting for the whole thing to build, and figured I could just make the subfolder I'm working on at the moment.
Without stopping the ongoing build in the first tab, opened a second tab and ran make from said subfolder.
Things looked pretty normal for a short while, then suddenly the second tab started displaying build output related to the whole project, not only to the subfolder. I figured what I tried didn't work as I expected, so I Ctrl-C'd the second tab.
That's when the weirdest thing happened: in the first tab, the build output was mixed with lines coming from the specific folder I wanted to build. And the build went way past 100%, up to 128%!
My question is: what exactly does 'make' do when launched more than once at the same time?
Am I correct to think that the multiple make commands were somehow "merged" into the same process?
This is more a question about the makefiles that CMake creates and how they work. It's these makefiles which do things like track the percent complete, etc., not make itself. It's quite possible that by starting a second build in the same directory, you've messed up whatever facilities the CMake makefiles use to track progress.
The short answer is no: it's not possible for one invocation of make to somehow "take over" or merge with another invocation of make. As far as each make process is concerned, the other doesn't exist. However, since both are operating on the same filesystem, if one make writes files in a way that confuses the other, you can see strange behaviors.
The cmake-generated makefiles are very complex; I've never actually tried to understand completely how they work. I've always thought it a shame that no one has tried to implement a CMake "GNU Makefile" generator, in addition to the portable "Unix Makefiles" generator, one that takes full advantage of GNU make features. I'm sure the results would be easier to read and probably faster. But it seems unlikely this will ever happen; CMake users who care more about speed than portability are probably just switching to Ninja as a generator.
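For reference, switching to the Ninja generator is just a matter of (assuming cmake and ninja are installed):

cmake -G Ninja /path/to/source   # generates build.ninja instead of Makefiles
ninja                            # or: cmake --build .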

Combine a set of shell scripts with internal dependencies into one?

I'm developing a set of shell scripts and for ease of development, the functions are often split across various files.
So the final scripts, which I expect the end user to run, require all the relevant "library" scripts to be installed in the right locations.
I am trying to find a way that lets me develop the scripts with the same logical split across files, but then merge them all into a single script for distribution.
In the naive case, it would recursively go through all the sourced files and include them in the same file (similar to the pre-processing step in C compilers). The more involved version would also identify which functions are unused and trim them out.
Does anything like this exist? If not, I might consider writing it, but I would be happy to hear about potential pitfalls that I should account for.
I have seen this before, in Arch Linux's devtools repo. They use m4 to process .in files.
It's just templating, though, and you might not need anything more.
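If plain make plus standard tools are enough, the naive "pre-processing" variant can be sketched in one rule (the file names are hypothetical; this only handles simple, top-level `source lib/foo.sh` lines, not nested or conditional sourcing, and it does no dead-function trimming):

# Bundle main.sh and everything it sources into one script.
# The recipe lines must be indented with a TAB; $$2 escapes awk's $2 from make.
bundle.sh: main.sh lib/*.sh
	awk '/^source /{ system("cat " $$2); next } { print }' main.sh > $@
	chmod +x $@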

Recommended tool to automate complicated build procedure

I am developing an OS for embedded devices that runs bytecode. Basically, a micro JVM.
In the process of doing so, I have gotten to the point where I can compile Java applications to bytecode(-ish), flash that onto, for instance, an Atmega1284P, and run it.
Now I've added support for C applications: I compile and process it using several tools and with some manual editing I eventually get bytecode that runs on my OS.
The process is very cumbersome and heavy and I would like to automate it.
Currently, I am using makefiles for automatic compilation and flashing of the Java applications & OS to devices.
All steps, roughly, for a C application are as follows and consist of consecutive manual steps:
(1) Use Docker to run a Linux container with lljvm that compiles a .c file to a .class file (see also https://github.com/davidar/lljvm/tree/master)
(2) convert the resulting .class file to a jasmin file (https://github.com/davidar/jasmin) using the ClassFileAnalyzer tool (http://classfileanalyzer.javaseiten.de/)
(3) manually edit this jasmin file in a text editor by replacing/adjusting some strings
(4) convert the modified jasmin file to a .class file again using jasmin
(5) put this .class file in a folder where the rest of my makefiles (the ones that already make and deploy the OS and class files from Java apps) can take over.
My current option seems to be to just keep using makefiles, but this is a bit unwieldy (I already have 5 different makefiles, and this would further extend that chain). I've also read a bit about SCons. In essence, I'm wondering which tools, or what approach, people recommend for complicated builds.
Hopefully this may help a bit, but the question as such could probably be the subject of a heated discussion without many helpful results.
As pointed out in the comments by others, you really need to automate the steps, starting from your .c file up to the point where you can integrate the result with the rest of your system.
There is generally nothing wrong with make, and you would not win much by switching to SCons; you'd mostly get more ways to express what you want to do. Among other things, if you wanted to write that automation directly inside the build system and its rules, you could use Python and not only shell (though if that were a concern, you could just as well call Python code from make). But the essence of target, prerequisite, recipe is still there, along with the need to write the automation for those .c-to-integration steps.
If you really wanted to look into alternative options, Bazel might be of interest to you. The downside is that the initial effort to write the rules to fit your needs could be costly and, depending on the size of your project, might just be too much. On the other hand, once that is done it would be very easy to use (applying those rules to a growing code base), and you could also ditch the container and rely on Bazel's more lightweight sandboxing and external rules to get the tools and bits you need for your build... all with a single system for build description.
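Whichever tool you pick, the shape of the automation is the same. To make it concrete, here is a rough make sketch chaining the five steps; every command line below is a placeholder (the docker image name, the ClassFileAnalyzer and jasmin invocations, and the fixups.sed script are hypothetical stand-ins for whatever you actually run):

# app.c -> app.class -> app.j -> app.fixed.j -> deploy/app.class
# Recipes must be indented with a TAB.
deploy/app.class: app.c fixups.sed
	docker run --rm -v $(CURDIR):/src lljvm-image /src/app.c   # step 1
	java -jar ClassFileAnalyzer.jar app.class > app.j          # step 2
	sed -f fixups.sed app.j > app.fixed.j                      # step 3, scripted instead of manual edits
	java -jar jasmin.jar -d deploy app.fixed.j                 # steps 4 and 5: output lands where the other makefiles expect it

The point is simply that once each step is a recipe with real file dependencies, make re-runs only what changed, and the manual editing in step 3 becomes a repeatable sed (or similar) script.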

Should I push Makefile.in to git repository?

Using autotools as the build system, should we ship Makefile.in (generated by automake) within the distribution? Running make dist puts Makefile.in in the archive, so should I push Makefile.in to my git repo?
There is no definitive answer to this, just strongly-held opinions.
The traditional view -- and I think I am justified in calling it this, as it was the operative view where Autoconf and Automake were invented -- was that you should check in the generated files. The rationale for this was twofold.
First, it reduced dependencies for development: you could check out a project and run configure without needing to install autoconf and friends. This was especially important in the bad old pre-Linux days, when the tools weren't installed by default and when package managers were just a dream.
Second, because most source changes don't involve changes to the configury, this reduced a possible source of errors where different developers might have different versions of the tools installed.
The check-it-in approach essentially relies on the use of AM_MAINTAINER_MODE. In fact, this is why this mode was invented.
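For reference, it is enabled with a single line in configure.ac; by default this macro disables the automatic regeneration rules unless a developer configures with --enable-maintainer-mode:

AM_MAINTAINER_MODE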
A different view eventually emerged, which was that such files should not be checked in. I think the rationale for this is also twofold.
First, it is cleaner. I'm sure one can find any number of exhortations saying that only editable files should be committed to source control. And, this makes sense -- derived files can be derived; in source control they are just clutter.
Second, it is not uncommon for the generated files to get out of date in the source tree. This happens because developers forget to enable maintainer mode. Checking in just the source files not only avoids this, but also lets other developers catch any bugs that stale generated files would otherwise mask.
This approach pretty much requires avoiding AM_MAINTAINER_MODE.
To sum up, there is no right answer. Some people, in my observation, prefer one of the arguments above; but neither is truly conclusive, in the sense that both approaches have worked well for multiple serious projects over a very long period of time.
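If you go with the don't-check-in approach, the usual complement is a .gitignore covering the generated files (the list below is typical autoconf/automake output; adjust it for your project), with developers running autoreconf --install after cloning to regenerate everything:

# .gitignore: files generated by autotools, rebuilt with `autoreconf --install`
Makefile.in
aclocal.m4
autom4te.cache/
config.h.in
configure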

Make a ruby file unreadable to a user

Can I make a Ruby file (e.g. script.rb) unreadable to a user?
The file is on an Ubuntu (offline) machine. The user will use a local Sinatra app that relies on some Ruby files. I don't want the user to see the code in some of those files.
How can I do it?
EDIT:
Can I setup the project in a way that the user will be able to start the app but won't have access to specific files in it?
Thanks
Does this correspond to what you are searching for?
chmod 711 yourfile.rb
As I said in my comment, it is literally almost impossible to hide the content of your Ruby source file; many people try this in many different ways, but it is almost always trivial to reverse engineer. There are some "suggestions" for making your code hidden, but they never really work. Here are a few:
Obfuscation - The process of making your code executable but unreadable. A tool like ProGuard for Java (there are equivalents for most major languages) will try to make your code a mess, as unreadable as possible, while still maintaining execution speed. Normally this consists of renaming variables, using strange characters, and generally hiding, moving, or wrapping functions in complicated structures.
Package the interpreter - You can use a tool like ocra to package the script up inside an executable together with the interpreter and standard library, but anyone with even a tiny bit of experience in reverse engineering will be able to tear out the source code given a small amount of time.
Write a custom interpreter - Now we are getting somewhere with making it harder. Writing a custom interpreter will allow you to compile your script to a "bytecode" that can then be executed. This is of course a very time consuming, expensive and incompatible solution when it comes to working with other code bases.
Write most of your code in C and then call out to it via extensions - Again, this mostly moves the problem, but it's still there. It will take more time, but anyone can pull apart the machine code of the C library you load in and, Bob's your uncle, they have your source code.
Many more alternatives - This isn't a comprehensive list, I am probably missing a few ideas or suggestions.
As far as making code unreadable goes, it is hard; a better solution might just be to consider providing a licence agreement with your code. That way, if someone reads or modifies the source file, you can take them to court for a legal settlement.
Extract your code and its functionality to an external API. And then provide it as a service. This way you don't have to expose your source code to your 'users'.
