Under FreeBSD, for some odd reason, every time I run make on a simple Makefile, it tries to create an obj directory under the current working directory.
I thought it might have to do with the default .OBJDIR for FreeBSD make, but setting .OBJDIR: ./ does not change this behavior.
This causes me problems down the line because it conflicts with SConstruct. I managed to work around that by removing all read/write permissions from ./obj, but I still want to know why, every time I run make, it tries to create the ./obj directory if it doesn't exist.
make does not create any ${.OBJDIR} automatically -- something in your Makefiles must be explicitly creating that (or trying to).
Could you post the makefile and/or the output of a make-run?
You can affect the value of the variable by setting MAKEOBJDIRPREFIX or MAKEOBJDIR in the environment or on command-line (neither can be set in /etc/make.conf).
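For example, a minimal sketch (assuming /tmp/obj is a writable scratch directory):
# Redirect object directories under /tmp/obj; this must be set in the
# environment or on the command line, never in /etc/make.conf.
env MAKEOBJDIRPREFIX=/tmp/obj make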
See make(1) for more details.
I'm learning CMake and struggling with the add_custom_command function.
I'm sorry if this is a basic question but the online documentation didn't help me much.
For this snippet:
add_executable (creator creator.cpp)
get_target_property (EXE_LOC creator LOCATION) # store the location of the creator executable
add_custom_command(
    OUTPUT ./created.cpp # the file this command produces at the specified path
    DEPENDS creator      # the files/targets on which the command depends
    COMMAND ${EXE_LOC}   # the command to execute
    ARGS ./created.cpp   # the arguments passed to that command
)
add_executable(FOO ./created.cpp)
I can intuitively see what's going on there; however, I do not understand why each instruction in the body of add_custom_command is needed. Here is how I understand it (please correct me where I'm wrong):
Executable creator is created in the current working dir using creator.cpp
EXE_LOC variable is used to store the path of the created executable
add_custom_command:
OUTPUT specifies that a created.cpp file will be created in the current
working directory.
DEPENDS: specifies that this newly created .cpp file depends on the previously created executable. But why do we need to specify this? Is it mandatory, and if not, what happens if I leave it out?
COMMAND: ${EXE_LOC}: This I don't understand. I assume when the script
reaches this point some sort of command will be executed. But what exactly is going to get executed here? ./creator maybe? The documentation specifies that:
If COMMAND specifies an executable target (created by ADD_EXECUTABLE) it will automatically be replaced by the location of the executable created at build time.
But I don't really understand this.
ARGS: I don't understand what this is supposed to do and why we need it.
It really confuses me that we pass the newly created file as an argument to a command whose purpose is to create that particular file.
Please clear this up for me if possible.
Thank you for reading my long post.
It is convenient to think about the COMMAND option as a command line:
First, you type the path to the program to be executed; in the given case, that is the path to the creator executable. Then you type the arguments for that command; here, the only argument is the path to the created.cpp file. This is what will be executed:
<path-to-creator-executable> <path-to-cpp-file>
Option
DEPENDS creator
in this case is described precisely in the CMake documentation for add_custom_command:
If DEPENDS specifies any target (created by the add_custom_target(), add_executable(), or add_library() command) a target-level dependency is created to make sure the target is built before any target using this custom command. Additionally, if the target is an executable or library a file-level dependency is created to cause the custom command to re-run whenever the target is recompiled.
In short, this means that before the creator executable is run as specified in the COMMAND option, it will be built (and rebuilt if out of date).
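To tie this back to the substitution rule quoted in the question, here is a minimal sketch that skips get_target_property entirely (the file and target names are the illustrative ones from the question):
add_executable(creator creator.cpp)

add_custom_command(
    OUTPUT created.cpp           # the file this rule promises to produce
    COMMAND creator created.cpp  # 'creator' names an executable target, so CMake
                                 # substitutes its build-time location automatically
    DEPENDS creator              # (re)build creator before running it
)

add_executable(FOO created.cpp)
At build time this runs exactly the <path-to-creator-executable> <path-to-cpp-file> command line shown above, and FOO is compiled from the generated file afterwards.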
I am learning my way through shell scripts and just created one with vim.
Every file I create with a .sh extension seems to default to non-executable mode for every group shown by ls -l. I have tried creating files with several editors and in several locations, and I always get the same result.
So my question is: I know I can chmod the files so I can execute them, but is there something I can do to create them executable in the first place, rather than changing every single one of them?
As with any UNIX-like system, file creation is affected by the umask, which masks out file access bits when files are created.
You can change the default umask by editing the shell start-up configuration files; however, I wouldn't recommend doing that.
If you want to change it so that all files you create have the executable bit set by default, then what about files that are not executable? I have always worked on shell scripts with the edit, chmod, run cycle and I don't feel it's a big problem.
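For illustration, a sketch of that cycle (the script name is hypothetical); note that the umask can only take bits away from the 0666 mode editors request when creating a file, so no umask value will make new files executable by default:
umask                 # prints the current mask, typically 0022
vim deploy.sh         # created with mode 0666 & ~0022 = 0644 (rw-r--r--)
chmod +x deploy.sh    # add the execute bits explicitly
./deploy.sh           # now it runs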
There are C files in a directory, and I have a Makefile.
I usually use the Makefile to compile.
I have been wondering about the role of 'make clean'.
As far as I can tell, 'make clean' just removes files.
Even though I didn't use 'make clean', errors and warnings still showed up when something was wrong.
I cannot see why I need to use 'make clean' whenever I change a source file.
make is a utility that automatically determines which pieces of a large program need to be recompiled, and issues the commands to recompile them.
To prepare to use make, you must write a file called the makefile that describes the relationships among the files in your program and states the commands for updating each file.
Once a suitable makefile exists, each time you change some source files, this simple shell command:
make
suffices to perform all necessary recompilations. The make program uses the makefile data base and the last-modification times of the files to decide which of the files need to be updated.
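For example, given a hypothetical Makefile like this (recipe lines must start with a tab), editing util.c and running make recompiles only util.o and relinks prog, leaving main.o untouched:
prog: main.o util.o
	cc -o prog main.o util.o   # relink whenever any object file changes

main.o: main.c defs.h
	cc -c main.c

util.o: util.c defs.h
	cc -c util.c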
We generally use make clean as a generic way to clean up the build, i.e. remove all the compiled object files from the source tree. You can name the target anything you like.
It's convention only. The convention is that clean will return you to a state where all you have is the "source" files. In other words, it gets rid of everything that can be built from something else (objects, executables, listings and so on).
So make clean ; make is expected to build everything from scratch. And, in fact, you'll often find a rule like:
rebuild: clean all
which will do both steps for you.
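A hedged sketch of a conventional clean rule (assuming the build produces an executable prog from object files); declaring the targets .PHONY tells make they are not real files, so they run even if a file named clean happens to exist:
.PHONY: all clean rebuild

all: prog

clean:
	rm -f prog *.o   # delete everything that can be rebuilt from source

rebuild: clean all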
You should never have to do a clean unless you want to (for example) copy just the source files somewhere. If you have to do so after editing a file, then your Makefile is not set up correctly.
And, if you make and get an error, you should get exactly the same error if you subsequently make without fixing said error.
GNU make automatically removes intermediate files created by implicit rules, by calling rm filename at the end. This obviously doesn't work if one of the targets was actually a directory. Take the following example:
.PHONY: all
all: test.target

%.target: tempdir.%
	touch $@

tempdir.%:
	mkdir -p $@
make -n reveals the action plan:
mkdir -p tempdir.test
touch test.target
rm tempdir.test
Is it possible to get GNU make to correctly dispose of intermediate directories? Perhaps by changing rm to rm -rf?
There is no way to make this happen. Although GNU make prints the command "rm", really internally it's running the unlink(2) system call directly and not invoking a shell command. There is no way to configure or modify the command that GNU make runs (except by changing the source code of course).
However, I feel I should point out that it's just not going to work to use a directory as a normal prerequisite of a target. GNU make uses time-last-modified comparison to tell when targets are up to date or not, and the time-last-modified of a directory does not follow the standard rules: the TLM of a directory is updated every time a file (or subdirectory) in that directory is created, deleted, or renamed.
This means you will create the directory, then build a bunch of files that depend on it: the first one is built and has timestamp N; the last one is built and has timestamp N+x, which also sets the directory's timestamp to N+x. The next time you run make, it will notice that the first file has an older timestamp (N) than one of its prerequisites (the directory, at N+x), and rebuild it.
And this will happen forever, until it can build the remaining "out of date" prerequisites fast enough that their timestamp is not newer than the directory.
And, if you were to drop a temporary file or editor backup file or something in that directory, it would start all over again.
Just don't do it.
Some people use an explicit shell command to create directories. Some people create them as a side-effect of the target creation. Some people use order-only prerequisites to ensure they're created on time.
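As a sketch of that last approach, an order-only prerequisite (everything after the |) ensures the directory exists before the recipe runs, while its timestamp is ignored when deciding whether targets are out of date (the directory name here is illustrative):
OBJDIR := build

$(OBJDIR)/%.o: %.c | $(OBJDIR)
	$(CC) -c $< -o $@

$(OBJDIR):
	mkdir -p $@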
I've always mentally regarded the current directory as something for users, not scripts, since it is dependent on the user's location and can be different each time the script is executed.
So when I came across the Java jar utility's -C option I was a little puzzled.
For those who don't know, the -C option is used before specifying a file/folder to include in a jar. Since the path to the file/folder is replicated inside the jar, the -C option changes directories before including the file:
in other words:
jar cf my.jar -C flower lily.class
will make a jar containing the lily.class file, whereas:
jar cf my.jar flower/lily.class
will make a flower folder in the jar which contains lily.class
For a jar-ing script I'm making, I want to use Bourne shell wildcards like folder/*, but that would make using -C impossible, since it only applies to the next immediate argument.
So the only way to use wildcards is to run from the current directory, but I still feel uneasy about changing and relying on the current directory in a script.
Is there any downside to using the current directory in scripts? Is it frowned upon for some reason perhaps?
I don't think there's anything inherently wrong with changing the current directory from a shell script. Certainly it won't cause anything bad to happen, if taken by itself.
In fact, I have a standard script that I use for starting up a Java-based server, and the very first line is:
cd "$(dirname "$0")"
This ensures that the rest of the commands in the script are executed in the directory that contains the script file itself (useful when a single machine is hosting multiple server instances), regardless of where the shell script was actually invoked from. Without changing the current directory in the script, it would only work correctly if the user remembered to manually cd into the corresponding directory before running the script.
In this case, performing the cd operation from within the script removes a manual step from the server startup/shutdown process, and makes things slightly less error-prone as a result.
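Applied to the jar question, a minimal sketch (the directory layout and jar name are illustrative): cd into the folder first, then let the shell expand the wildcard so the class files land at the top of the archive:
#!/bin/sh
cd "$(dirname "$0")" || exit 1  # work relative to the script's own location
cd flower || exit 1             # illustrative directory holding the classes
jar cf ../flowers.jar *.class   # the wildcard expands here; no -C needed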
So as with most things, there are legitimate uses for this sort of thing, and I'm sure there are some questionable ones as well. It really depends upon what's most appropriate for your specific use-case, which is something I can't really comment on... I always just let Maven build my JARs for me.