Rules for executing shell commands in a Makefile - shell

When I ran make, I got this error message:
Makefile:4: *** missing separator. Stop.
The command in the Makefile is:
$(shell ./makejce common/jce jce)
What's wrong with it?
-------makejce---------
#!/bin/bash
FLAGS=""
local_protoc=""
dir0=`pwd`
dir=`pwd`
......
if [ $# -gt 1 ]
then
mkdir -p $2
cd $2
dir=`pwd`
cd $dir0
fi
cd $1
jce_dir=`pwd`
#sub dir
for d in `ls -d */`
do
if [ -d $d ]
then
cd $d
for f in `find . -name '*.jce'`
do
${local_protoc} ${FLAGS} --dir=${dir} $f
done
cd $jce_dir
fi
done
#current dir
for f in `ls *.jce`
do
${local_protoc} ${FLAGS} --dir=${dir} $f
done
cd $dir0
-----makefile------
......
$(shell ./makejce common/jce jce)
......

With so little info it looks extremely bizarre (why are you running all the build steps in a shell script and then invoking that script via make's shell function? The entire point of a makefile is to manage the build steps...), but I'll just answer your specific question:
The make shell function works like backticks or $(...) in shell scripts: that is, it runs the command in a shell and expands to the command's standard output.
In your makefile if you have:
$(shell echo hi)
then it runs the shell command echo hi and expands to the stdout (i.e., hi). Then make will attempt to interpret that as some makefile text, because that's where you have put the function invocation (on a line all by itself). That's a syntax error because make doesn't know what to do with the string hi.
If you want to run a command via the shell function just for its side effects, then either (a) redirect its output so the function expands to nothing:
$(shell ...command... >/dev/null 2>&1)
or (b) capture the output somewhere that it won't bother make, such as in a variable like this:
_dummy := $(shell ...command...)
(by using := here we ensure the shell function is evaluated when the makefile is parsed).
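For example, a minimal sketch of option (b) applied to the makefile in the question (the all target and its echo line are placeholders, not part of the original build; the recipe line is indented with a tab):
# run the script at parse time; capture its output so make never tries to parse it
_dummy := $(shell ./makejce common/jce jce)

all:
	@echo "jce files generated"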

Related

Passing a path as an argument to a shell script

I've written a bash script to open a file passed as an argument and write it into another file, but my script works properly only if the file is in the current directory. Now I also need to handle a file that is not in the current directory.
If compile is the name of my script, then ./compile next/123/file.txt should open the file.txt in the passed path. How can I do it?
#!/bin/sh
#FIRST SCRIPT
clear
echo "-----STARTING COMPILATION-----"
#echo $1
name=$1 # Copy the filename to name
find . -iname $name -maxdepth 1 -exec cp {} $name \;
new_file="tempwithfile.adb"
cp $name $new_file #copy the file to new_file
echo "compiling"
dir >filelist.txt
gcc writefile.c
run_file="run_file.txt"
echo $name > $run_file
./a.out
echo ""
echo "cleaning"
echo ""
make clean
make -f makefile
./semantizer -da <withfile.adb
Your code and your question are a bit messy and unclear.
It seems that you intended to find the file given as a parameter to your script, but the find call fails because of how -maxdepth is used.
If you are given next/123/file.txt as an argument, your find gives you a warning:
find: warning: you have specified the -maxdepth option after a
non-option argument -iname, but options are not positional (-maxdepth
affects tests specified before it as well as those specified after
it). Please specify options before other arguments.
Also, -maxdepth limits how deep find will descend before it gives up, and next/123/file.txt is two directories deep, so -maxdepth 1 will never reach it.
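A sketch of a corrected call (the depth of 3 and the destination . are illustrative choices, not the only possible ones): put -maxdepth before the tests, make it deep enough, and match on the basename, since -iname compares against the file name only and a pattern containing / will never match:
name=$1
find . -maxdepth 3 -iname "$(basename "$name")" -exec cp {} . \;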
You are also copying the given file inside the find command, and then copying it again with cp afterwards.
As said, your code is really messy and I don't know what you are trying to do. I will gladly help, if you could elaborate :).
There are some questions that are open:
Why do you have to find the file if you already know its path? Do you always have the whole path given as an argument? Or only part of the path? Only the basename?
Do you simply want to copy a file to another location?
What does your writefile.c do? Does it write the content of your file to another? cp does that already.
I also recommend using CAPITALIZED variable names (as in the script below) and checking the exit status of commands like cp and find to see whether they failed.
Anyway, here is my script that might help you:
#!/bin/sh
#FIRST SCRIPT
clear
echo "-----STARTING COMPILATION-----"
echo "FILE: $1"
[ $# -ne 1 ] && echo "Usage: $0 <file>" 1>&2 && exit 1
FILE="$1" # Copy the filename to name
FILE_NEW="tempwithfile.adb"
cp "$FILE" "$FILE_NEW" # Copy the file to new_file
[ $? -ne 0 ] && exit 2
echo
echo "----[ COMPILING ]----"
echo
dir &> filelist.txt # list directory contents and write to filelist.txt
gcc writefile.c # ???
FILE_RUN="run_file.txt"
echo "$FILE" > "$FILE_RUN"
./a.out
echo
echo "----[ CLEANING ]----"
echo
make clean
make -f makefile
./semantizer -da < withfile.adb
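If the rewritten script is saved as compile (the name used in the question), a usage sketch would be:
chmod +x compile
./compile next/123/file.txt   # copies next/123/file.txt to tempwithfile.adb and runs the build steps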

One child process per for loop in make?

Let me first write a quick Makefile as a showcase:
#!/bin/make -f
folders := $(shell find -mindepth 1 -maxdepth 1 -type d -print)
make_dir:
	@mkdir -p "test0"
pwd_test:
	@cd "test0" && pwd
	@pwd
pwd_all:
	@for f in $(folders); do \
		cd "$${f}" && pwd; \
		pwd; \
		cd ..; \
	done
First do make make_dir and then see the different results:
➜ so make pwd_test
/data/cache/tmp/so/test0
/data/cache/tmp/so
➜ so make pwd_all
/data/cache/tmp/so/test0
/data/cache/tmp/so/test0
You can see that in the for loop it is necessary to do cd .. afterwards. Apparently no child process is spawned for the cd X && pwd command there, while that is normally the case. Is this behaviour specific to make or to my shell?
Make spawns a new shell process for each line of the recipe. Since the for loop is written as one (continued) line, you get only one process.
Take a look at Recipe Execution
Edit:
Each line of a recipe gets its own subshell. A trailing \ tells make that the next line should be part of the current line.
The reason the whole for loop gets a single subshell is that make sees the line as
@for f in $(folders); do cd "$${f}" && pwd; pwd; cd ..; done
MadScientist explains it fairly well. Any command that you can type in your
shell in one line will be executed by make in one subshell or process.
If you were to run this in ksh, ksh would be passed for f in $(folders); do cd "$${f}" && pwd; pwd; cd ..; done and it would run in that one subshell. If ksh did not implement for loops, this would probably fail and make would report that the command returned an error code.
Explanation of pwd_test
pwd_test:
	@cd "test0" && pwd
	@pwd
@cd "test0" && pwd is seen as one line, so that subshell changes its working directory and then prints the new current working directory.
@pwd: at this line make spawns a new subshell that still has the old working directory (the directory make was called from), and pwd prints that directory.
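As a side note, a small variant of pwd_all (a sketch, not from the original post; the pwd_all_subshell target name is made up) shows that you can drop the cd .. by running each cd in an explicit ( ... ) subshell, since the whole loop still runs in the single shell make spawned for that line:
pwd_all_subshell:
	@for f in $(folders); do \
		( cd "$${f}" && pwd ); \
		pwd; \
	done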

Getting directory portion of a list of source files in a for loop

I have the following GNU make script:
for hdrfile in $(_PUBLIC_HEADERS) ; do \
echo $(dir $$hdrfile) ; \
done
The _PUBLIC_HEADERS variable has a list of relative paths, like so:
./subdir/myheader1.h
./subdir/myheader2.h
The output I get from the for loop above is:
./
./
I expect to see:
./subdir/
./subdir/
What am I doing wrong? Note that if I change the code to:
echo $(dir ./subdir/myheader1.h)
it works in this case. I think maybe it has something to do with the for loop but I'm not sure.
You are confusing make variables (and functions) with shell variables in the for loop. $(dir ...) is a make construct that gets expanded by make before the recipe is handed to the shell, so make applies dir to the literal word $hdrfile, which contains no slash, and the result is ./ every time. What you want is for the shell to compute the directory part inside the loop.
What you could do is replace $(dir ...) with the corresponding shell command dirname, which is executed by the shell at run time. So it becomes:
for hdrfile in $(_PUBLIC_HEADERS) ; do \
dirname $$hdrfile ; \
done
This should give the desired directories (note that dirname prints ./subdir without the trailing slash that $(dir ...) would have kept).
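Alternatively (a sketch, not part of the original answer; the show_dirs target is made up), if all you need is the directory part of every entry, make can compute it without any shell loop, because $(dir ...) maps over a whole list:
_PUBLIC_HEADERS := ./subdir/myheader1.h ./subdir/myheader2.h

show_dirs:
	@echo $(dir $(_PUBLIC_HEADERS))
which prints ./subdir/ ./subdir/ (wrap it in $(sort ...) if you also want duplicates removed).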

Quick bash script to run a script in a specified folder?

I am attempting to write a bash script that changes directory and then runs an existing script in the new working directory.
This is what I have so far:
#!/bin/bash
cd /path/to/a/folder
./scriptname
scriptname is an executable file that exists in /path/to/a/folder and (needless to say) I do have permission to run that script.
However, when I run this mind-numbingly simple script (above), I get the response:
scriptname: No such file or directory
What am I missing?! The commands work as expected when entered at the CLI, so I am at a loss to explain the error message. How do I fix this?
Looking at your script makes me think that the script you want to launch is located in the initial directory. Since you change directory before executing it, it won't work.
I suggest the following modified script:
#!/bin/bash
SCRIPT_DIR=$PWD
cd /path/to/a/folder
$SCRIPT_DIR/scriptname
As a quick debugging step, try:
cd /path/to/a/folder
pwd
ls
./scriptname
which'll show you what it thinks it's doing.
I usually have something like this in my useful script directory:
#!/bin/bash
# Provide usage information if not arguments were supplied
if [[ "$#" -le 0 ]]; then
echo "Usage: $0 <executable> [<argument>...]" >&2
exit 1
fi
# Get the executable by removing the last slash and anything before it
X="${1##*/}"
# Get the directory by removing the executable name
D="${1%$X}"
# Check if the directory exists
if [[ -d "$D" ]]; then
# If it does, cd into it
cd "$D"
else
if [[ "$D" ]]; then
# Complain if a directory was specified, but does not exist
echo "Directory '$D' does not exist" >&2
exit 1
fi
fi
# Check if the executable is, well, executable
if [[ -x "$X" ]]; then
# Run the executable in its directory with the supplied arguments
exec ./"$X" "${#:2}"
else
# Complain if the executable does not exist or is not executable
echo "Executable '$X' does not exist in '$D'" >&2
exit 1
fi
Usage:
$ cdexec
Usage: /home/archon/bin/cdexec <executable> [<argument>...]
$ cdexec /bin/ls ls
ls
$ cdexec /bin/xxx/ls ls
Directory '/bin/xxx/' does not exist
$ cdexec /ls ls
Executable 'ls' does not exist in '/'
One source of such error messages under those conditions is a broken symlink.
However, you say the script works when run from the command line. I would also check to see whether the directory is a symlink that's doing something other than what you expect.
Does it work if you call it in your script with the full path instead of using cd?
#!/bin/bash
/path/to/a/folder/scriptname
What about when called that way from the command line?
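A further option (a sketch, not taken from any of the answers above) is to do the cd in a subshell, so the wrapper's own working directory is left untouched for anything that follows:
#!/bin/bash
# run scriptname from its own directory without changing this wrapper's cwd
( cd /path/to/a/folder && ./scriptname )
echo "back in $(pwd)"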

Shell scripting debug help - Iterating through files in a directory

#!/bin/sh
files = 'ls /myDir/myDir2/myDir3/'
for file in $files do
echo $file
java myProg $file /another/directory/
done
What I'm trying to do is iterate through every file name under /myDir/myDir2/myDir3/, then use that file name as the first argument when calling a java program (the second argument is "/another/directory").
When I run this script: . myScript.sh
I get this error:
-bash: files: command not found
What did I do wrong in my script? Thanks!
Per Neeaj's answer, strip off the whitespace from files =.
Better yet, use:
#!/bin/sh
dir=/myDir/MyDir2/MyDir3
for path in $dir/*; do      # let the shell expand the wildcard itself
    file=$(basename "$path")
    echo "$file"
    java myProg "$file" arg2 arg3
done
Bash is perfectly capable of expanding the * wildcard itself, without spawning a copy of ls to do the job for it!
EDIT: changed to call basename rather than echo to meet OP's (previously unstated) requirement that the path echoed be relative and not absolute. If the cwd doesn't matter, then even better I'd go for:
#!/bin/sh
cd /myDir/MyDir2/MyDir3
for file in *; do
    echo "$file"
    java myProg "$file" arg2 arg3
done
and avoid the calls to basename altogether.
Strip the whitespace in and after files =, i.e. write it as files=<RHS of the assignment>.
Remove the spaces surrounding the '=': change
files = 'ls /myDir/myDir2/myDir3/'
into:
files='ls /myDir/myDir2/myDir3/'
and move the 'do' statement to its own line:
for file in $files
do
....
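Putting those fixes together, a corrected version of the original script might look like this (a sketch; it keeps the ls-based approach from the question and uses command substitution to capture the listing):
#!/bin/sh
# no spaces around = ; capture the output of ls with command substitution
files=$(ls /myDir/myDir2/myDir3/)
for file in $files
do
    echo "$file"
    java myProg "$file" /another/directory/
done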
Quote your variables, and there's no need to use ls:
#!/bin/sh
for file in /myDir/myDir2/*
do
    java myProg "$file" /another/directory/
done
