With the following example:
.PHONY: hook1 hook2
# Default target
all: hook1 hook2 hook1
echo "Calling all"
hook1:
echo "Calling hook1"
hook2:
echo "Calling hook2"
I got the output:
$ make
echo "Calling hook1"
Calling hook1
echo "Calling hook2"
Calling hook2
echo "Calling all"
Calling all
But I would expect it to call the rule hook1 two times. This is because with LaTeX I need to call the same program, with the same command line, multiple times, so I would like to reuse the make rules. For example, I would expect the above minimal code to run as:
$ make
echo "Calling hook1"
Calling hook1
echo "Calling hook2"
Calling hook2
echo "Calling hook1"
Calling hook1
echo "Calling all"
Calling all
Which would call the same rule twice. Below is my full main Makefile, if anyone is interested. I tried to create the dummy rules pdflatex_hook1 and pdflatex_hook2 so I could build the call hierarchy pdflatex_hook biber_hook pdflatex_hook, but this did not fool make and it still ignores my last call to pdflatex_hook2:
#!/usr/bin/make -f
# https://stackoverflow.com/questions/7123241/makefile-as-an-executable-script-with-shebang
ECHOCMD:=/bin/echo -e
# The main latex file
THESIS_MAIN_FILE = modelomain.tex
# This will be the pdf generated
THESIS_OUTPUT_NAME = thesis
# This is the folder where the temporary files are going to be
CACHE_FOLDER = setup/cache
# Find all files ending with `main.tex`
LATEX_SOURCE_FILES := $(wildcard *main.tex)
# Create a new variable within all `LATEX_SOURCE_FILES` file names ending with `.pdf`
LATEX_PDF_FILES := $(LATEX_SOURCE_FILES:.tex=.pdf)
# GNU Make silent by default
# https://stackoverflow.com/questions/24005166/gnu-make-silent-by-default
MAKEFLAGS += --silent
.PHONY: clean pdflatex_hook1 pdflatex_hook2 %.pdf %.tex
# How do I write the 'cd' command in a makefile?
# http://stackoverflow.com/questions/1789594/how-do-i-write-the-cd-command-in-a-makefile
.ONESHELL:
# Default target
all: biber
##
## Usage:
## make <target>
##
## Targets:
## biber build the main file with bibliography pass
## pdflatex build the main file with no bibliography pass
##
# Print the usage instructions
# https://gist.github.com/prwhite/8168133
help:
#fgrep -h "##" $(MAKEFILE_LIST) | fgrep -v fgrep | sed -e 's/\\$$//' | sed -e 's/##//'
# Where to find official (!) and extended documentation for tex/latex's commandline options (especially -interaction modes)?
# https://tex.stackexchange.com/questions/91592/where-to-find-official-and-extended-documentation-for-tex-latexs-commandlin
PDF_LATEX_COMMAND = pdflatex --time-statistics --synctex=1 -halt-on-error -file-line-error
LATEX = $(PDF_LATEX_COMMAND)\
--interaction=batchmode\
-jobname="$(THESIS_OUTPUT_NAME)"\
-output-directory="$(CACHE_FOLDER)"\
-aux-directory="$(CACHE_FOLDER)"
# Run pdflatex, biber, pdflatex
biber: biber_hook pdflatex_hook2
# Calculate the elapsed seconds and print them to the screen
. ./setup/scripts/timer_calculator.sh
showTheElapsedSeconds "$(current_dir)"
# Internally called rule which does not attempt to show the elapsed time
biber_hook: pdflatex_hook1
# Creates the shell variable `current_dir` within the current folder path
$(eval current_dir := $(shell pwd)) echo $(current_dir) > /dev/null
biber "$(CACHE_FOLDER)/$(THESIS_OUTPUT_NAME)"
# This rule will be called for every latex file and pdf associated
pdflatex: $(LATEX_PDF_FILES)
# Calculate the elapsed seconds and print them to the screen
. ./setup/scripts/timer_calculator.sh
showTheElapsedSeconds "$(current_dir)"
# Not show the elapsed time when called internally
pdflatex_hook1: $(LATEX_PDF_FILES)
pdflatex_hook2: $(LATEX_PDF_FILES)
%.pdf: %.tex
# Start counting the compilation time and import its shell functions
. ./setup/scripts/timer_calculator.sh
# Creates the shell variable `current_dir` within the current folder path
$(eval current_dir := $(shell pwd)) echo $(current_dir) > /dev/null
@$(LATEX) $<
cp $(CACHE_FOLDER)/$(THESIS_OUTPUT_NAME).pdf $(current_dir)/$(THESIS_OUTPUT_NAME).pdf
Here, when I call the rule make biber, I get the output:
$ make biber
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (MiKTeX 2.9.6400)
entering extended mode
gross execution time: 62751 ms
user mode: 58406 ms, kernel mode: 1359 ms, total: 59765
INFO - This is Biber 2.7
INFO - Logfile is 'setup/cache/thesis.blg'
INFO - Reading 'setup/cache/thesis.bcf'
INFO - Found 14 citekeys in bib section 0
INFO - Processing section 0
INFO - Looking for bibtex format file 'modeloreferences.bib' for section 0
INFO - Decoding LaTeX character macros into UTF-8
INFO - Found BibTeX data source 'modeloreferences.bib'
INFO - Overriding locale 'pt-BR' defaults 'normalization = NFD' with 'normalization = prenormalized'
INFO - Overriding locale 'pt-BR' defaults 'variable = shifted' with 'variable = non-ignorable'
INFO - Sorting list 'nty/global/' of type 'entry' with scheme 'nty' and locale 'pt-BR'
INFO - No sort tailoring available for locale 'pt-BR'
INFO - Writing 'setup/cache/thesis.bbl' with encoding 'UTF-8'
INFO - Output to setup/cache/thesis.bbl
Could not calculate the seconds to run
This output is missing the second call to the rule pdflatex_hook2; only the first call to pdflatex_hook1 is being performed.
I already know about latexmk and use it, but for the biber target above I would like to perform these calls just as they are. For latexmk I use this recipe/rule:
thesis: $(THESIS_MAIN_FILE)
# Start counting the compilation time and import its shell functions
. ./setup/scripts/timer_calculator.sh
# Creates the shell variable `current_dir` within the current folder path
$(eval current_dir := $(shell pwd)) echo $(current_dir) > /dev/null
# What is the difference between “-interaction=nonstopmode” and “-halt-on-error”?
# https://tex.stackexchange.com/questions/258814/what-is-the-difference-between-interaction-nonstopmode-and-halt-on-error
#
# What reasons (if any) are there for compiling in interactive mode?
# https://tex.stackexchange.com/questions/25267/what-reasons-if-any-are-there-for-compiling-in-interactive-mode
latexmk \
-pdf \
-silent \
-jobname="$(THESIS_OUTPUT_NAME)" \
-output-directory="$(CACHE_FOLDER)" \
-aux-directory="$(CACHE_FOLDER)" \
-pdflatex="$(PDF_LATEX_COMMAND) --interaction=batchmode" \
-use-make $(THESIS_MAIN_FILE)
# Copy the generated PDF file from the cache folder
cp $(CACHE_FOLDER)/$(THESIS_OUTPUT_NAME).pdf $(current_dir)/$(THESIS_OUTPUT_NAME).pdf
# Calculate the elapsed seconds and print them to the screen
showTheElapsedSeconds "$(current_dir)"
Related questions:
Change a make variable, and call another rule, from a recipe in same Makefile?
How to manually call another target from a make target?
multiple targets from one recipe and parallel execution
To answer your initial question:
.PHONY: hook1 hook2
# Default target
all: hook1a hook2 hook1b
echo "Calling all"
hook1a hook1b:
echo "Calling hook1"
hook2:
echo "Calling hook2"
Produces the following output:
echo "Calling hook1"
Calling hook1
echo "Calling hook2"
Calling hook2
echo "Calling hook1"
Calling hook1
echo "Calling all"
Calling all
As illustrated in make recipe execute twice
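If you genuinely need the very same target to run twice in one invocation, a sub-make is another option, since make only skips prerequisites it has already considered within a single invocation. This is just a minimal sketch of that idea, not the approach shown above:
.PHONY: all hook1 hook2
# the indented lines below are recipes and must start with a TAB
all: hook1 hook2
	$(MAKE) --no-print-directory hook1
	echo "Calling all"
hook1:
	echo "Calling hook1"
hook2:
	echo "Calling hook2"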
Based on @David White's answer, I fixed my main script by duplicating the hook rules:
#!/usr/bin/make -f
# https://stackoverflow.com/questions/7123241/makefile-as-an-executable-script-with-shebang
ECHOCMD:=/bin/echo -e
# The main latex file
THESIS_MAIN_FILE = modelomain.tex
# This will be the pdf generated
THESIS_OUTPUT_NAME = thesis
# This is the folder where the temporary files are going to be
CACHE_FOLDER = setup/cache
# Find all files ending with `main.tex`
LATEX_SOURCE_FILES := $(wildcard *main.tex)
# Create a new variable within all `LATEX_SOURCE_FILES` file names ending with `.pdf`
LATEX_PDF_FILES := $(LATEX_SOURCE_FILES:.tex=.pdf)
# GNU Make silent by default
# https://stackoverflow.com/questions/24005166/gnu-make-silent-by-default
MAKEFLAGS += --silent
.PHONY: clean biber pdflatex_hook1 pdflatex_hook2
# How do I write the 'cd' command in a makefile?
# http://stackoverflow.com/questions/1789594/how-do-i-write-the-cd-command-in-a-makefile
.ONESHELL:
# Default target
all: thesis
##
## Usage:
## make <target>
##
## Targets:
## all call the `thesis` make rule
## biber build the main file with bibliography pass
## latex build the main file with no bibliography pass
## thesis completely build the main file with minimum output logs
## verbose completely build the main file with maximum output logs
##
# Print the usage instructions
# https://gist.github.com/prwhite/8168133
help:
#fgrep -h "##" $(MAKEFILE_LIST) | fgrep -v fgrep | sed -e 's/\\$$//' | sed -e 's/##//'
# Where to find official (!) and extended documentation for tex/latex's commandline options (especially -interaction modes)?
# https://tex.stackexchange.com/questions/91592/where-to-find-official-and-extended-documentation-for-tex-latexs-commandlin
PDF_LATEX_COMMAND = pdflatex --time-statistics --synctex=1 -halt-on-error -file-line-error
LATEX = $(PDF_LATEX_COMMAND)\
--interaction=batchmode\
-jobname="$(THESIS_OUTPUT_NAME)"\
-output-directory="$(CACHE_FOLDER)"\
-aux-directory="$(CACHE_FOLDER)"
# Run pdflatex, biber, pdflatex
biber: start_timer pdflatex_hook1 biber_hook pdflatex_hook2
# Creates the shell variable `current_dir` within the current folder path
$(eval current_dir := $(shell pwd)) echo $(current_dir) > /dev/null
# Copies the PDF to the current folder
cp $(CACHE_FOLDER)/$(THESIS_OUTPUT_NAME).pdf $(current_dir)/$(THESIS_OUTPUT_NAME).pdf
# Calculate the elapsed seconds and print them to the screen
. ./setup/scripts/timer_calculator.sh
showTheElapsedSeconds "$(current_dir)"
start_timer:
# Start counting the elapsed seconds to print them to the screen later
. ./setup/scripts/timer_calculator.sh
# Internally called rule which does not attempt to show the elapsed time
biber_hook:
# Creates the shell variable `current_dir` within the current folder path
$(eval current_dir := $(shell pwd)) echo $(current_dir) > /dev/null
# Call biber to process the bibliography
biber "$(CACHE_FOLDER)/$(THESIS_OUTPUT_NAME)"
# How to call Makefile recipe/rule multiple times?
# https://stackoverflow.com/questions/46135614/how-to-call-makefile-recipe-rule-multiple-times
pdflatex_hook1 pdflatex_hook2:
@$(LATEX) $(LATEX_SOURCE_FILES)
# This rule will be called for every latex file and pdf associated
latex: $(LATEX_PDF_FILES)
# Calculate the elapsed seconds and print them to the screen
. ./setup/scripts/timer_calculator.sh
showTheElapsedSeconds "$(current_dir)"
# Dynamically generated recipes for all PDF and latex files
%.pdf: %.tex
# Start counting the compilation time and import its shell functions
. ./setup/scripts/timer_calculator.sh
# Creates the shell variable `current_dir` within the current folder path
$(eval current_dir := $(shell pwd)) echo $(current_dir) > /dev/null
@$(LATEX) $<
cp $(CACHE_FOLDER)/$(THESIS_OUTPUT_NAME).pdf $(current_dir)/$(THESIS_OUTPUT_NAME).pdf
thesis: $(THESIS_MAIN_FILE)
# Start counting the compilation time and import its shell functions
. ./setup/scripts/timer_calculator.sh
# Creates the shell variable `current_dir` within the current folder path
$(eval current_dir := $(shell pwd)) echo $(current_dir) > /dev/null
# What is the difference between “-interaction=nonstopmode” and “-halt-on-error”?
# https://tex.stackexchange.com/questions/258814/what-is-the-difference-between-interaction-nonstopmode-and-halt-on-error
#
# What reasons (if any) are there for compiling in interactive mode?
# https://tex.stackexchange.com/questions/25267/what-reasons-if-any-are-there-for-compiling-in-interactive-mode
latexmk \
-pdf \
-silent \
-jobname="$(THESIS_OUTPUT_NAME)" \
-output-directory="$(CACHE_FOLDER)" \
-aux-directory="$(CACHE_FOLDER)" \
-pdflatex="$(PDF_LATEX_COMMAND) --interaction=batchmode" \
-use-make $(THESIS_MAIN_FILE)
# Copy the generated PDF file from the cache folder
cp $(CACHE_FOLDER)/$(THESIS_OUTPUT_NAME).pdf $(current_dir)/$(THESIS_OUTPUT_NAME).pdf
# Calculate the elapsed seconds and print them to the screen
showTheElapsedSeconds "$(current_dir)"
verbose: $(THESIS_MAIN_FILE)
# Start counting the compilation time and import its shell functions
. ./setup/scripts/timer_calculator.sh
# Creates the shell variable `current_dir` within the current folder path
$(eval current_dir := $(shell pwd)) echo $(current_dir) > /dev/null
# What is the difference between “-interaction=nonstopmode” and “-halt-on-error”?
# https://tex.stackexchange.com/questions/258814/what-is-the-difference-between-interaction-nonstopmode-and-halt-on-error
#
# What reasons (if any) are there for compiling in interactive mode?
# https://tex.stackexchange.com/questions/25267/what-reasons-if-any-are-there-for-compiling-in-interactive-mode
latexmk \
-pdf \
-jobname="$(THESIS_OUTPUT_NAME)" \
-output-directory="$(CACHE_FOLDER)" \
-aux-directory="$(CACHE_FOLDER)" \
-pdflatex="$(PDF_LATEX_COMMAND) --interaction=nonstopmode" \
-use-make $(THESIS_MAIN_FILE)
# Copy the generated PDF file from the cache folder
cp $(CACHE_FOLDER)/$(THESIS_OUTPUT_NAME).pdf $(current_dir)/$(THESIS_OUTPUT_NAME).pdf
# Calculate the elapsed seconds and print them to the screen
showTheElapsedSeconds "$(current_dir)"
# Using Makefile to clean subdirectories
# https://stackoverflow.com/questions/26007005/using-makefile-to-clean-subdirectories
#
# Exclude directory from find . command
# https://stackoverflow.com/questions/4210042/exclude-directory-from-find-command
GARBAGE_TYPES := "*.gz(busy)" *.aux *.log *.pdf *.aux *.bbl *.log *.out *.toc *.dvi *.blg\
*.synctex.gz *.fdb_latexmk *.fls *.lot *.lol *.lof *.idx
DIRECTORIES_TO_CLEAN := $(shell /bin/find -not -path "./**.git**" -not -path "./pictures**" -type d)
GARBAGE_TYPED_FOLDERS := $(foreach DIR, $(DIRECTORIES_TO_CLEAN), $(addprefix $(DIR)/,$(GARBAGE_TYPES)))
clean:
rm -rfv $(GARBAGE_TYPED_FOLDERS)
# veryclean:
# git clean -dxf
I have a script that submits processing jobs to a queue. Before I submit the jobs, I assign string variables to each respective data point so I can use them as arguments when submitting the jobs through qsub.
I had to fix up the module I'm loading first by putting in a -v variable to set up my working environment. However, I got the error message that is in the title, and looking around there are very limited resources for debugging it. One resource I found points toward the likelihood of an extraneous space in the qsub command itself. Has anyone run into this?
I also echoed my qsub command to make sure it was being built correctly, and it was.
Here's my script:
#!/bin/bash
# This script is for submitting the initial registration subjects for Greedy registration.
# It can serve as a template for later studies when multiple submissions could be handy
# GO_HOME = Origin directory for all niftis of interest
GO_NIFTI="/gpfs/fs001/medorg/comp_space/myname/Test-Retest/Nifti/"
GO_B0="/gpfs/fs001/medorg/comp_space/myname/Test-Retest/Protocols/ants_SyNBaseline/W_Registration_antsSyN_Baseline7/"
GO_FM="/gpfs/fs001/medorg/comp_space/myname/Test-Retest/Protocols/brainmage_batch_t1/"
FINAL_DESTINATION="/gpfs/fs001/cbica/comp_space/wingerti/Test-Retest/Protocols/Registration_greedy_Rigid/"
cd $GO_NIFTI
nii_directories=($(find . -type d -name "*t1*" -o -name "*t0*" -o -name "*t2*" -maxdepth 1 ))
module load greedy
# Will look at these subjects individually, taking them out list to not run DTI_Preprocess
unset nii_directories[27] # 1000009_t0_test
unset nii_directories[17] # 1000001_t0
unset nii_directories[4] # 1000009_t2
# With directories, navigate into each, and find where the suitable niis are (31dir and 33dir)
for g in "${nii_directories[#]}";
do
# Subject ID argument
subjid=${g:2:9}
echo "$subjid is the subject ID..."
# -i argument (T1 NIFTI File and DTI)
cd $GO_NIFTI
nii_is=$(find $subjid -type f -name ${subjid}_T1.nii.gz)
nii_i=${GO_NIFTI}${nii_is}
cd $GO_B0
cd $subjid
GO_B0_2=$PWD
b0_is=$(find . -type f -name b0.nii.gz)
b0_i=${GO_B0_2}${b0_is}
echo "-i arguments for $subjid is $nii_i and $b0_i"
# -m argument (Mask File)
#cd $GO_DTI
#mask_ms=$(find $subjid -type f -name ${subjid}_tensor_mask.nii.gz)
#mask_m=${GO_DTI}${mask_ms}
#echo "-m argument for $subjid is $mask_m"
# -fm argument (T1 mask)
cd $GO_FM
mask_fms=$(find $subjid -type f -name ${subjid}_t1_brain_mask.nii.gz)
mask_fm=${GO_FM}${mask_fms}
echo "-fm argument for $subjid is $mask_fm"
# -o argument (Working Directory for possible debugging and tmp dir organization among experiments)
cd $FINAL_DESTINATION
g=${FINAL_DESTINATION:73:-1}
experiment_name="${subjid}_${g}"
mkdir $experiment_name
output_o=${FINAL_DESTINATION}${experiment_name}/${experiment_name}_rigid.txt
echo "-o argument for $g is $output_o"
#
printf "\nSubmitting the following command: \n
qsub -m beas -M myname@medschool.edu -N Registration_${experiment_name} "$(which greedy)" -d3 -i $nii_i $b0_i -o $output_o -a -m MI -n 100x100 -fm $mask_fm dof 6 -ia-identity\n
as JobID: Registration_${experiment_name}\n\n"
qsub -v /medorg/software/external/greedy/centos7/c6dca2e -m beas -M myname@medschool.edu -N Registration_${experiment_name} "$(which greedy)" -d3 -i $nii_i $b0_i -o $output_o -a -m MI -n 100x100 -fm $mask_fm dof 6 -ia-identity
# --- Above line submits Greedy Rigid jobs (dof 6) with
# --- "-m" for emailing updates on jobs, inbox sorts job submission emails
# --- "-N" names the job for book-keeping
cd $GO_NIFTI
done
I have a project which uses a Makefile to control Vagrant, and I want to put the Vagrant parameters into the Makefile, such as cpu, memory, ip, hostname, forwarded_port and the like. I found a way to parameterize the Vagrantfile by having it read a YAML file. So the Makefile needs a target that reads all the user option variables and writes them to config.yaml as key-value pairs.
The sample is as follows
# === BEGIN USER OPTIONS ===
BOX_OS ?= fedora
# Box setup
#BOX_IMAGE
# Disk setup
DISK_COUNT ?= 1
DISK_SIZE_GB ?= 25
# VM Resources
MASTER_CPUS ?= 2
MASTER_MEMORY_SIZE_GB ?= 2
NODE_CPUS ?= 2
NODE_MEMORY_SIZE_GB ?= 2
NODE_COUNT ?= 2
# Network
MASTER_IP ?= 192.168.26.10
NODE_IP_NW ?= 192.168.26.
POD_NW_CIDR ?= 10.244.0.0/16
...
...
# === END USER OPTIONS ===
The echo command does achieve this:
# Makefile
envInit:
#echo "POD_NW_CIDR : \"$(POD_NW_CIDR)\"" > ${FILECWD}/configs.yaml
But with so many variables this becomes too complex.
Is there a way to bulk-read the variables and their values and write them to a YAML file? I would really appreciate it if you could tell me how to achieve this.
Define all user options (along with the default values) as a list, so that they are iterable:
# list of user options with default values
userOptions = \
BOX_OS=2 \
DISK_COUNT=1 \
MASTER_IP=192.168.26.10
# replace each default value with the env value, if any
userOptionValues = $(foreach i, $(userOptions), \
$(word 1, $(subst =, ,$i))=$(or \
$($(word 1, $(subst =, ,$i))), $($(word 1, $(subst =, ,$i))), $(word 2, $(subst =, ,$i))))
# write the yaml file
envInit:
# empty the file
#printf "" > configs.yaml
# write a line for each option
@for i in $(userOptionValues); do \
printf "%s : %s\n" "$$(printf $$i | cut -d= -f1)" "$$(printf $$i | cut -d= -f2)" >> configs.yaml; \
done
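With the default values above, running make envInit would then be expected to produce a configs.yaml roughly like this; an option overridden on the command line or in the environment (e.g. make envInit DISK_COUNT=3) is picked up by the $(or ...) fallback instead of the default:
BOX_OS : 2
DISK_COUNT : 1
MASTER_IP : 192.168.26.10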
@flyx Thank you for your answer; your code does work great. But I seem to have found a more convenient way, and I've partially modified it:
printvars:
@echo $(foreach V,$(sort $(.VARIABLES)), \
$(if $(filter-out environment% default automatic,$(origin $V)),$(info $V: $($V))))
But there is still a gap between this and achieving the goal.
# the Makefile test file
FILECWD = $(shell pwd)
# === BEGIN USER OPTIONS ===
CLOUD_IP ?= 192.168.79.222
CLOUD_NAME ?= cloud
CLOUD_CPU ?= 6
CLOUD_MEMORY ?= 8
# === END USER OPTIONS ===
printvars:
@echo $(foreach V,$(sort $(.VARIABLES)), \
$(if $(filter-out environment% default automatic,$(origin $V)),$(info $V: $($V))))
The output of make printvars contains a number of other variables:
$ make printvars
.DEFAULT_GOAL: printvars
CLOUD_IP: 192.168.79.222
CLOUD_MEMORY: 8
CLOUD_NAME: cloud
CURDIR: /testmakecreateyml0930
FILECWD: /testmakecreateyml0930
GNUMAKEFLAGS:
MAKEFILE_LIST: Makefile
MAKEFLAGS:
SHELL: /bin/sh
And it can only be printed, not exported to the YAML file. This is only one step away from success.
I would appreciate it if you could help me modify it to achieve my goal.
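For what it's worth, a small variation on printvars can write to the file instead of only printing. This is just a sketch assuming that filtering on an $(origin) of 'file' is acceptable; it will also pick up every other Makefile-defined variable (FILECWD, MAKEFILE_LIST, ...), so it still over-selects:
# hypothetical target: dump every variable whose origin is 'file' as "name : value"
# (values must not contain single quotes or newlines for this simple quoting to hold)
exportvars:
	@printf "" > configs.yaml
	@$(foreach V,$(sort $(.VARIABLES)),$(if $(filter file,$(origin $V)),printf '%s : %s\n' '$V' '$($V)' >> configs.yaml;))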
You can write directly to a file with GNU make's $(file) function:
define newline :=
$(strip)
$(strip)
endef
space := $(strip) $(strip)#
-never-matching := ¥# character 165, this is used as a list element that should never appear as a real element
option-names = $(subst $(-never-matching),,$(filter $(-never-matching)%,$(subst $(-never-matching)$(space),$(-never-matching),$(-never-matching)$(strip $(subst $(newline), $(-never-matching),$1)))))
# define your user options in as many separate parts as you like, spaces and empty lines included:
define USER_OPTIONS +=
a = spaces are no problem
b = "neither nearly all 'other' characters: 8&)("
endef
define USER_OPTIONS +=
c = baz baf
d = foobar
endef
# evaluate the definitions so that each option becomes a real make variable
$(eval $(USER_OPTIONS))
YAML_FORMAT := $(foreach name,$(call option-names,$(USER_OPTIONS)),$(newline)$(name) : $($(name)))
# write the file. Warning: this happens before any rule is run!
$(file >test.yaml,$(YAML_FORMAT))
$(info $(foreach name,$(call option-names,$(USER_OPTIONS)),<$(name) : $($(name))> ))
The trick lies in the clustering of all relevant user option variables in one multi-line make variable. The function option-names pulls all identifiers from that variable into a separate list.
I took the newline etc. character definitions from the GNUmake table toolkit which has many functions for "programmatic" make.
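With the sample USER_OPTIONS above, the generated test.yaml would then be expected to contain something like:
a : spaces are no problem
b : "neither nearly all 'other' characters: 8&)("
c : baz baf
d : foobar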
I am not familiar with shell. I just want to successfully run this piece of code, which I downloaded from GitHub. When I directly ran the code with the command
make file='example.txt' file_test
it didn't work, and here is the error:
makefile line 1: syntax error: unexpected end of file
I tried to fix the problem by unifying the encoding format of 'Makefile.sh' and 'example.txt' to be Unix, but it still did not work at all.
Then I asked my friend (he is also new to shell), and he told me to delete the "tab" before each line (not including the command lines). The error is different this time, but I still could not fix it using Google:
Makefile:75: *** commands commence before first target. Stop.
Can somebody please help? It is driving me crazy and I have no time left to solve this problem. :( Thanks!!!
And here is the code after I deleted the TABs.
# State-of-the-art paper: http://www.aclweb.org/anthology/P18-1016
########################################Definitions########################################
SHELL := /bin/bash
NC := \033[0m
RED := \033[0;31m
GREEN := \033[0;32m
CYAN := \033[0;36m
work_dir := $(PWD)
script_dir := $(work_dir)/scripts
NTS_dir := $(work_dir)/NeuralTextSimplification_model
file?=""
filetype := $(shell file $(file) | cut -d: -f2)
article_name := $(basename $(notdir $(file)))
article_simple := $(basename $(file))-simple.txt
NTS_script := $(NTS_dir)/src/scripts/translate.sh
NTS_input := $(NTS_dir)/data/test.en
NTS_result_file = $(NTS_dir)/results_NTS/result_NTS_epoch11_10.19.t7_5
# passage = xml file representing a sentence parsed with TUPA
sentence_dir = $(script_dir)/sentences/$(article_name)
passage_dir = $(script_dir)/passages/$(article_name)
sentences = $(sentence_dir)/*.txt
passages = $(passage_dir)/*.xml
############################################################################################
.SILENT:
.ONESHELL: # To execute all commands in one single bash shell
.PHONY: all file_test NTS DSS tupa_parse split_to_sentences help clean
all: file_test $(article_simple)
printf "\n${GREEN}Article simplified :\n====================${NC}\n\n"
cat $(article_simple)
# evaluate the system results using BLUE & SARI metrics
# The original test.en must be given to evaluate (https://github.com/senisioi/NeuralTextSimplification)
evaluate: all
printf "\n${GREEN}Evaluating: (the original test.en must be given)${NC}\n\n"
python $(NTS_dir)/src/evaluate.py $(NTS_input) $(NTS_dir)/data/references/references.tsv $(NTS_dir)/predictions
############################################################################################
# The following targets are only useful as aliases for testing
# Generates the corresponding NTS model result file.
NTS: file_test $(NTS_result_file)
# Generates the article sentences split using the two semantic rules mentioned in the paper.
DSS: file_test $(NTS_input)
# Parses the article's sentences using TUPA -> output : xml files
tupa_parse: file_test $(passages)
# Splits the article into sentences, each one in a single file
split_to_sentences: file_test $(sentences)
#############################################################################################
# Testing the validity of the file
file_test:
# Whether the file was provided as an argument
if [ $(file) = "" ]; then
printf "${RED}ERROR${NC}: One & only file must be given in argument! Please specify it by:\nmake file=<file_name> <target>\n\n"; exit 1
fi
# Whether the file exists
if [ ! -f $(file) ]; then
printf "$(RED)ERROR${NC}: $(file) File not found!\n\n"; exit 1
fi
# Whether the file is empty
if [ ! -s $(file) ]; then
printf "$(RED)ERROR${NC}: $(file) is empty!\n\n"; exit 1
fi
# Whether it is a text file
if [[ ! "$(filetype)" = *"ASCII"* && ! "$(filetype)" = *"UTF-8"* ]]; then
printf "$(RED)ERROR${NC}: Only text files (ASCII or UTF-8 Unicode text) are accepted!\n\n"; exit 1
fi
# Prints help on the useful targets for the user
help:
printf "\n${CYAN}make file=<file_path>${NC} : Executes the simplification completely and avoids rebuilding if unnecessary.\n\n"
printf "${CYAN}make file=<file_path> file_test${NC} : Tests whether the given file is valid.\n\n"
printf "${CYAN}make file=<file_path> NTS${NC} : Generates the corresponding NTS model result file.\n\n"
printf "${CYAN}make file=<file_path> DSS${NC} : Generates the article's sentences split using the two semantic rules mentioned in the paper.\n\n"
printf "${CYAN}make file=<file_path> tupa_parse${NC} : Parses the article's sentences using TUPA -> output : xml files.\n\n"
printf "${CYAN}make file=<file_path> split_to_sentences${NC} : Splits the article into sentences, each one in a single file.\n\n"
printf "${CYAN}make clean${NC} : Cleans results of previous executions.\n\n"
##############################################################################################
# Produces the simple version of text as an output and writes it to "article_simple" variable
$(article_simple): $(NTS_result_file)
cd $(work_dir)
cat $(NTS_result_file) > $(article_simple)
# Equivalent for target NTS
$(NTS_result_file): $(NTS_script) $(NTS_input)
printf "\n${GREEN}Simplifying sentences using the NTS model : ... ${NC}\n\n"
cd $(NTS_dir)/src/scripts/
source ./translate.sh
# Equivalent for target DSS
$(NTS_input): $(script_dir)/split_sentences.py $(passages)
printf "\n${GREEN}Splitting the article's sentences : ... ${NC}\n"
python $(script_dir)/split_sentences.py $(passage_dir) > $(NTS_input)
# If the output of the splitting is empty, then it failed
if [ ! -s $(NTS_input) ]; then
printf "$(RED)ERROR${NC}: Splitting failed!\n\n"; exit 1
fi
# Equivalent for target tupa_parse
$(passages): $(sentences) | $(passage_dir)
printf "\n${GREEN}Parsing the article's sentences : ... ${NC}\n\n"
python -m tupa $(sentences) -m $(work_dir)/TUPA_models/ucca-bilstm
mv *.xml $(passage_dir)
# Equivalent for target split_to_sentences
$(sentences): $(file) $(script_dir)/article_to_sentences.py | $(sentence_dir)
python $(script_dir)/article_to_sentences.py $(file)
# Creates the passage directory
$(passage_dir): $(file)
if [ -d $(passage_dir) ]; then
rm -f $(passage_dir)/*.xml
else
mkdir -p $(script_dir)/passages/
mkdir $(passage_dir)/
fi
# Creates the sentence directory
$(sentence_dir): $(file)
if [ -d $(sentence_dir) ]; then
rm -f $(sentence_dir)/*.txt
else
mkdir -p $(script_dir)/sentences/
mkdir $(sentence_dir)
fi
# Cleans the residues from previous executions
clean:
rm -rf $(passage_dir)* $(sentence_dir)*
echo > $(NTS_result_file)
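For what it's worth, recipe lines in a Makefile must start with a literal TAB character, so deleting the tabs is the wrong fix: the error "commands commence before first target" means make found a recipe (TAB-indented) line before it had seen any rule. A minimal sketch of the expected layout, with the recipe line TAB-indented and the shell code joined with backslashes (the original file relies on .ONESHELL instead):
# the indented lines below must begin with a TAB, not spaces
file_test:
	if [ "$(file)" = "" ]; then \
	    printf "ERROR: One file must be given: make file=<file_name> <target>\n"; \
	    exit 1; \
	fi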
I have a makefile and I want to add a new target which can read a specific line of a .txt file and do the processing from $4 onwards.
I have already run the code, built the database out of the images at the URLs, and converted them as well ($2 and $3).
By the way, if you think this is not a reasonable way to do it (pass specific inputs and collect corresponding outputs), please let me know how it could be done in a reasonable fashion.
#Setup test images and check target
#
#URLS of images to test
IMAGE_URLS= http://farm1.static.flickr.com/93/238836380_a4db5526a9.jpg \
http://farm1.static.flickr.com/203/495381063_67fe69a64f.jpg \
http://farm1.static.flickr.com/93/238836380_a4db5526a9.jpg \
http://farm1.static.flickr.com/203/495381063_67fe69a64f.jpg \
http://farm3.static.flickr.com/2068/2218230147_c6559cd7ac.jpg \
http://farm2.static.flickr.com/1020/1459940961_6a54469e1e.jpg \
http://farm2.static.flickr.com/1140/1026808473_e4a2a76ded.jpg \
http://farm1.static.flickr.com/143/341257611_e730dfea3d.jpg \
http://farm3.static.flickr.com/2079/2226345732_0152a169fd.jpg \
http://farm3.static.flickr.com/2168/1834178819_e866ed3c04.jpg \
http://farm1.static.flickr.com/149/409910004_068c0fdec1.jpg \
#Classification index of test images (not important)
CLASS_IDX= 281\
281\
285\
291\
728\
279\
285\
281\
281
# Getters
JOINED = $(join $(addsuffix @,$(IMAGE_URLS)),$(CLASS_IDX))
GET_URL = $(word 1,$(subst @, ,$1))
GET_IDX = $(word 2,$(subst @, ,$1))
#Bash coloring
RED=\033[0;31m
GREEN=\033[0;32m
NC=\033[0m
#$1=URL $2=NAME $3=CONVERTED_PNG $4=CHECK $5=IDX
define IMAGE_BUILD_RULES
#download image
$2:
wget "$(strip $1)" -O $2
#convert
$3:$2
convert $2 -resize 224x224! $3
#check if correct class is identified. If not error
$4:$3 $(EXE)
#echo "Evaluating image $3"
./$(EXE) $3 | tee $4
#grep -q "Detected class: $(strip $5)" $4 && echo "$(GREEN)correctly identified image $2$(NC)" || (echo "$(RED)Did not correctly identify image $2$(NC)")
endef
#check if all images are classified correctly
check_all: $(foreach URL, $(IMAGE_URLS), check_$(basename $(notdir $(URL))))
#echo "$(GREEN)All correct!$(NC)"
#define build rules for all images
$(foreach j,$(JOINED),$(eval $(call IMAGE_BUILD_RULES,\
$(call GET_URL, $j),\
$(notdir $(call GET_URL, $j)),\
converted_$(basename $(notdir $(call GET_URL, $j))).png,\
check_$(basename $(notdir $(call GET_URL, $j))),\
$(call GET_IDX,$j)\
)))
I don't know if this helps much, but it seems you are rethinking your question anyway. Your makefile looks like it is doing some kind of build management, at least remotely. (Note that you could write your rule with patterns, but you seem to aim for something different.) The following rewrite of your makefile uses the GNUmake table toolkit which looks like a good fit for your purpose:
include gmtt/gmtt.mk
# Direct definition of the table in the file:
# define IMAGES :=
# 2
# http://farm1.static.flickr.com/93/238836380_a4db5526a9.jpg 281
# http://farm1.static.flickr.com/203/495381063_67fe69a64f.jpg 281
# http://farm1.static.flickr.com/93/238836380_a4db5526a9.jpg 285
# http://farm1.static.flickr.com/203/495381063_67fe69a64f.jpg 291
# http://farm3.static.flickr.com/2068/2218230147_c6559cd7ac.jpg 728
# http://farm2.static.flickr.com/1020/1459940961_6a54469e1e.jpg 279
# http://farm2.static.flickr.com/1140/1026808473_e4a2a76ded.jpg 285
# http://farm1.static.flickr.com/143/341257611_e730dfea3d.jpg 281
# http://farm3.static.flickr.com/2079/2226345732_0152a169fd.jpg 281
# http://farm3.static.flickr.com/2168/1834178819_e866ed3c04.jpg ???
# http://farm1.static.flickr.com/149/409910004_068c0fdec1.jpg ???
# endef
# Reading table from a file. File *must* be two columns (url, class) and nothing else - no comments either!
IMAGES := $(file < imagelist.txt)
$(info Processing file list:$(newline)$(IMAGES))
IMAGES := 2 $(IMAGES) # add the number of columns in front to make it a gmtt table
#Bash coloring
RED=\033[0;31m
GREEN=\033[0;32m
NC=\033[0m
#$1=URL $2=NAME $3=CONVERTED_PNG $4=CHECK $5=IDX
define IMAGE_BUILD_RULES
#download image
$2:
wget "$(strip $1)" -O $2
#convert
$3:$2
convert $2 -resize 224x224! $3
#check if correct class is identified. If not error
$4:$3 $(EXE)
#echo "Evaluating image $3"
./$(EXE) $3 | tee $4
#grep -q "Detected class: $(strip $5)" $4 && echo "$(GREEN)correctly identified image $2$(NC)" || (echo "$(RED)Did not correctly identify image $2$(NC)")
endef
#check if all images are classified correctly
check_all: $(foreach URL, $(IMAGE_URLS), check_$(basename $(notdir $(URL))))
#echo "$(GREEN)All correct!$(NC)"
# Just as a check for now, remove the whole $(info ...) if output too large
$(info Generating the following rules: $(newline)$(call map-select,1 2,$(IMAGES),t,$$(call IMAGE_BUILD_RULES,$$1,$$(notdir $$1),converted_$$(basename $$(notdir $$1)).png,check_$$(basename $$(notdir $$1)),$$2)))
# Rule definition - no need to call $(eval)
# Use the function `map-select` which applies a function on each selected line of a table. Syntax is $(call map-select,_column-numbers_,_table_,_where-clause_,_mapping-function_)
# We select columns one and two of the table. Where-clause is always true (t). Function is calling IMAGE_BUILD_RULES with the selected columns enumerated as $1,$2,$3,etc. - this parameter must be quoted ($$)
$(call map-select,1 2,$(IMAGES),t,$$(call IMAGE_BUILD_RULES,$$1,$$(notdir $$1),converted_$$(basename $$(notdir $$1)).png,check_$$(basename $$(notdir $$1)),$$2))
The referenced file imagelist.txt must look like this:
http://farm1.static.flickr.com/93/238836380_a4db5526a9.jpg 281
http://farm1.static.flickr.com/203/495381063_67fe69a64f.jpg 281
http://farm1.static.flickr.com/93/238836380_a4db5526a9.jpg 285
http://farm1.static.flickr.com/203/495381063_67fe69a64f.jpg 291
http://farm3.static.flickr.com/2068/2218230147_c6559cd7ac.jpg 728
http://farm2.static.flickr.com/1020/1459940961_6a54469e1e.jpg 279
http://farm2.static.flickr.com/1140/1026808473_e4a2a76ded.jpg 285
http://farm1.static.flickr.com/143/341257611_e730dfea3d.jpg 281
http://farm3.static.flickr.com/2079/2226345732_0152a169fd.jpg 281
http://farm3.static.flickr.com/2168/1834178819_e866ed3c04.jpg ???
http://farm1.static.flickr.com/149/409910004_068c0fdec1.jpg ???
Notice that your testdata has duplicate URLs in it.
I have the following Makefile, but it does not work. When I call
make html
I get a
make: *** No rule to make target `docs/index.html', needed by `html'. Stop.
error, even though I think I have defined it.
SRCDIR = source
OUTDIR = docs
RMD = $(wildcard $(SRCDIR)/*.Rmd)
TMP = $(RMD:.Rmd=.html)
HTML = ${subst $(SRCDIR),$(OUTDIR),$(TMP)}
test:
echo $(RMD)
echo $(TMP)
echo $(HTML)
all: clean update html
html: $(HTML)
%.html: %.Rmd
echo $(HTML)
#Rscript -e "rmarkdown::render('$<', output_format = 'prettydoc::html_pretty', output_dir = './$(OUTDIR)/')"
update:
#Rscript -e "devtools::load_all(here::here()); microcosmScheme:::updateFromGoogleSheet(token = './source/googlesheets_token.rds')"
## from https://stackoverflow.com/a/26339924/632423
list:
@$(MAKE) -pRrq -f $(lastword $(MAKEFILE_LIST)) : 2>/dev/null | awk -v RS= -F: '/^# File/,/^# Finished Make data base/ {if ($$1 !~ "^[#.]") {print $$1}}' | sort | egrep -v -e '^[^[:alnum:]]' -e '^$#$$' | xargs
.PHONY: update clean cleanhtml all list
The variables seem to be correct:
15:21 $ make test
echo source/index.Rmd
source/index.Rmd
echo source/index.html
source/index.html
echo docs/index.html
docs/index.html
If I change it as follows it works, but then the target points to the SRCDIR, while I want it to point to the OUTDIR:
RMD = $(wildcard $(SRCDIR)/*.Rmd)
HTML = $(RMD:.Rmd=.html)
# HTML = ${subst $(SRCDIR),$(OUTDIR),$(TMP)}
I am sure it is one small thing...
This rule:
%.html : %.Rmd
....
tells make how to build a file foo.html from a file foo.Rmd, or a file source/foo.html from a file source/foo.Rmd, or a file docs/foo.html from a file docs/foo.Rmd.
It doesn't tell make how to build a file docs/foo.html from a file source/foo.Rmd, because the stem that matches the pattern % is not the same.
If you want to write a pattern for docs/foo.html to be built from source/foo.Rmd, you have to write it like this:
$(OUTDIR)/%.html : $(SRCDIR)/%.Rmd
....
so that the part that matches the pattern % is identical.
ETA: Some other notes: you should be using := with the wildcard function, as it performs much better. Also, you shouldn't use subst here because it replaces all occurrences of the string, which could break things if any of your .Rmd files contain the string source (e.g., source/my_source_file.Rmd). This is much better written with patsubst, as in:
RMD := $(wildcard $(SRCDIR)/*.Rmd)
HTML := $(patsubst $(SRCDIR)/%.Rmd,$(OUTDIR)/%.html,$(RMD))
Finally, you don't show what the clean target does but it's unusual to have the clean target depended on by all. Usually it's a separate target that is invoked only when you want it, like make clean.
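Putting those pieces together, a minimal sketch of the relevant part of the Makefile might look like this (the Rscript command is taken verbatim from the question, and the recipe line must start with a TAB):
SRCDIR := source
OUTDIR := docs
RMD  := $(wildcard $(SRCDIR)/*.Rmd)
HTML := $(patsubst $(SRCDIR)/%.Rmd,$(OUTDIR)/%.html,$(RMD))
.PHONY: html
html: $(HTML)
$(OUTDIR)/%.html: $(SRCDIR)/%.Rmd
	@Rscript -e "rmarkdown::render('$<', output_format = 'prettydoc::html_pretty', output_dir = './$(OUTDIR)/')"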