So my organization is upgrading our Oracle database from 11g to 19c.
Previously, in my makefile, I had been setting ORACLE_HOME like this:
ORACLE_HOME=/opt/app/oracle/product/11.2.0.4/db_1
However, Oracle 19c has a fun feature: every time a patch is applied, the db_1 part of the path increments, becoming db_2, then db_3, and so on.
So obviously I can't hardcode the ORACLE_HOME path anymore.
In a bunch of my scripts, I'm pulling the current value from the oratab file, like this:
setenv ORACLE_SID DATABASE1
setenv ORACLE_HOME `cat /var/opt/oracle/oratab | sed 's/#.*//g' | grep -w $ORACLE_SID | awk -F: '{print $2;}'`
And this is working just fine, pulling the correct ORACLE_HOME path from the oratab file.
However, when I tried to do this in a makefile, like so:
ORACLE_SID=DATABASE1
ORACLE_HOME=`cat /var/opt/oracle/oratab | sed 's/#.*//g' | grep -w $ORACLE_SID | awk -F: '{print $2;}'`
I get this error when I try to run make:
$ make
`cat /var/opt/oracle/oratab | sed 's//bin/proc sys_include=/usr/include lines=yes iname=file1.pc oname=file1.c include=/path/to/include
First RE may not be null
*** Error code 2
make: Fatal error: Command failed for target `file1.o'
So obviously the command isn't working the way I'm expecting, but I am unsure how to fix it.
How do I fix the command to work inside a makefile? I'm running Solaris 11.
This is not GNU make; this is just the default make that comes with Solaris 11.
Adding more information:
My oratab file looks like this:
$cat /var/opt/oracle/oratab
DATABASE_TEST:/opt/app/oracle/product/11.2.0.4/db_7:Y
DATABASE1:/opt/app/oracle/product/19.0.0.0/db_3:N
DATABASE2:/opt/app/oracle/product/11.2.0.4/db_13:Y
DATABASE3:/opt/app/oracle/product/11.2.0.4/db_1:Y
DATABASE_PROD:/opt/app/oracle/product/11.2.0.4/db_2:Y
So what I need to do is, using the ORACLE_SID of DATABASE1, pull out the /opt/app/oracle/product/19.0.0.0/db_3 part to use as my ORACLE_HOME directory in the makefile.
Update:
Based on an answer below from MadScientist, this is now my makefile:
ORACLE_SID=DATABASE1
#ORACLE_HOME = /opt/app/oracle/product/19.0.0/db_3
ORACLE_HOME = `cat /var/opt/oracle/oratab | sed 's/\#.*//g' | grep -w ${ORACLE_SID} | awk -F: '{print $$2;}'`
PROC=${ORACLE_HOME}/bin/proc
E_INCLUDE=/path/to/include
print-% : ; @echo $* = $($*)
file1.o: file1.pc
${PROC} sys_include=/usr/include lines=yes iname=$*.pc oname=$*.c include=${E_INCLUDE}
When I hardcode ORACLE_HOME, everything works correctly.
When I try to use the dynamically created ORACLE_HOME, I get this error:
$ make
`cat /var/opt/oracle/oratab | sed 's/\#.*//g' | grep -w DATABASE1 | awk -F: '{print $2;}'`/bin/proc sys_include=/usr/include lines=yes iname=file1.pc oname=file1.c include=/path/to/include
make: Fatal error: Command failed for target `file1.o'
So it looks like it's setting ORACLE_HOME as the command itself, rather than as the result of the command.
Weirdly, when I run make print-ORACLE_HOME, I get the expected result /opt/app/oracle/product/19.0.0/db_3
Well, certainly this:
setenv ORACLE_HOME /opt/app/oracle/product/11.2.0.4/db_1
could not have been in your makefile before because this is not valid makefile syntax. Also, it surprises me that anyone is still using csh for anything, and especially scripting, in 2020. But anyway.
The problem you're having is that makefiles are not shell scripts and the rules of syntax are different. Of course a makefile contains shell scripts inside of it, but only in recipes: here you're setting a makefile variable. So just plopping a shell statement down into a variable assignment very well might not work.
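To illustrate with a hypothetical two-line makefile (FOO and the show target are made-up names): the assignment stores the text verbatim, backticks and all, and the shell only runs the command when a recipe expands the variable:
FOO = `date`
show:
	echo $(FOO)
Here make never runs date itself; at recipe time it expands $(FOO) to the literal `date` and hands the line to the shell, which performs the command substitution.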
Here you have three problems: first, variable references in makefiles are of the form $(FOO) or ${FOO}, not $FOO. Second, a # is considered a comment character in a makefile and must be escaped. And finally, if you do want an actual $, not a variable reference, you have to escape it as $$. Fixing those, this should work, but note that there are likely simpler ways to do this:
ORACLE_SID = DATABASE1
ORACLE_HOME = `cat /var/opt/oracle/oratab | sed 's/\#.*//g' | grep -w $(ORACLE_SID) | awk -F: '{print $$2;}'`
You say that after this, this rule:
PROC=${ORACLE_HOME}/bin/proc
file1.o: file1.pc
${PROC} sys_include=/usr/include lines=yes iname=$*.pc oname=$*.c include=${E_INCLUDE}
Gives this output:
$ make
`cat /var/opt/oracle/oratab | sed 's/\#.*//g' | grep -w DATABASE1 | awk -F: '{print $2;}'`/bin/proc sys_include=/usr/include lines=yes iname=file1.pc oname=file1.c include=/path/to/include
make: Fatal error: Command failed for target `file1.o'
That error message is not very helpful; it tells you nothing about what actually failed. It's a shame Solaris make doesn't give a better message.
I recommend you change the rule to this:
file1.o: file1.pc
echo PROC=\'${PROC}\'; ${PROC} sys_include=/usr/include lines=yes iname=$*.pc oname=$*.c include=${E_INCLUDE}
then, you should see something like this in the output:
$ make
echo PROC=\'`cat /var/opt/oracle/oratab...lots of stuff...
PROC='/...'
make: Fatal error: Command failed for target `file1.o'
What you want to look at is the second line of the output, PROC='/...' and examine that path /..., whatever it is, to make sure it looks right. Also it should not contain any whitespace or other special characters, etc.
If that value that is printed looks wrong, you'll have to fix your script to make it right. If it looks right, then I have no idea what's going on and it must be something particular about the version of make you're using.
Here's a simplified example to start. This initial version uses awk to do the matching and drops the sed step, assuming the oratab contains no # comments that need stripping:
ORACLE_SID := DATABASE1
ORACLE_HOME := $(shell awk -F: "/^$(ORACLE_SID)/ { print \$$2; }" /var/opt/oracle/oratab)
(Update) A possible Solaris make version, based on the documentation but unverified:
ORACLE_HOME:sh = awk -F: '/^DATABASE1/ { print $$2; }' /var/opt/oracle/oratab
I finally got it to work.
Using @Milag's answer regarding the :sh command substitution, I was able to find the documentation for that feature in Solaris (not GNU) make: link to documentation
THIS was the answer:
ORACLE_HOME :sh =cat /var/opt/oracle/oratab | sed 's/\#.*//g' | grep -w DATABASE1 | awk -F: '{print $2;}'
The key? NOT putting a double dollar sign, as the documentation said:
"In contrast to commands in rules, the command is not subject for macro substitution; therefore, a dollar sign ($) need not be replaced with a double dollar sign ($$)."
Because of this I also had to hardcode DATABASE1 instead of using the ${ORACLE_SID} variable, but since that value will never change, I can live with that.
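For reference, a minimal sketch of how the whole makefile looks with the :sh assignment dropped in (everything else is copied from the makefile above; the recipe line must start with a tab):
# :sh command lines are not macro-substituted, so the SID is hardcoded here
ORACLE_HOME :sh =cat /var/opt/oracle/oratab | sed 's/\#.*//g' | grep -w DATABASE1 | awk -F: '{print $2;}'
PROC = ${ORACLE_HOME}/bin/proc
E_INCLUDE = /path/to/include
file1.o: file1.pc
	${PROC} sys_include=/usr/include lines=yes iname=$*.pc oname=$*.c include=${E_INCLUDE}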
I have a makefile that I use to create jobs in one step, submit them to a cluster in another, and finally merge them. This worked well and I had no issues with it. However, in the create-jobs step I was using a Python script to break CSV files down into smaller ones, and it took longer than I'd like, so I decided to add a step that does this with bash instead, thinking it would be faster.
I laid the new step out in the Makefile in exactly the same way as the others, and created a small bash script that accepts the name of the CSV and how many parts to split it into (eventually I was going to add a variable for where to place the output CSVs, but for now I tested it just sticking them in the same directory as itself). I first defined the variables that would later be passed in at the top of the bash file so I could check that it worked on its own; it does. Those variables are commented out now, but this is the bash file in question:
#!/bin/bash
#INPUT_DATA="/scratch/cb27g11/Looping_Bash/missing.csv"
#nJobs=2
tail -n +2 ${INPUT_DATA} | split -l ${nJobs} - split_
for file in split_*
do
head -n 1 ${INPUT_DATA} > tmp_file
cat $file >> tmp_file
mv -f tmp_file ../$file
done
Nothing super complicated.
The Makefile looks like this:
########################
### --- Split jobs - ###
########################
SPLIT_JOB_NAME = "Name_for_job"
SPLIT_JOB_INPUT_DATA = "/scratch/cb27g11/Looping_Bash/missing.csv"
SPLIT_JOB_HEADER = "header/default.header"
SPLIT_JOB_nJobs = 2
#######################
DECLARE_ROOT_DIR = ROOT_DIR="${THDM_T3PS_SCANNER_DIR}/job_submission/MadGraph/"
VAR_CREATE_JOB = $(shell echo '$(.VARIABLES)' | awk -v RS=' ' '/CREATE_JOB_/' | sed 's/CREATE_JOB_//g' )
EXPORT_CREATE_JOB = $(foreach v,$(VAR_CREATE_JOB),$(v)="$(CREATE_JOB_$(v))") $(DECLARE_ROOT_DIR)
VAR_SUBMIT_JOB = $(shell echo '$(.VARIABLES)' | awk -v RS=' ' '/SUBMIT_JOB_/' | sed 's/SUBMIT_JOB_//g' )
EXPORT_SUBMIT_JOB = $(foreach v,$(VAR_SUBMIT_JOB),$(v)="$(SUBMIT_JOB_$(v))") $(DECLARE_ROOT_DIR)
VAR_MERGE_JOB = $(shell echo '$(.VARIABLES)' | awk -v RS=' ' '/MERGE_JOB_/' | sed 's/MERGE_JOB_//g' )
EXPORT_MERGE_JOB = $(foreach v,$(VAR_MERGE_JOB),$(v)="$(MERGE_JOB_$(v))") $(DECLARE_ROOT_DIR)
VAR_SPLIT_JOB = $(shell echo '$(.VARIABLES)' | awk -v RS=' ' '/SPLIT_JOB_/' | sed 's/SPLIT_JOB_//g' )
EXPORT_SPLIT_JOB = $(foreach v,$(VAR_SPLIT_JOB),$(v)="$(SPLIT_JOB_$(v))") $(DECLARE_ROOT_DIR)
#########################################################
create-jobs:
@$(EXPORT_CREATE_JOB) ./utils/new_create-jobs.sh
submit-jobs:
@$(EXPORT_SUBMIT_JOB) ./utils/new_submit-jobs.sh
merge-jobs:
@$(EXPORT_MERGE_JOB) ./utils/merge-jobs.sh
split-jobs:
@$(EXPORT_SPLIT_JOB) ./utils/split-jobs.sh
I'm not currently using all of the variables set up for split-jobs here, but surely that doesn't matter? The other parts (create-jobs, submit-jobs and merge-jobs) work just fine, and I can't see that I've made a mistake in how I've set up split-jobs compared to them. So I guess there's something I'm missing about the bash file itself?
When I try to run make split-jobs I get this:
(THDM) [cb27g11@cyan53 MadGraph]$ make split-jobs
/bin/sh: ./utils/split-jobs.sh: Permission denied
make: *** [split-jobs] Error 126
Since this is on a shared cluster I don't have sudo rights or anything like that; could this be related to that somehow? Am I calling something wrong? Any help or suggestions would be much appreciated!
I can't really understand what all the shell commands are trying to do, but I'm completely confident that whatever they are doing there's a much simpler and more straightforward way to do it.
However, that doesn't have anything to do with your error, which is:
/bin/sh: ./utils/split-jobs.sh: Permission denied
Did you remember to give your shell script execute permissions:
chmod +x ./utils/split-jobs.sh
? Also, please remove the @ from your makefile lines until AFTER your makefile is completely working, and show us the command line that make invoked (maybe once you see it, you will be able to figure out the problem for yourself). Using @ is like trying to debug your program while sending all the output to /dev/null.
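For example (reusing the variable and script names from the question), the un-silenced rule would be:
split-jobs:
	$(EXPORT_SPLIT_JOB) ./utils/split-jobs.sh
and make split-jobs will then echo the fully expanded variable assignments and the script path before running them, so you can see exactly what /bin/sh was asked to execute.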
I have a Makefile where I want to load environment variables placed in .env file.
I am using the include directive to achieve this.
-include .env
I also have an help target to display the available tasks:
help: ## Displays help menu
grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
The problem is that, when used together with the include directive, it doesn't work correctly. The help task just shows "Makefile" as the name for all the targets.
$(MAKEFILE_LIST) expands to "Makefile .env", so I guess it got messed up by the .env somehow.
I don't know enough about Makefiles to understand what's wrong.
Any ideas?
Thank you.
Solution
Update your help target to use $(firstword $(MAKEFILE_LIST)) instead of just $(MAKEFILE_LIST), so that grep searches only the main Makefile when it looks for targets with ## comments.
help: ## Displays help menu
grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(firstword $(MAKEFILE_LIST)) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
Explanation
MAKEFILE_LIST contains the names of all the makefiles that have been parsed so far, so after your include it holds both names, which is why you see Makefile .env. The grep command in your help target is therefore given both files to search. The ## pattern doesn't match anything in .env, so every match comes from Makefile, and because grep was given more than one file it prefixes each matching line with the name of the file it was found in, i.e. Makefile:. The awk command then splits each line on ":.*?## " and prints the first field, which is now the filename grep added rather than the target name. So all your help output has the correct comment but the wrong target. You can verify the behavior by running the following grep command in the directory where both files exist:
grep -E '^[a-zA-Z_-]+:.*?## .*$$' Makefile .env
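With both files in place, that command prints something like the following (assuming the help target is the only line matching the pattern); the Makefile: prefix is what ends up in awk's $1:
Makefile:help: ## Displays help menu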
I need to identify the major version of a software, to do this it's principally to execute
command --version | head | awk -F "." '{print $1}'
Now, I need to assign this output to an AC_DEFINE variable.
I've tried:
majorvar=$(command --version | head | awk -F "." '{print $1}')
AC_DEFINE_UNQUOTED([myVar],["$majorvar"],[description])
Here 'command' appeared in the resulting config.h file,
and
AC_DEFINE([myVar],0,[descr])
AC_SUBST([myVar],[$(command --version | head | awk -F "." '{print $1}')])
where the value set by the define (0) is appearing in the result.
Is it possible to obtain the result that I want, i.e.
#define myVar 5
or am I going about this in the wrong way, and if so, how should I go about it?
BR/Patrik
It took me several months to wrap my head around the concept of macros.
As the name already suggests, macros are 'bigger' than scripts. I hope this example can help.
AC_DEFUN([VERSION],
[$(test -f $srcdir/VERSION && cat $srcdir/VERSION || AS_ECHO(["$PACKAGE_VERSION"]); )])
am_stable_version_rx='[[1-9]\.[0-9]+(\.[0-9]+)?]'
am_beta_version_rx="[$am_stable_version_rx[bdfhjlnprtvxz]]"
am_release_type=`AS_ECHO(["$VERSION"]) | LC_ALL=C awk ["
/^$am_stable_version_rx$/ { print \"stable\"; exit(0); }
/^$am_beta_version_rx$/ { print \"beta version\"; exit(0); }
{ print \"development snapshot\"; }"]`
test "$am_release_type" = stable || cat <<EOF
WARNING: You are about to use a $am_release_type of AC_PACKAGE_NAME VERSION.
WARNING: It might easily suffer from new bugs or regressions.
WARNING: You are strongly advised not to use it in production code.
Please report bugs, problems and feedback to <AC_PACKAGE_BUGREPORT>.
EOF
The code snippet defines:
a macro (via AC_DEFUN) named VERSION
script variables holding regexes for stable and beta version numbers
a script variable assignment containing the AS_ECHO macro, which expands the VERSION macro while the configure script is being generated.
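Coming back to the original question, a minimal sketch of the usual pattern (the mycmd name and the MYCMD_MAJOR symbol are placeholders, not taken from the poster's project) is to compute the value with plain shell code in configure.ac and pass it to AC_DEFINE_UNQUOTED, which does expand shell variables:
# assumes mycmd prints something like "5.4.0" on the first line of --version
majorver=`mycmd --version | head -n 1 | awk -F. '{print $1}'`
AC_DEFINE_UNQUOTED([MYCMD_MAJOR], [$majorver], [Major version of mycmd])
With a 5.x tool this should end up as #define MYCMD_MAJOR 5 in config.h.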
I have been struggling for like two hours to figure out the issue with this script of mine. When I use it statically, without any variables, it fetches the grep results, but when I put the variables in, I keep getting errors and no results. I believe there is something wrong with the special-character escaping that I can't get right.
I have the file FLAGS_IN with this structure:
automotive_susan_s dataset1 -funsafe-math-optimizations -fno-guess-branch-probability -fno-ivopts -fno-inline-functions -fno-omit-frame-pointer -fselective-scheduling -fno-inline-small-functions -fno-tree-pre -ftracer -fno-move-loop-invariants
which holds the flags for $i (the application name) and dataset$j, structured as above. Could anyone help me figure out what is wrong with this part of my sh script?
GCC_OPT="-O3"
OPT_FLAGS=$("grep $i\ dataset$j\ $FLAGS_IN|sed\ s/$i\ dataset$j//g")
echo $GCC_OPT
echo $OPT_FLAGS
echo "found the validated flags, they are \n $GCC_OPT $OPT_FLAGS"
make -f Makefile.gcc -j4 CCC_OPTS="$GCC_OPT\ $OPT_FLAG"
You're a little overzealous with your quoting. Also, it's a little easier to use cut here than sed.
OPT_FLAGS=$(grep "$i dataset$j" FLAGS_IN | cut -d " " -f3-)
and
make -f Makefile.gcc -j4 CCC_OPTS="$GCC_OPT $OPT_FLAGS"
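For example, with the FLAGS_IN line shown in the question and i=automotive_susan_s, j=1, the grep | cut pipeline above should give something like this (a sketch, not run here):
$ OPT_FLAGS=$(grep "$i dataset$j" FLAGS_IN | cut -d " " -f3-)
$ echo "$OPT_FLAGS"
-funsafe-math-optimizations -fno-guess-branch-probability -fno-ivopts -fno-inline-functions ...
i.e. everything from the third space-separated field onward.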
Is this what you're trying to do:
$ cat file
foo
automotive_susan_s dataset1 -funsafe-math-optimizations ...
bar
$ i=automotive_susan_s
$ j=1
$ sed -n "s/$i dataset$j//p" file
-funsafe-math-optimizations ...
Let's say that during your workday you repeatedly encounter the following form of columnized output from some command in bash (in my case from executing svn st in my Rails working directory):
? changes.patch
M app/models/superman.rb
A app/models/superwoman.rb
In order to work with the output of your command - in this case the filenames - some sort of parsing is required so that the second column can be used as input for the next command.
What I've been doing is to use awk to get at the second column; e.g., when I want to remove all files (not that that's a typical use case :), I would do:
svn st | awk '{print $2}' | xargs rm
Since I type this a lot, a natural question is: is there a shorter (thus cooler) way of accomplishing this in bash?
NOTE:
What I am asking is essentially a shell command question even though my concrete example is about my svn workflow. If you feel that workflow is silly and suggest an alternative approach, I probably won't vote you down, but others might, since the question here is really how to get the n-th column of command output in bash, in the shortest manner possible. Thanks :)
You can use cut to access the second field:
cut -f2
Edit:
Sorry, didn't realise that SVN doesn't use tabs in its output, so that's a bit useless. You can tailor cut to the output but it's a bit fragile - something like cut -c 10- would work, but the exact value will depend on your setup.
Another option is something like: sed 's/.\s\+//'
To accomplish the same thing as:
svn st | awk '{print $2}' | xargs rm
using only bash you can use:
svn st | while read a b; do rm "$b"; done
Granted, it's not shorter, but it's a bit more efficient and it handles whitespace in your filenames correctly.
I found myself in the same situation and ended up adding these aliases to my .profile file:
alias c1="awk '{print \$1}'"
alias c2="awk '{print \$2}'"
alias c3="awk '{print \$3}'"
alias c4="awk '{print \$4}'"
alias c5="awk '{print \$5}'"
alias c6="awk '{print \$6}'"
alias c7="awk '{print \$7}'"
alias c8="awk '{print \$8}'"
alias c9="awk '{print \$9}'"
Which allows me to write things like this:
svn st | c2 | xargs rm
Try the zsh. It supports global aliases, so you can define X in your .zshrc to be
alias -g X="| cut -d' ' -f2"
then you can do:
cat file X
You can take it one step further and define it for the nth column:
alias -g X2="| cut -d' ' -f2"
alias -g X1="| cut -d' ' -f1"
alias -g X3="| cut -d' ' -f3"
so that, for example, cat file X3 will output the third column of the file "file". You can do this for grep output or less output, too. This is very handy and a killer feature of the zsh.
You can go one step further and define D to be:
alias -g D="|xargs rm"
Now you can type:
cat file X1 D
to delete all files mentioned in the first column of file "file".
If you know bash, zsh is not much of a change except for some new features.
HTH Chris
Because you seem to be unfamiliar with scripts, here is an example.
#!/bin/sh
# usage: svn st | x 2 | xargs rm
col=$1
shift
awk -v col="$col" '{print $col}' "${@--}"
If you save this in ~/bin/x and make sure ~/bin is in your PATH (now that is something you can and should put in your .bashrc) you have the shortest possible command for generally extracting column n: x n.
The script should do proper error checking and bail if invoked with a non-numeric argument or the incorrect number of arguments, etc; but expanding on this bare-bones essential version will be in unit 102.
Maybe you will want to extend the script to allow a different column delimiter. Awk by default parses input into fields on whitespace; to use a different delimiter, use -F ':' where : is the new delimiter. Implementing this as an option to the script makes it slightly longer, so I'm leaving that as an exercise for the reader.
Usage
Given a file file:
1 2 3
4 5 6
You can either pass it via stdin (using a useless cat merely as a placeholder for something more useful):
$ cat file | sh script.sh 2
2
5
Or provide it as an argument to the script:
$ sh script.sh 2 file
2
5
Here, sh script.sh is assuming that the script is saved as script.sh in the current directory; if you save it with a more useful name somewhere in your PATH and mark it executable, as in the instructions above, obviously use the useful name instead (and no sh).
It looks like you already have a solution. To make things easier, why not just put your command in a bash script (with a short name) and just run that instead of typing out that 'long' command every time?
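For example, a two-line wrapper script (the svnrm name here is made up):
#!/bin/sh
# delete every file named in the second column of `svn st`
svn st | awk '{print $2}' | xargs rm
Save it somewhere on your PATH, chmod +x it, and the whole operation becomes one short command.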
If you are ok with manually selecting the column, you could be very fast using pick:
svn st | pick | xargs rm
Just go to any cell of the 2nd column, press c and then hit enter
Note that the file path does not have to be in the second column of svn st output. For example, if you modify a file and also modify one of its properties, the path will be in the third column.
See possible output examples in:
svn help st
Example output:
M wc/bar.c
A + wc/qax.c
I suggest cutting the first 8 characters with:
svn st | cut -c8- | while read FILE; do echo whatever with "$FILE"; done
If you want to be 100% sure, and deal with fancy filenames (with whitespace at the end, for example), you need to parse the XML output:
svn st --xml | grep -o 'path=".*"' | sed 's/^path="//; s/"$//'
Of course you may want to use some real XML parser instead of grep/sed.
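For example, if xmllint from libxml2 is available, something along these lines (a sketch, not verified against every svn version's XML output) pulls the path attributes directly:
svn st --xml | xmllint --xpath '//entry/@path' -
though you would still need to strip the path="..." wrapping from each match.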