How to run regression using makefile - shell

I use the tcsh shell. I want to compile once with VCS and then run multiple testcases with the resulting simv. Previously, for a single test, the commands were defined as constants:
VCS = vcs -sverilog -timescale=1ns/1ps \ +acc +vpi ..
SIMV = ./simv +UVM_VERBOSITY=$(UVM_VERBOSITY) +UVM_TESTNAME=$(TESTNAME) ${vcs_waves_cmd} -l $(TESTNAME).log
Now I have to replace $(TESTNAME) by looping over an array. I tried the following after switching to bash, but that causes other failures, such as make clean not working.
TESTS = ext_reg_write_read reg_write_read
regress: $(TESTS)
	$(VCS)\
	for t in $(TESTS); do \
	./simv +UVM_VERBOSITY=$(UVM_VERBOSITY) +UVM_TESTNAME=$$t ${vcs_waves_cmd} -l $$t.log; \
	done
Also, I would like to add the shell directive export SHELL = /bin/csh -f.
My question is similar to this one: Implementing `make check` or `make test`.
I used @J. C. Salomon's answer there to write this code.

The problem was with export SHELL = /bin/csh -f, which I had changed to export SHELL = /bin/bash -f.
What finally works is SHELL := /bin/bash, as answered in "How can I use Bash syntax in Makefile targets?" by @derobert.
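Putting the answer to work, a minimal sketch of the whole makefile might look like this (the VCS options, UVM variables, and clean file list are placeholders based on the question, not a verified flow):

```make
# Use bash for all recipe lines, even if the login shell is tcsh.
SHELL := /bin/bash

TESTS = ext_reg_write_read reg_write_read
VCS   = vcs -sverilog -timescale=1ns/1ps +acc +vpi

# Compile once, then run every test against the same simv.
regress:
	$(VCS)
	for t in $(TESTS); do \
		./simv +UVM_VERBOSITY=$(UVM_VERBOSITY) +UVM_TESTNAME=$$t -l $$t.log; \
	done

clean:
	rm -rf simv csrc *.log
```

Note that TESTS is a plain variable looped over inside one recipe, so make regress runs the tests sequentially after a single compile; listing the tests as prerequisites of regress, as in the original attempt, would instead make make look for targets named after each test.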

How to create and set a system-wide environmental variable in Ubuntu through makefile?

I'm trying to create a system-wide environment variable TEST_ENV_ONE.
I want to be able to use it right after running the makefile, without logging out, and also after a reboot. So I'm trying to reproduce the manual steps: export the variable and write it to /etc/environment.
I wrote a makefile like this, but it doesn't work:
var_value := some_string
TEST_ENV_ONE := $(var_value)
vars:
	$(shell export TEST_ENV_ONE=$(var_value))
	grep 'TEST_ENV_ONE=' /etc/environment || "TEST_ENV_ONE=\"$(var_value)\"" | sudo tee -a /etc/environment > /dev/null
What you want to do is basically impossible on a POSIX system as you've stated it. The environment of a process is inherited from its parent (the process that started it) and once a process is running, its environment cannot ever be changed externally. That includes by its children, or by modifying some other file.
You can, by modifying /etc/environment, change the environment for new logins but this will not change the environment of any existing shell or its child.
That being said, your makefile also has a number of problems:
$(shell export TEST_ENV_ONE=$(var_value))
This is doubly-not right. First, it's an anti-pattern to use the make $(shell ...) function inside a recipe script. Recipes are already shell scripts so it's useless (and can lead to unexpected behavior) to use $(shell ...) with them.
Second, this is a no-op: what this does is start a shell, tell the shell to set an environment variable and export it, then the shell exits. When the shell exits, all the changes to its environment are lost (obviously, because it exited!) So this does nothing.
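The throwaway nature of the child's environment is easy to demonstrate (a minimal sketch; TEST_ENV_ONE stands in for any variable):

```shell
# A child shell exports a variable, then exits: the parent never sees it.
unset TEST_ENV_ONE
sh -c 'export TEST_ENV_ONE=some_string'   # change lives and dies in the child
echo "after child: '${TEST_ENV_ONE:-}'"   # prints: after child: ''
```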
Next:
grep 'TEST_ENV_ONE=' /etc/environment || "TEST_ENV_ONE=\"$(var_value)\"" | sudo tee -a /etc/environment > /dev/null
This does nothing because the statement "TEST_ENV_ONE=\"$(var_value)\"" sets an environment variable but generates no output, so there's no input to the sudo tee command and nothing happens. I expect you forgot an echo command here:
grep 'TEST_ENV_ONE=' /etc/environment || echo TEST_ENV_ONE=\"$(var_value)\" | sudo tee -a /etc/environment > /dev/null
However as I mention above, modifying /etc/environment will only take effect for new logins to the system, it won't modify any existing login or shell.
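The corrected grep || echo line can be exercised against a scratch file instead of /etc/environment (a sketch; mktemp is used so nothing system-wide is touched):

```shell
# Append the definition only if it is not already present; a second run is a no-op.
envfile=$(mktemp)
var_value=some_string
grep -q 'TEST_ENV_ONE=' "$envfile" || echo "TEST_ENV_ONE=\"$var_value\"" >> "$envfile"
grep -q 'TEST_ENV_ONE=' "$envfile" || echo "TEST_ENV_ONE=\"$var_value\"" >> "$envfile"
cat "$envfile"   # prints TEST_ENV_ONE="some_string" exactly once
rm -f "$envfile"
```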

Problem handling environment variable when launching terminal from bash script

The following script is called with an environment variable set.
I need to launch a terminal and, inside that terminal, read that variable from another script (script.sh).
xfce4-terminal -x sh -c \
"export VAR='${VAR}'
/home/usr/scripts/script.sh"
It works, but not when VAR contains single quotes.
I also feel there must be a better way to pass an environment variable to the terminal, but I don't know how.
I'd really appreciate any kind of help, and I'm sorry for my English.
One of the intended features of the environment is that you can add to it, but you never remove things from it. Add VAR to the current environment, and it will be inherited by xfce4-terminal and any process started by that terminal.
export VAR
xfce4-terminal -x sh -c /home/usr/scripts/script.sh
If you don't want it in the current environment, only in the new terminal's, then use a per-command assignment.
VAR="$VAR" xfce4-terminal -x sh -c /home/usr/scripts/script.sh
This avoids any fragile dynamic script construction like you are contending with.
Since xfce4-terminal appears not to fork a new process itself, I would pass the desired value as an argument to sh.
xfce4-terminal -x sh -c 'VAR="$1" /home/usr/scripts/script.sh' _ "$VAR"
The argument to -c is still a fixed string rather than one generated by interpolating the value of $VAR.
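The difference shows up as soon as the value contains a single quote (a sketch with plain sh -c standing in for xfce4-terminal -x sh -c):

```shell
VAR="it's tricky"

# Fragile: splicing $VAR into the script text breaks on the embedded quote.
# Robust: the script body stays a fixed string and the value arrives as $1,
# expanded safely by the inner shell. The '_' fills in $0.
sh -c 'printf "%s\n" "$1"' _ "$VAR"   # prints: it's tricky
```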

Cron doesn't accept bash syntax

I have a bashscript that I'm running with crontab. Unfortunately, a script that works fine when run manually fails with the error:
Syntax error: "(" unexpected (expecting "}")
Where the line in question is line 22 which is:
declare -a PREV_TOTAL=( $(for i in ${range[@]}; do echo 0; done) )
In the larger context:
TOTAL_CPU_USAGE=0
TOTAL_CPU=$(grep -c ^processor /proc/cpuinfo) #set number of CPUs to check for
declare -a 'range=({'"0..$TOTAL_CPU"'})'
let "TOTAL_CPU=$TOTAL_CPU - 1"
#declare array of size TOTAL_CPU to store values (eg. 8 cpus makes arrays of size 8)
declare -a PREV_TOTAL=( $(for i in ${range[@]}; do echo 0; done) )
declare -a PREV_IDLE=( $(for i in ${range[@]}; do echo 0; done) )
This works just fine when run manually, but I don't understand what I'm doing wrong that makes cron give this error. If you know, I'd be very appreciative. Thanks.
EDIT: My crontab looks like this:
# m h dom mon dow command
SHELL=/bin/bash
@reboot cd /home/ubuntu/waste-cloud-computing/probe && probe.sh >> /var/log/somelogfile.log 2>&1
And I access it with sudo crontab -e. I'm still getting the issue while providing the SHELL variable.
EDIT 1: Thanks to some help, I got past the syntax issues by ensuring the shell was bash. Now I get the error /bin/bash: probe.bash: command not found. I assume it's some kind of PATH issue, but which bash returns /bin/bash, so that seems normal to me. Maybe someone knows what's up?
cron jobs are run by sh by default, not bash. If you are using ubuntu/vixiecron, you can set the SHELL env variable at the top of the crontab to make cron run the commands in your crontab with bash.
SHELL=/bin/bash
If the script you want to be run is a bash script, make sure you have a shebang at the first line:
#!/bin/bash
Also note that there will be other potential troubleshooting steps if your scripts depend on a particular user's profile, env vars, etc. depending on which crontab you are editing.
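The failure is easy to reproduce outside cron, since the script's array syntax is bash-only. A sketch (TOTAL_CPU is hard-coded to 4, and seq replaces the question's declare brace-range trick for brevity):

```shell
# bash accepts the array assignment; a POSIX sh, which cron uses by default,
# rejects the '(' with exactly the reported syntax error.
bash -c '
  TOTAL_CPU=4
  declare -a PREV_TOTAL=( $(for i in $(seq 1 $TOTAL_CPU); do echo 0; done) )
  echo "${#PREV_TOTAL[@]}"
'   # prints: 4
```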
Thanks to the help of the people here I found my issue was not syntax but rather the use of sh over bash. This was fixed by setting the crontab this way so future users can see:
# m h dom mon dow command
SHELL=/bin/bash
@reboot cd /home/ubuntu/waste-cloud-computing/probe && ./probe.sh >> /var/log/somelogfile.log 2>&1
The key points are the SHELL variable being set and the ./ before running the script.
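The effect of the ./ prefix can be checked in a scratch directory (a sketch; probe.sh here is a stand-in for the real script):

```shell
# cron's PATH does not include the job's working directory, so a bare
# 'probe.sh' is generally not found; an explicit path always works.
dir=$(mktemp -d)
cd "$dir"
printf '#!/bin/bash\necho ok\n' > probe.sh
chmod +x probe.sh
./probe.sh   # prints: ok
cd /
rm -rf "$dir"
```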

GNU Parallel in a bash script with "export -f <func>" fails with "command not found" when run from cron

My script works if I run it interactively on command shell:
$ cat ndmpcopy_cron_parallel_svlinf05.bash
#!/usr/software/bin/bash
ndmpcopy_cron_parallel() {
timestamp=`date +%Y%m%d-%H%M`
LOG=/x/eng/itarchives/ndmpcopylogs/05_$1/ndmpcopy_status
TSLOG=${LOG}_$timestamp
src_filer='svlinf05'
src_account='ndmp'
src_passwd='src_passwd'
dst_svm='svlinfsrc'
dst_account='vsadmin-backup'
dst_passwd='dst_passwd'
host=`hostname`
echo $host
ssh -l root $src_filer "priv set -q diag ; ndmpcopy -sa $src_account:$src_passwd -da $dst_account:$dst_passwd -i $src_filer.eng.netapp.com:/vol/$1 10.56.10.161:/$dst_svm/$1" | tee -a $TSLOG
echo "ndmpcopy Completed: `date` "
}
export -f ndmpcopy_cron_parallel
/u/jsung/bin/parallel -j 0 --wd . --env ndmpcopy_cron_parallel --eta ndmpcopy_cron_parallel ::: local
But the script fails and complains that the exported function, ndmpcopy_cron_parallel, cannot be found:
$ crontab -l
40 0,2,4,6,8,10,12,14,16,18,20,22 * * * /u/jsung/bin/ndmpcopy_cron_parallel_svlinf05.bash
Error:
Subject: Cron <jsung@cycrh6svl18> /u/jsung/bin/ndmpcopy_cron_parallel_svlinf05.bash
Computers / CPU cores / Max jobs to run
1:local / 2 / 1
Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
ETA: 0s Left: 1 AVG: 0.00s local:1/0/100%/0.0s /bin/bash: ndmpcopy_cron_parallel: command not found
ETA: 0s Left: 0 AVG: 0.00s local:0/1/100%/0.0s
I've been searching around and trying different things for a while. I even tweaked $PATH. Not sure what I missed. Can we embed GNU Parallel in a bash script and put it in crontab at all?
Congratulations. You've been shell-shocked.
You have two versions of bash installed on your system:
/bin/bash v4.1.2 An older unpatched bash
/usr/software/bin/bash v4.2.53 A middle-aged bash, patched against Shellshock
The last number in the bash version triple is the patch-level. The Shellshock bug involved a number of patches, but the relevant one is 4.1.14, 4.2.50 and 4.3.27. That patch changes the format of exported functions, with the consequence that:
If you export a function from a pre-shellshock bash to a post-shellshock bash, you will see a warning and the exported function will be rejected.
If you export a function from a post-shellshock bash to a pre-shellshock bash, the function export format won't be recognized so it will be silently ignored.
In both cases, the function will not be exported. In other words, you can only export a function between two bash versions if they have both been shellshock patched, or if neither have been shellshock patched.
Your script clearly indicates which bash to use to run it: the one in /usr/software/bin/bash, which has been patched. The script invokes GNU parallel, and GNU parallel then has to start up one or more subshells in order to run the commands. GNU parallel uses the value of the SHELL environment variable to find the shell it should use.
I suppose that in your user command shell environment, SHELL is set to /usr/software/bin/bash, and that in the environment in which cron executes, it is set to /bin/bash. If that's the case, you'll have no problems exporting the function when you try it from a bash prompt, but in the cron environment you will end up trying to export a function from a post-shellshock bash to a pre-shellshock bash, and as described above the result is that the export is silently ignored. Hence the error.
To get around the problem, you need to ensure that the bash used to run the script is the same as the bash used by GNU parallel. You could, for example, explicitly set SHELL prior to invoking GNU parallel.
export SHELL=/usr/software/bin/bash
# ...
/u/jsung/bin/parallel -j 0 --wd . --env ndmpcopy_cron_parallel --eta ndmpcopy_cron_parallel ::: local
Or you could just set it for the parallel command itself:
SHELL=/usr/software/bin/bash /u/jsung/bin/parallel -j 0 --wd . --env ndmpcopy_cron_parallel --eta ndmpcopy_cron_parallel ::: local
As rici says, the problem is most likely due to shellshock. Shellshock did not affect GNU Parallel, but the patches to fix shellshock broke transferring of functions using '--env'.
GNU Parallel is catching up with the shellshock patches in Bash: Bash has used BASH_FUNC_myfunc() as the variable name for exporting functions, but more recent versions use BASH_FUNC_myfunc%%. So GNU Parallel needs to know this when transferring a function.
The '()' version is fixed in 20141022, and the '%%' version is expected to be fixed in 20150122. They should work in any combination. So your remote Bash does not need to be patched the same way as your local Bash: GNU Parallel will "do the right thing", and there is no need to change your own code.
You should feel free to test out the git version in which both are fixed: git clone git://git.savannah.gnu.org/parallel.git
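The export format under discussion can be inspected directly (a sketch; the exact variable name depends on your bash's patch level, BASH_FUNC_myfunc() on earlier patched versions and BASH_FUNC_myfunc%% on recent ones):

```shell
# Export a function, then look at how bash serializes it into the environment.
# A child bash only imports the function if it recognizes this variable name.
bash -c 'myfunc() { echo hi; }; export -f myfunc; env | grep "^BASH_FUNC_myfunc"'
```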

Simple Bash Script Error and Advice - Saving Environment Variables in Linux

I am working on a project hosted on Heroku. The app is hard-coded to use Amazon S3 and looks for the keys in environment variables. This is what I wrote after looking at some examples, and I am not sure why it's not working.
echo $1
if [ "$1" != "unset" ]; then
echo "set"
export AMAZON_ACCESS_KEY_ID=XXXXXXXXXXXX
export AMAZON_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
export S3_BUCKET_NAME=XXXXXXXXX
else
echo "unset"
export AMAZON_ACCESS_KEY_ID=''
export AMAZON_SECRET_ACCESS_KEY=''
export S3_BUCKET_NAME=''
fi
When running the script, it goes into the set branch. But afterwards, inspecting with echo $AMAZON_ACCESS_KEY_ID gives ''.
I am not sure what is causing the issue. I would be interested in...
A fix for this...
An easier way to extract Heroku config variables and add them to the env.
You need to source the script, not run it as a child. If you run the script directly, its environment disappears when it ends. Sourcing the script causes it to be executed in the current environment. help source for more information.
Example:
$ VAR=old_value
$ cat script.sh
#!/bin/bash
export VAR=new_value
$ ./script.sh
$ echo $VAR
old_value
$ source script.sh
$ echo $VAR
new_value
Scripts executed with source don't need to be executable nor do they need the "shebang" line (#!/bin/bash) because they are not run as separate processes. In fact, it is probably a good idea to not make them executable in order to avoid them being run as commands, since that won't work as expected.
