Run a string as a command within a Bash script

I have a Bash script that builds a string to run as a command.
Script:
#! /bin/bash
matchdir="/home/joao/robocup/runner_workdir/matches/testmatch/"
teamAComm="`pwd`/a.sh"
teamBComm="`pwd`/b.sh"
include="`pwd`/server_official.conf"
serverbin='/usr/local/bin/rcssserver'
cd $matchdir
illcommando="$serverbin include='$include' server::team_l_start = '${teamAComm}' server::team_r_start = '${teamBComm}' CSVSaver::save='true' CSVSaver::filename = 'out.csv'"
echo "running: $illcommando"
# $illcommando > server-output.log 2> server-error.log
$illcommando
which does not seem to supply the arguments correctly to the $serverbin.
Script output:
running: /usr/local/bin/rcssserver include='/home/joao/robocup/runner_workdir/server_official.conf' server::team_l_start = '/home/joao/robocup/runner_workdir/a.sh' server::team_r_start = '/home/joao/robocup/runner_workdir/b.sh' CSVSaver::save='true' CSVSaver::filename = 'out.csv'
rcssserver-14.0.1
Copyright (C) 1995, 1996, 1997, 1998, 1999 Electrotechnical Laboratory.
2000 - 2009 RoboCup Soccer Simulator Maintenance Group.
Usage: /usr/local/bin/rcssserver [[-[-]]namespace::option=value]
[[-[-]][namespace::]help]
[[-[-]]include=file]
Options:
help
display generic help
include=file
parse the specified configuration file. Configuration files
have the same format as the command line options. The
configuration file specified will be parsed before all
subsequent options.
server::help
display detailed help for the "server" module
player::help
display detailed help for the "player" module
CSVSaver::help
display detailed help for the "CSVSaver" module
CSVSaver Options:
CSVSaver::save=<on|off|true|false|1|0|>
If save is on/true, then the saver will attempt to save the
results to the database. Otherwise it will do nothing.
current value: false
CSVSaver::filename='<STRING>'
The file to save the results to. If this file does not
exist it will be created. If the file does exist, the results
will be appended to the end.
current value: 'out.csv'
If I just paste the command /usr/local/bin/rcssserver include='/home/joao/robocup/runner_workdir/server_official.conf' server::team_l_start = '/home/joao/robocup/runner_workdir/a.sh' server::team_r_start = '/home/joao/robocup/runner_workdir/b.sh' CSVSaver::save='true' CSVSaver::filename = 'out.csv' (the output after "running: ") into a terminal, it works fine.

You can use eval to execute a string:
eval $illcommando

your_command_string="..."
output=$(eval "$your_command_string")
echo "$output"

I usually place commands in $(commandStr). If that doesn't help, I find bash's debug mode great: run the script as bash -x script
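For example (myscript.sh is a hypothetical name):
bash -x ./myscript.sh 2> xtrace.log
The trace is written to stderr, so redirecting it to a file keeps it out of the script's normal output (any real error messages will land in the same file, though).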

Don't put your commands in variables; just run them directly:
matchdir="/home/joao/robocup/runner_workdir/matches/testmatch/"
PWD=$(pwd)
teamAComm="$PWD/a.sh"
teamBComm="$PWD/b.sh"
include="$PWD/server_official.conf"
serverbin='/usr/local/bin/rcssserver'
cd $matchdir
"$serverbin" include="$include" server::team_l_start="${teamAComm}" server::team_r_start="${teamBComm}" CSVSaver::save='true' CSVSaver::filename='out.csv'
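If you really do need to assemble the command ahead of time, one option (a sketch using the same variables as above) is a Bash array, which keeps each argument as a separate word without any eval:
serverargs=(
  include="$include"
  server::team_l_start="${teamAComm}"
  server::team_r_start="${teamBComm}"
  CSVSaver::save=true
  CSVSaver::filename=out.csv
)
# each array element is passed to rcssserver as exactly one argument
"$serverbin" "${serverargs[@]}"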

./me casts raise_dead()
I was looking for something like this, but I also needed to reuse the same string minus two parameters so I ended up with something like:
my_exe ()
{
mysql -sN -e "select $1 from heat.stack where heat.stack.name=\"$2\";"
}
This is something I use to monitor OpenStack Heat stack creation. In this case I expect two conditions: an action 'CREATE' and a status 'COMPLETE' on a stack named "Somestack".
To get those variables I can do something like:
ACTION=$(my_exe action Somestack)
STATUS=$(my_exe status Somestack)
if [[ "$ACTION" == "CREATE" ]] && [[ "$STATUS" == "COMPLETE" ]]
...
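A hypothetical polling loop built on the same helper (the stack name and the 30-second interval are just examples) could look like:
until [[ "$(my_exe action Somestack)" == "CREATE" && "$(my_exe status Somestack)" == "COMPLETE" ]]; do
  sleep 30   # wait before asking heat again
done
echo "Somestack is ready"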

Here is my gradle build script that executes strings stored in heredocs:
current_directory=$( realpath "." )
GENERATED=${current_directory}/"GENERATED"
build_gradle=$( realpath build.gradle )
## create it first, because .gitignore ignores this folder:
mkdir -p "$GENERATED"
COPY_BUILD_FILE=$( cat <<COPY_BUILD_FILE_HEREDOC
cp
$build_gradle
$GENERATED/build.gradle
COPY_BUILD_FILE_HEREDOC
)
$COPY_BUILD_FILE
GRADLE_COMMAND=$( cat <<GRADLE_COMMAND_HEREDOC
gradle run
--build-file
$GENERATED/build.gradle
--gradle-user-home
$GENERATED
--no-daemon
GRADLE_COMMAND_HEREDOC
)
$GRADLE_COMMAND
The lone ")" are kind of ugly. But I have no clue how to fix that asthetic aspect.

To see all commands that are being executed by the script, add the -x flag to your shebang line, and execute the script normally:
#! /bin/bash -x
matchdir="/home/joao/robocup/runner_workdir/matches/testmatch/"
teamAComm="`pwd`/a.sh"
teamBComm="`pwd`/b.sh"
include="`pwd`/server_official.conf"
serverbin='/usr/local/bin/rcssserver'
cd $matchdir
$serverbin include="$include" server::team_l_start="${teamAComm}" server::team_r_start="${teamBComm}" CSVSaver::save='true' CSVSaver::filename='out.csv'
Then, if you want to ignore the debug output, redirect stderr somewhere.
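If you want the trace kept separate from the script's real stderr, bash 4.1+ lets you point it at its own file descriptor (a sketch; the log path is just an example):
exec 5> /tmp/xtrace.log   # open fd 5 for the trace
BASH_XTRACEFD=5           # the -x trace now goes to fd 5 instead of stderr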

For me, echo XYZ_20200824.zip | grep -Eo '[[:digit:]]{4}[[:digit:]]{2}[[:digit:]]{2}'
was working fine, but I was unable to store the output of the command in a variable.
I had the same issue; I tried eval but didn't get any output.
Here is the answer to my problem:
cmd=$(echo XYZ_20200824.zip | grep -Eo '[[:digit:]]{4}[[:digit:]]{2}[[:digit:]]{2}')
echo $cmd
My output is now 20200824

How to return output of shell script into Jenkinsfile [duplicate]

I have something like this in a Jenkinsfile (Groovy) and I want to record the stdout and the exit code in a variable in order to use the information later.
sh "ls -l"
How can I do this, especially as it seems that you cannot really run any kind of Groovy code inside the Jenkinsfile?
The latest version of the pipeline sh step allows you to do the following:
// Git committer email
GIT_COMMIT_EMAIL = sh (
script: 'git --no-pager show -s --format=\'%ae\'',
returnStdout: true
).trim()
echo "Git committer email: ${GIT_COMMIT_EMAIL}"
Another feature is the returnStatus option.
// Test commit message for flags
BUILD_FULL = sh (
script: "git log -1 --pretty=%B | grep '\\[jenkins-full]'",
returnStatus: true
) == 0
echo "Build full flag: ${BUILD_FULL}"
These options were added based on this issue.
See official documentation for the sh command.
For declarative pipelines (see comments), you need to wrap the code in a script step:
script {
GIT_COMMIT_EMAIL = sh (
script: 'git --no-pager show -s --format=\'%ae\'',
returnStdout: true
).trim()
echo "Git committer email: ${GIT_COMMIT_EMAIL}"
}
Current Pipeline version natively supports returnStdout and returnStatus, which make it possible to get output or status from sh/bat steps.
An example:
def ret = sh(script: 'uname', returnStdout: true)
println ret
An official documentation.
The quick answer is this:
sh "ls -l > commandResult"
result = readFile('commandResult').trim()
I think there exists a feature request for getting the result of the sh step, but as far as I know there is currently no other option.
EDIT: JENKINS-26133
EDIT2: Not quite sure since which version, but the sh/bat steps can now return the standard output; simply:
def output = sh returnStdout: true, script: 'ls -l'
If you want to get the stdout AND know whether the command succeeded or not, just use returnStdout and wrap it in an exception handler:
scripted pipeline
try {
// Fails with non-zero exit if dir1 does not exist
def dir1 = sh(script:'ls -la dir1', returnStdout:true).trim()
} catch (Exception ex) {
println("Unable to read dir1: ${ex}")
}
output:
[Pipeline] sh
[Test-Pipeline] Running shell script
+ ls -la dir1
ls: cannot access dir1: No such file or directory
[Pipeline] echo
unable to read dir1: hudson.AbortException: script returned exit code 2
Unfortunately hudson.AbortException is missing any useful method to obtain that exit status, so if the actual value is required you'd need to parse it out of the message (ugh!)
Contrary to the Javadoc https://javadoc.jenkins-ci.org/hudson/AbortException.html the build is not failed when this exception is caught. It fails when it's not caught!
Update:
If you also want the STDERR output from the shell command, Jenkins unfortunately fails to properly support that common use-case. A 2017 ticket JENKINS-44930 is stuck in a state of opinionated ping-pong whilst making no progress towards a solution - please consider adding your upvote to it.
As to a solution now, there could be a couple of possible approaches:
a) Redirect STDERR to STDOUT with 2>&1
- but it's then up to you to parse that out of the main output, and you won't get the output at all if the command failed, because you're in the exception handler.
b) Redirect STDERR to a temporary file (whose name you prepare earlier) with 2>filename (but remember to clean up the file afterwards) - i.e. the main code becomes:
def stderrfile = 'stderr.out'
try {
def dir1 = sh(script:"ls -la dir1 2>${stderrfile}", returnStdout:true).trim()
} catch (Exception ex) {
def errmsg = readFile(stderrfile)
println("Unable to read dir1: ${ex} - ${errmsg}")
}
c) Go the other way: set returnStatus=true instead, dispense with the exception handler and always capture output to a file, i.e.:
def outfile = 'stdout.out'
def status = sh(script:"ls -la dir1 >${outfile} 2>&1", returnStatus:true)
def output = readFile(outfile).trim()
if (status == 0) {
// output is directory listing from stdout
} else {
// output is error message from stderr
}
Caveat: the above code is Unix/Linux-specific - Windows requires completely different shell commands.
Here is a sample case, which I believe will make sense:
node('master'){
stage('stage1'){
def commit = sh (returnStdout: true, script: '''echo hi
echo bye | grep -o "e"
date
echo lol''').split()
echo "${commit[-1]} "
}
}
For those who need to use the output in subsequent shell commands, rather than groovy, something like this example could be done:
stage('Show Files') {
environment {
MY_FILES = sh(script: 'cd mydir && ls -l', returnStdout: true)
}
steps {
sh '''
echo "$MY_FILES"
'''
}
}
I found the examples on code maven to be quite useful.
All the above methods will work, but to use the variable as an environment variable inside your code you need to export it first.
script{
sh " 'shell command here' > command"
command_var = readFile('command').trim()
sh "export command_var=$command_var"
}
Replace the shell command with the command of your choice. Now, if you are using Python code, you can just call os.getenv("command_var"), which will return the output of the shell command executed previously.
How to read a shell variable in Groovy / how to assign a shell return value to a Groovy variable.
Requirement: open a text file, read the lines using shell, store the values in Groovy, and get the parameters from each line.
Here ',' is the delimiter.
Ex: releaseModule.txt
./APP_TSBASE/app/team/i-home/deployments/ip-cc.war/cs_workflowReport.jar,configurable-wf-report,94,23crb1,artifact
./APP_TSBASE/app/team/i-home/deployments/ip.war/cs_workflowReport.jar,configurable-temppweb-report,394,rvu3crb1,artifact
========================
Here we want to get the module name (2nd parameter, configurable-wf-report), the build number (3rd parameter, 94) and the commit id (4th parameter, 23crb1).
def module = sh(script: """awk -F',' '{ print \$2 "," \$3 "," \$4 }' releaseModules.txt | sort -u """, returnStdout: true).trim()
echo module
List lines = module.split( '\n' ).findAll { !it.startsWith( ',' ) }
def buildid
def Modname
lines.each {
List det1 = it.split(',')
buildid=det1[1].trim()
Modname = det1[0].trim()
tag= det1[2].trim()
echo Modname
echo buildid
echo tag
}
If you don't have a single sh command but a block of sh commands, returnStdout won't work.
I had a similar issue where I applied something which is not a clean way of doing this, but eventually it worked and served the purpose.
Solution -
In the shell block, echo the value and write it to some file.
Outside the shell block and inside the script block, read this file, trim it and assign it to any local/params/environment variable.
Example -
steps {
script {
sh '''
echo $PATH>path.txt
# I am using '>' because I want to create a new file every time to get the newest value of PATH
'''
path = readFile(file: 'path.txt')
path = path.trim() //local groovy variable assignment
//One can assign these values to env and params as below -
env.PATH = path //if you want to assign it to env var
params.PATH = path //if you want to assign it to params var
}
}
The easiest way is this:
my_var=`echo 2`
echo $my_var
Output:
2
Note that this is not a simple single quote; it is a backquote ( ` ).
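The same thing with the more common $( ) form, which is easier to read and to nest:
my_var=$(echo 2)
echo "$my_var"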

Nextflow: Missing output file(s) expected by process

I'm currently making a start on using Nextflow to develop a bioinformatics pipeline. Below, I've created a params.files variable which contains my FASTQ files, and then input this into the fasta_files channel.
The process trimming and its script take this channel as input; ideally, I would output all the $sample".trimmed.fq.gz files into the output channel, trimmed_channel. However, when I run this script, I get the following error:
Missing output file(s) `trimmed_files` expected by process `trimming` (1)
The nextflow script I'm trying to run is:
#! /usr/bin/env nextflow
params.files = files("$baseDir/FASTQ/*.fastq.gz")
println "fastq files for trimming:$params.files"
fasta_files = Channel.fromPath(params.files)
println "files in the fasta channel: $fasta_files"
process trimming {
input:
file fasta_file from fasta_files
output:
path trimmed_files into trimmed_channel
// the shell script to be run:
"""
#!/usr/bin/env bash
mkdir trimming_report
cd /home/usr/Nextflow
#Finding and renaming my FASTQ files
for file in FASTQ/*.fastq.gz; do
[ -f "\$file" ] || continue
name=\$(echo "\$file" | awk -F'[/]' '{ print \$2 }') #renaming fastq files.
sample=\$(echo "\$name" | awk -F'[.]' '{ print \$1 }') #renaming fastq files.
echo "Found" "\$name" "from:" "\$sample"
if [ ! -e FASTQ/"\$sample"_trimmed.fq.gz ]; then
trim_galore -j 8 "\$file" -o FASTQ #trim the files
mv "\$file"_trimming_report.txt trimming_report #moves to the directory trimming report
else
echo ""\$sample".trimmed.fq.gz exists skipping trim galore"
fi
done
trimmed_files="FASTQ/*_trimmed.fq.gz"
echo \$trimmed_files
"""
}
The script in the process works fine. However, I'm wondering if I'm misunderstanding or missing something obvious. If I've forgotten to include something, please let me know; any help is appreciated!
Nextflow does not export the variable trimmed_files to its own scope unless you tell it to do so using the env output qualifier; however, doing it that way would not be very idiomatic.
Since you know the pattern of your output files ("FASTQ/*_trimmed.fq.gz"), simply pass that pattern as output:
path "FASTQ/*_trimmed.fq.gz" into trimmed_channel
Some things you do, but probably want to avoid:
Changing directory inside your NF process: don't do this, it entirely breaks the whole concept of Nextflow's work folder setup.
Writing a bash loop inside a NF process: if you set up your channels correctly, there should only be 1 task per spawned process.
Pallie has already provided some sound advice and, of course, the right answer, which is: environment variables must be declared using the env qualifier.
However, given your script definition, I think there might be some misunderstanding about how best to skip the execution of previously generated results. The cache directive is enabled by default and when the pipeline is launched with the -resume option, additional attempts to execute a process using the same set of inputs, will cause the process execution to be skipped and will produce the stored data as the actual results.
This example uses the Nextflow DSL 2 for my convenience, but is not strictly required:
nextflow.enable.dsl=2
params.fastq_files = "${baseDir}/FASTQ/*.fastq.gz"
params.publish_dir = "./results"
process trim_galore {
tag { "${sample}:${fastq_file}" }
publishDir "${params.publish_dir}/TrimGalore", saveAs: { fn ->
fn.endsWith('.txt') ? "trimming_reports/${fn}" : fn
}
cpus 8
input:
tuple val(sample), path(fastq_file)
output:
tuple val(sample), path('*_trimmed.fq.gz'), emit: trimmed_fastq_files
path "${fastq_file}_trimming_report.txt", emit: trimming_report
"""
trim_galore \\
-j ${task.cpus} \\
"${fastq_file}"
"""
}
workflow {
Channel.fromPath( params.fastq_files )
| map { tuple( it.getSimpleName(), it ) }
| set { sample_fastq_files }
results = trim_galore( sample_fastq_files )
results.trimmed_fastq_files.view()
}
Run using:
nextflow run script.nf \
-ansi-log false \
--fastq_files '/home/usr/Nextflow/FASTQ/*.fastq.gz'

Detect empty command

Consider this PS1
PS1='\n${_:+$? }$ '
Here is the result of a few commands
$ [ 2 = 2 ]
0 $ [ 2 = 3 ]
1 $
1 $
The first line shows no status as expected, and the next two lines show the
correct exit code. However on line 3 only Enter was pressed, so I would like the
status to go away, like line 1. How can I do this?
Here's a funny, very simple possibility: it uses the \# escape sequence of PS1 together with parameter expansions (and the way Bash expands its prompt).
The escape sequence \# expands to the command number of the command to be executed. This is incremented each time a command has actually been executed. Try it:
$ PS1='\# $ '
2 $ echo hello
hello
3 $ # this is a comment
3 $
3 $ echo hello
hello
4 $
Now, each time a prompt is to be displayed, Bash first expands the escape sequences found in PS1, then (provided the shell option promptvars is set, which is the default), this string is expanded via parameter expansion, command substitution, arithmetic expansion, and quote removal.
The trick is then to have an array that will have the k-th field set (to the empty string) whenever the (k-1)-th command is executed. Then, using appropriate parameter expansions, we'll be able to detect when these fields are set and to display the return code of the previous command if the field isn't set. If you want to call this array __cmdnbary, just do:
PS1='\n${__cmdnbary[\#]-$? }${__cmdnbary[\#]=}\$ '
Look:
$ PS1='\n${__cmdnbary[\#]-$? }${__cmdnbary[\#]=}\$ '
0 $ [ 2 = 3 ]
1 $
$ # it seems that it works
$ echo "it works"
it works
0 $
To qualify for the shortest answer challenge:
PS1='\n${a[\#]-$? }${a[\#]=}$ '
that's 31 characters.
Don't use this, of course, as a is too trivial a name; also, \$ might be better than $.
Seems you don't like that the initial prompt is 0 $; you can very easily modify this by initializing the array __cmdnbary appropriately: you'll put this somewhere in your configuration file:
__cmdnbary=( '' '' ) # Initialize the field 1!
PS1='\n${__cmdnbary[\#]-$? }${__cmdnbary[\#]=}\$ '
Got some time to play around with this over the weekend. Looking at my earlier (not-so-good) answer and the other answers, I think this is probably the smallest answer.
Place these lines at the end of your ~/.bash_profile:
PS1='$_ret$ '
trapDbg() {
local c="$BASH_COMMAND"
[[ "$c" != "pc" ]] && export _cmd="$c"
}
pc() {
local r=$?
trap "" DEBUG
[[ -n "$_cmd" ]] && _ret="$r " || _ret=""
export _ret
export _cmd=
trap 'trapDbg' DEBUG
}
export PROMPT_COMMAND=pc
trap 'trapDbg' DEBUG
Then open a new terminal and note this desired behavior on BASH prompt:
$ uname
Darwin
0 $
$
$
$ date
Sun Dec 14 05:59:03 EST 2014
0 $
$
$ [ 1 = 2 ]
1 $
$
$ ls 123
ls: cannot access 123: No such file or directory
2 $
$
Explanation:
This is based on trap 'handler' DEBUG and PROMPT_COMMAND hooks.
PS1 uses a variable _ret, i.e. PS1='$_ret$ '.
The trap command runs only when a command is executed, but PROMPT_COMMAND runs even when an empty Enter is pressed.
The trap command sets a variable _cmd to the actually executed command, using the Bash internal variable BASH_COMMAND.
The PROMPT_COMMAND hook sets _ret to "$? " if _cmd is non-empty, otherwise it sets _ret to "". Finally it resets _cmd to the empty state.
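A stripped-down illustration of the two hooks on their own (paste into an interactive shell to watch the ordering):
trap 'echo "DEBUG trap, about to run: $BASH_COMMAND"' DEBUG
PROMPT_COMMAND='echo "PROMPT_COMMAND fired, last status: $?"'
Note that the DEBUG trap also fires for the PROMPT_COMMAND itself, which is why the code above filters out its own hook with the [[ "$c" != "pc" ]] test.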
The variable HISTCMD is updated every time a new command is executed. Unfortunately, the value is masked during the execution of PROMPT_COMMAND (I suppose for reasons related to not having history messed up with things which happen in the prompt command). The workaround I came up with is kind of messy, but it seems to work in my limited testing.
# This only works if the prompt has a prefix
# which is displayed before the status code field.
# Fortunately, in this case, there is one.
# Maybe use a no-op prefix in the worst case (!)
PS1_base=$'\n'
# Functions for PROMPT_COMMAND
PS1_update_HISTCMD () {
# If HISTCONTROL contains "ignoredups" or "ignoreboth", this breaks.
# We should not change it programmatically
# (think principle of least astonishment etc)
# but we can always gripe.
case :$HISTCONTROL: in
*:ignoredups:* | *:ignoreboth:* )
echo "PS1_update_HISTCMD(): HISTCONTROL contains 'ignoredups' or 'ignoreboth'" >&2
echo "PS1_update_HISTCMD(): Warning: Please remove this setting." >&2 ;;
esac
# PS1_HISTCMD needs to contain the old value of PS1_HISTCMD2 (a copy of HISTCMD)
PS1_HISTCMD=${PS1_HISTCMD2:-$PS1_HISTCMD}
# PS1_HISTCMD2 needs to be unset for the next prompt to trigger properly
unset PS1_HISTCMD2
}
PROMPT_COMMAND=PS1_update_HISTCMD
# Finally, the actual prompt:
PS1='${PS1_base#foo${PS1_HISTCMD2:=${HISTCMD%$PS1_HISTCMD}}}${_:+${PS1_HISTCMD2:+$? }}$ '
The logic in the prompt is roughly as follows:
${PS1_base#foo...}
This displays the prefix. The stuff in #... is useful only for its side effects. We want to do some variable manipulation without having the values of the variables display, so we hide them in a string substitution. (This will display odd and possibly spectacular things if the value of PS1_base ever happens to begin with foo followed by the current command history index.)
${PS1_HISTCMD2:=...}
This assigns a value to PS1_HISTCMD2 (if it is unset, which we have made sure it is). The substitution would nominally also expand to the new value, but we have hidden it in a ${var#subst} as explained above.
${HISTCMD%$PS1_HISTCMD}
We assign either the value of HISTCMD (when a new entry in the command history is being made, i.e. we are executing a new command) or an empty string (when the command is empty) to PS1_HISTCMD2. This works by trimming off of the value of HISTCMD any match on PS1_HISTCMD (using the ${var%subst} suffix-removal syntax).
${_:+...}
This is from the question. It will expand to ... something if the value of $_ is set and nonempty (which it is when a command is being executed, but not e.g. if we are performing a variable assignment). The "something" should be the status code (and a space, for legibility) if PS1_HISTCMD2 is nonempty.
${PS1_HISTCMD2:+$? }
There.
'$ '
This is just the actual prompt suffix, as in the original question.
So the key parts are the variables PS1_HISTCMD which remembers the previous value of HISTCMD, and the variable PS1_HISTCMD2 which captures the value of HISTCMD so it can be accessed from within PROMPT_COMMAND, but needs to be unset in the PROMPT_COMMAND so that the ${PS1_HISTCMD2:=...} assignment will fire again the next time the prompt is displayed.
I fiddled for a bit with trying to hide the output from ${PS1_HISTCMD2:=...} but then realized that there is in fact something we want to display anyhow, so just piggyback on that. You can't have a completely empty PS1_base because the shell apparently notices, and does not even attempt to perform a substitution when there is no value; but perhaps you can come up with a dummy value (a no-op escape sequence, perhaps?) if you have nothing else you want to display. Or maybe this could be refactored to run with a suffix instead; but that is probably going to be trickier still.
In response to Anubhava's "smallest answer" challenge, here is the code without comments or error checking.
PS1_base=$'\n'
PS1_update_HISTCMD () { PS1_HISTCMD=${PS1_HISTCMD2:-$PS1_HISTCMD}; unset PS1_HISTCMD2; }
PROMPT_COMMAND=PS1_update_HISTCMD
PS1='${PS1_base#foo${PS1_HISTCMD2:=${HISTCMD%$PS1_HISTCMD}}}${_:+${PS1_HISTCMD2:+$? }}$ '
This is probably not the best way to do this, but it seems to be working
function pc {
foo=$_
fc -l > /tmp/new
if cmp -s /tmp/{new,old} || test -z "$foo"
then
PS1='\n$ '
else
PS1='\n$? $ '
fi
cp /tmp/{new,old}
}
PROMPT_COMMAND=pc
Result
$ [ 2 = 2 ]
0 $ [ 2 = 3 ]
1 $
$
I needed to use the great script bash-preexec.sh.
Although I don't like external dependencies, this was the only thing that helped me avoid getting 1 in $? after just pressing Enter without running any command.
This goes to your ~/.bashrc:
__prompt_command() {
local exit="$?"
PS1='\u#\h: \w \$ '
[ -n "$LASTCMD" -a "$exit" != "0" ] && PS1='['${red}$exit$clear"] $PS1"
}
PROMPT_COMMAND=__prompt_command
[ -f ~/.bash-preexec.sh ] && . ~/.bash-preexec.sh
preexec() { LASTCMD="$1"; }
UPDATE: later I was able to find a solution without a dependency on .bash-preexec.sh.

Why do I get different bash script results when invoked with 'set -x', and how do I fix it?

I've found that the results of my bash script change depending on whether I execute it with debugging or not (i.e. invoking set -x). I don't mean that I get more output, but that the result of the program itself differs.
I'm assuming this isn't the desired behavior, and I'm hoping that you can teach me how to correct this.
The bash script below is a contrived example; I reduced the logic from the script I'm investigating so that the problem is easily reproducible and obvious.
#!/bin/bash
# Base function executes command (print working directory) stores the value in
# the destination and returns the status.
function get_cur_dir {
local dest=$1
local result
result=$((pwd) 2>&1)
status=$?
eval $dest="\"$result\""
return $status
}
# 2nd level function uses the base function to execute the command and store
# the result in the desired location. However if the base function fails it
# terminates the script. Yes, I know 'pwd' won't fail -- this is a contrived
# example to illustrate the types of problems I am seeing.
function get_cur_dir_nofail {
local dest=$1
local gcdnf_result
local status
get_cur_dir gcdnf_result
status=$?
if [ $status -ne 0 ]; then
echo "ERROR: Command failure"
exit 1
fi
eval dest="\"$gcdnf_result\""
}
# Cause blarg to be loaded with the current directory, use the results to
# create a flag_file name, and do logic with the flag_file name.
function main {
get_cur_dir blarg
echo "Current diregtory is:$blarg"
local flag_file=/tmp/$blarg.flag
echo -e ">>>>>>>> $flag_file"
if [ "/tmp//root.flag" = "$flag_file" ]; then
echo "Match"
else
echo "No Match"
fi
}
main
When I execute without the set -x it works as I expect as illustrated below:
Current diregtory is:/root
>>>>>>>> /tmp//root.flag
Match
However, when I add the debugging output with -x it doesn't work, as illustrated below:
root#psbu-jrr-lnx:# bash -x /tmp/example.sh
+ main
+ get_cur_dir blarg
+ local dest=blarg
+ local result
+ result='++ pwd
/root'
+ status=0
+ eval 'blarg="++ pwd
/root"'
++ blarg='++ pwd
/root'
+ return 0
+ echo 'Current diregtory is:++ pwd
/root'
Current diregtory is:++ pwd
/root
+ local 'flag_file=/tmp/++ pwd
/root.flag'
+ echo -e '>>>>>>>> /tmp/++ pwd
/root.flag'
>>>>>>>> /tmp/++ pwd
/root.flag
+ '[' /tmp//root.flag = '/tmp/++ pwd
/root.flag' ']'
+ echo 'No Match'
No Match
root#psbu-jrr-lnx:#
I think what happens is that you capture the debug logging output produced by the shell when you run it with set -x. This line, for example, does it:
result=$((pwd) 2>&1)
In the above line you shouldn't really need to redirect standard error to standard output, so remove 2>&1.
Changing...
result=$((pwd) 2>&1)
...into...
result=$(pwd 2>&1)
...will allow you to capture the output of pwd without capturing the debug info generated by set -x.
The reason the $PWD variable exists is to free your script from having to run a separate process or interpret its output (which in this case has been modified by -x). Use $PWD instead.
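A minimal rework of the base function along those lines (a sketch that keeps the original calling convention; printf -v also avoids the eval):
function get_cur_dir {
    local dest=$1
    # $PWD is maintained by the shell itself, so nothing is captured from a
    # subshell and the set -x trace cannot leak into the value.
    printf -v "$dest" '%s' "$PWD"
}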

Error defining variable in bash

I am writing a simple housekeeping script. The script contains the following lines of code. This is a sample I have extracted from the actual code, since it is a big file.
#!/bin/bash
ARCHIVE_PATH=/product/file
FunctionA(){
ARCHIVE_USER=user1 # archive storage user name (default)
ARCHIVE_GROUP=group1 # archive storage user group (default)
functionB
}
functionB() {
_name_Project="PROJECT1"
_path_Componet1=/product/company/Componet1/Logs/
_path_Component2=/product/company/Componet2/Logs/
##Component1##
archive "$(_name_Project)" "$(_path_Componet1)" "filename1" "file.log"
}
archive(){
_name= $1
_path=$2
_filename=$3
_ignore_filename=$4
_today=`date + '%Y-%m-%d'`
_archive=${ARCHIVE_PATH}/${_name}_$(hostname)_$(_today).tar
if [ -d $_path];then
echo "it is a directory"
fi
}
FunctionA
When I run the above script, I get the following error:
#localhost.localdomain[] $ sh testScript.sh
testScript.sh: line 69: _name_Component1: command not found
testScript.sh: line 69: _path_Component2: command not found
date: extra operand `%Y-%m-%d'
Try `date --help' for more information.
testScript.sh: line 86: _today: command not found
it is a directory
Could someone explain to me what I am doing wrong here?
I see the line: _today=`date + '%Y-%m-%d'`
One error I spotted was resolved by removing the space between the + and the ' like so:
_today=`date +'%Y-%m-%d'`
I don't see where the _name_Component1 and _name_Component2 variables are declared so can't help there :)
Your variable expansions are incorrect -- you're using $() which is for executing a subshell substitution. You want ${}, i.e.:
archive "${_name_Project}" "${_path_Componet1}" "filename1" "file.log"
As for the date error, no space after the +.
A few things... you are using $(variable) when it should be ${variable}.
On the date command, make sure there is no space between the + and the format.
And you have _name= $1; you don't want that space there.
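Putting those fixes together, the archive function and its call might look like this (a sketch; only the problem lines are changed):
archive(){
    _name=$1                                   # no space after '='
    _path=$2
    _filename=$3
    _ignore_filename=$4
    _today=$(date +'%Y-%m-%d')                 # no space after '+'
    _archive=${ARCHIVE_PATH}/${_name}_$(hostname)_${_today}.tar   # ${_today}, not $(_today)
    if [ -d "$_path" ]; then                   # space needed before ']'
        echo "it is a directory"
    fi
}
and the call in functionB becomes:
archive "${_name_Project}" "${_path_Componet1}" "filename1" "file.log"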
