bash script not running properly - bash

When I run this by itself on the command line it seems to work fine, but when I have another script execute it, it doesn't work. Any ideas? I'm guessing it has to do with quotes, but I'm not sure.
#!/bin/sh
#Required csvquote from https://github.com/dbro/csvquote
#TODO: Clean CSV File using CSVFix
#Version 3
echo "File Name: $1"
function quit {
    echo "Quitting Script"
    exit 1
}
function fileExists {
    if [ ! -f "$1" ]
    then
        echo "File $1 does not exists"
        quit
    fi
}
function getInfo {
    #Returns website url like: "http://www.website.com/info"
    #Reads last line of a csv file, and gets the 2nd item.
    RETURN=$(tail -n 1 $1 | csvquote | cut -d ',' -f 2 | csvquote -u)
    echo $RETURN
}
function work {
    CURLURL="http://127.0.0.1:9200/cj/_query"
    URL=$(getInfo)
    echo "URL: $URL"
    CURLDATA='{ "query" : { "match" : { "PROGRAMURL" : '$URL' } } }'
    #URL shows up as blank...???
    echo "Curl Data: $CURLDATA"
    RESPONSE=$(curl -XDELETE "$CURLURL" -d "$CURLDATA" -vn)
    echo $RESPONSE
    echo "Sleeping Allowing Time To Delete"
    sleep 5s
}
fileExists $1
work $1

I can't see why a simpler version won't work: functions are useful, but if what you are posting is the entirety of your script, there are (in my opinion) too many of them, which overcomplicates things.
Your script has been working by a broken, lucky pattern: positional parameters like $1 are arguments to shell functions as well as to the main script. Think of them as local variables of the function. So when you call $(getInfo) you are invoking that function with no argument, so it actually runs a bare tail -n 1, which falls back to reading stdin rather than your file. You can see this for yourself by putting echo getInfo_arg_1="$1" >&2 inside the function.
Note also that you are not quoting $1 anywhere, so this script is not safe for filenames containing whitespace, although that is only likely to become a problem if you have to deal with files sent to you from a Windows machine.
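For illustration, here is a minimal sketch (a hypothetical demo.sh, not part of the original script) showing that $1 inside a function is the function's own first argument rather than the script's:
#!/bin/bash
show_arg() {
    echo "function \$1 = '$1'"    # the function's own first argument
}
echo "script   \$1 = '$1'"        # the script's first argument
show_arg                          # prints an empty value: nothing was passed to the function
show_arg "$1"                     # passes the script's $1 through explicitly
Run as ./demo.sh hello, the function sees an empty $1 until the value is handed over explicitly.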
In the absence of other information, the following 'should' work:
#!/bin/bash
test -z "$1" && { echo "Please specify a file." ; exit 1; }
test -f "$1" || { echo "Cant see file '$1'." ; exit 1; }
FILE="$1"
function getInfo() {
    #Returns website url like: "http://www.website.com/info"
    #Reads last line of a csv file, and gets the 2nd item.
    tail -n 1 "$1" | csvquote | cut -d ',' -f 2 | csvquote -u
}
CURLURL="http://127.0.0.1:9200/cj/_query"
URL=$(getInfo "$FILE")
echo "URL: $URL"
CURLDATA='{ "query" : { "match" : { "PROGRAMURL" : '$URL' } } }'
curl -XDELETE "$CURLURL" -d "$CURLDATA" -vn
echo "Sleeping Allowing Time To Delete"
sleep 5s
If it still fails you really need to post your error messages.
One other thing: especially if you are calling this from another script, chmod +x the script so you can run it without having to invoke it through bash explicitly. If you want to turn on debugging, put set -x somewhere near the start.
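For example, a hedged sketch of a caller (the script name delete_url.sh and the CSV filename are placeholders, not taken from the question):
#!/bin/bash
set -x                          # trace each command while debugging
./delete_url.sh "input.csv"     # assumes chmod +x delete_url.sh was done once; quote the argument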

Related

Get last executed command in bash

I need to know what the last executed command was while setting my bash prompt in the function registered as PROMPT_COMMAND. I have code as follows:
function bash_prompt_command () {
    ...
    local last_cmd="$(history | tail -n 2 | head -n 1 | tr -s ' ' | cut -d ' ' -f3-)"
    [[ ${last_cmd} =~ .*git\s+checkout.* ]] && ( ... )
    ...
}
Is there a faster (bash built-in) way to know what command invoked PROMPT_COMMAND?
I tried using BASH_COMMAND, but that does not return the command which actually invoked PROMPT_COMMAND either.
General case: Collecting all commands
You can use a DEBUG trap to store each command before it's run.
store_command() {
    declare -g last_command current_command
    last_command=$current_command
    current_command=$BASH_COMMAND
    return 0
}
trap store_command DEBUG
...and thereafter you can check "$last_command"
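For instance, a minimal sketch of using it from a prompt function (the git checkout pattern check is illustrative, not part of the question):
bash_prompt_command() {
    # by the time this runs, $last_command holds the command entered before the prompt refresh
    if [[ $last_command == git*checkout* ]]; then
        : "react to a git checkout here"
    fi
}
PROMPT_COMMAND=bash_prompt_command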
Special case: Only trying to shadow one (sub)command
If you only want to change how one command operates, you can just shadow that one command. For git checkout:
git() {
    # if $1 is not checkout, just run real git and pretend we weren't here
    [[ $1 = checkout ]] || { command git "$@"; return; }
    # if $1 _is_ checkout, run real git and do our own thing
    local rc=0
    command git "$@" || rc=$?
    ran_checkout=1 # ...put the extra code you want to run here...
    return "$rc"
}
...potentially used from something like:
bash_prompt_command() {
    if (( ran_checkout )); then
        ran_checkout=0
        : "do special thing here"
    else
        : "do other thing here"
    fi
}

Unable to execute awk command in a function, but working directly in the shell

I want to create a utility function for bash to remove duplicate lines. I am using this function:
function remove_empty_lines() {
    if ! command -v awk &> /dev/null
    then
        echo '[x] ERR: "awk" command not found'
        return
    fi
    if [[ -z "$1" ]]
    then
        echo "usage: remove_empty_lines <file-name> [--replace]"
        echo
        echo "Arguments:"
        echo -e "\t--replace\t (Optional) If not passed, the result will be redirected to stdout"
        return
    fi
    if [[ ! -f "$1" ]]
    then
        echo "[x] ERR: \"$1\" file not found"
        return
    fi
    echo $0
    local CMD="awk '!seen[$0]++' $1"
    if [[ "$2" = '--reload' ]]
    then
        CMD+=" > $1"
    fi
    echo $CMD
}
If I run the main awk command directly, it works. But when I execute the same $CMD in the function, I get this error:
$ remove_empty_lines app.js
/bin/bash
awk '!seen[/bin/bash]++' app.js
The original code is broken in several ways:
When used with --reload, it would truncate the output file's contents before awk could ever read those contents (see How can I use a file in a command and redirect output to the same file without truncating it?)
It didn't ever actually run the command, and for the reasons described in BashFAQ #50, storing a shell command in a string is inherently buggy (one can work around some of those issues with eval; BashFAQ #48 describes why doing so introduces security bugs); a short illustration follows below.
It wrote error messages (and other "diagnostic content") to stdout instead of stderr; this means that if your function's output was redirected to a file, you could never see its errors -- they'd end up jumbled into the output.
Error cases were handled with a return even in cases where $? would be zero; this means that return itself would return a zero/successful/truthy status, not revealing to the caller that any error had taken place.
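To illustrate the second point, here is a small sketch (the filename is hypothetical) of what goes wrong when a command lives in a string:
cmd="awk '!seen[\$0]++' my file.txt"
$cmd    # word-splits into: awk  '!seen[$0]++'  my  file.txt
        # the single quotes reach awk literally (a syntax error), and "my" and
        # "file.txt" arrive as two separate arguments instead of one filename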
Presumably the reason you were storing your output in CMD was to be able to perform a redirection conditionally, but that can be done other ways: Below, we always create a file descriptor out_fd, but point it to either stdout (when called without --reload), or to a temporary file (if called with --reload); if-and-only-if awk succeeds, we then move the temporary file over the output file, thus replacing it as an atomic operation.
remove_empty_lines() {
    local out_fd rc=0 tempfile=
    command -v awk &>/dev/null || { echo '[x] ERR: "awk" command not found' >&2; return 1; }
    if [[ -z "$1" ]]; then
        printf '%b\n' >&2 \
            'usage: remove_empty_lines <file-name> [--replace]' \
            '' \
            'Arguments:' \
            '\t--replace\t(Optional) If not passed, the result will be redirected to stdout'
        return 1
    fi
    [[ -f "$1" ]] || { echo "[x] ERR: \"$1\" file not found" >&2; return 1; }
    if [ "$2" = --reload ]; then
        tempfile=$(mktemp -t "$1.XXXXXX") || return
        exec {out_fd}>"$tempfile" || { rc=$?; rm -f "$tempfile"; return "$rc"; }
    else
        exec {out_fd}>&1
    fi
    awk '!seen[$0]++' <"$1" >&$out_fd || { rc=$?; rm -f "$tempfile"; return "$rc"; }
    exec {out_fd}>&- # close our file descriptor
    if [[ $tempfile ]]; then
        mv -- "$tempfile" "$1" || return
    fi
}
First off, the output from your function call is not an error but rather the output of two echo commands (echo $0 and echo $CMD).
And as Charles Duffy has pointed out ... at no point is the function actually running the $CMD.
As for the inclusion of /bin/bash in your function's echo output ... the main problem is the reference to $0; by definition $0 is the name of the running process, which in the case of a function is the shell under which the function is being called. Consider the following when run from a bash command prompt:
$ echo $0
-bash
As you can see from your output this generates /bin/bash in your environment. See this and this for more details.
On a related note, the reference to $0 within double quotes causes the $0 to be evaluated, so this:
local CMD="awk '!seen[$0]++' $1"
becomes
local CMD="awk '!seen[/bin/bash]++' app.js"
I'm thinking what you want is something like:
echo $1 # the name of the file to be processed
local CMD="awk '!seen[\$0]++' $1" # escape the '$' in '$0'
becomes
local CMD="awk '!seen[$0]++' app.js"
That should fix the issues shown in your function's output; as for the other issues ... you're getting a good bit of feedback in the various comments ...
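A hedged sketch of those two changes in place (still only printing the command, as the original function does):
remove_empty_lines() {
    echo "$1"                              # the name of the file being processed, not $0
    local CMD="awk '!seen[\$0]++' $1"      # \$0 keeps awk's $0 out of shell expansion
    echo "$CMD"
}
remove_empty_lines app.js
# prints:
# app.js
# awk '!seen[$0]++' app.js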

Hide echo keep variable bash

I would like to ask how I can hide the echo from my function but still use its output as a variable for further processing.
My code is:
function str_str {
    local str
    str="${1#*${2}}"
    str="${str%%$3*}"
    echo -n "$str"
}
mystr=$(cat /etc/logrotate.conf)
str_str "$mystr" "access.log" "}"
OKACCESS=$(str_str "$mystr" "access.log" "}" | grep -e "daily" -e "size" -e "rotate" -e "create" -e "weekly" -c)
echo $OKACCESS
When I remove the:
echo -n "$str"
I cannot use it as a variable for OKACCESS, which then returns 0 instead of the expected 4 (it works when the echo is not hidden or redirected to /dev/null).
How do I hide the output of the function without breaking the definition of the OKACCESS variable?
Thank you for help.
E:
What I am trying to do:
When I execute my current script, its output is:
[root@env test]# ./tester.sh
{
missingok
daily
rotate 10
create 0664 jboss jboss
postrotate
/usr/bin/kill -s SIGHUP `cat /var/run/syslogd.pid`
endscript
dateext
4
When I remove the "echo -n "$str"" part, I get this:
0
and I need this:
4
This will hide the output of the first function call:
str_str "$mystr" "access.log" "}" > /dev/null

How do I conditionally redirect output to a file, depending on whether some script argument exists?

I have a script which I would like to use to process some data and input it into a chosen file.
(do some stuff)>$(hostname)_$1
This works fine if my argument is a file name, but what should I pass (or how should I change the script) if I want to output to the terminal, i.e. stdout?
This answer assumes that the question is:
How do I conditionally redirect output to a file, depending on whether some script argument exists?
In other words, how to reduce duplication in a bash function like this:
do_it() {
    if [[ -z $1 ]]; then
        some very
        complicated
        commands
    else
        { some very
          complicated
          commands
        } > "$(logdir)/$1"
    fi
}
If that were the question, a simple solution is to conditionally redirect stdout inside a subshell:
do_it() {
    (
        if [[ -n $1 ]]; then exec > "$(logdir)/$1"; fi
        some very
        complicated
        commands
    )
}
The subshell (created with the parentheses) is necessary in order to limit the effect of the exec command to the scope of the function; otherwise, the redirect would continue when the function returned.
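Hypothetical usage of that sketch (logdir is the helper command already assumed by the question):
do_it              # no argument: output stays on stdout
do_it run1.log     # argument given: output is redirected to "$(logdir)/run1.log"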
This works well for me. If DUMP_FILE is empty, things go to stdout; otherwise they go to the file. It does the job without explicit redirection, just pipes and existing applications.
function stdout_or_file
{
    local DUMP_FILE=${1:-}
    if [ -z "${DUMP_FILE}" ]; then
        cat
    else
        sed -n "w ${DUMP_FILE}"
    fi
}
function foo()
{
    local MSG=$1
    echo "info: ${MSG}"
}
foo "bar" | stdout_or_file ${DUMP_FILE}
Of course, you can also squeeze this into one line:
foo "bar" | if [ -z "${DUMP_FILE}" ]; then cat; else sed -n "w ${DUMP_FILE}"; fi
Besides sed -n "w ${DUMP_FILE}", another command that does the same is dd status=none of=${DUMP_FILE}.
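For completeness, a sketch of the same helper using the dd variant (assuming GNU dd, which provides status=none):
function stdout_or_file
{
    local DUMP_FILE=${1:-}
    if [ -z "${DUMP_FILE}" ]; then
        cat
    else
        dd status=none of="${DUMP_FILE}"
    fi
}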

How to exit from a method in shell script

I am new to shell scripting and stuck with a problem. In my shell function, if there is any validation issue, the rest of the program should not execute and the user should see a message. The validation itself works, but when I use exit it only comes out of the validation loop, not out of the whole method.
config_wuigm_parameters () {
    echo "Starting to config parameters for WUIGM....." | tee -a $log
    prepare_wuigm_conf_file
    echo "Configing WUIGM parameters....." | tee -a $log
    local parafile=`dirname $0`/wuigm.conf
    local pname=""
    local pvalue=""
    create_preference_template
    cat ${parafile} | while read -r line; do
        pname=`echo $line | egrep -e "^([^#]*)=(.*)" | cut -d '=' -f 1`
        if [ -n "$pname" ] ; then
            lsearch=`echo $line | grep "[<|>|\"]" `
            if [ -n "$lsearch" ] ; then
                echo validtion=$lsearch
                echo "< or > character present, replace < with &lt; and > with &gt;"
                exit 1;
            else
                pvalue=`echo $line | egrep -e "^([^#]*)=(.*)" | cut -d '=' -f 2- `
                echo "<entry key=\"$pname\" value=\"$pvalue\"/>" >> $prefs
                echo "Configured : ${pname} = ${pvalue} " | tee -a $log
            fi
        fi
    done
    echo $validtion
    echo "</map>" >> $prefs
    # Copy the file to the original location
    cp -f $prefs /root/.java/.userPrefs/com/ericsson/pgm/xwx
    # removing the local temp file
    rm -f $prefs
    reboot_server
}
Any help would be great
It is because the construction
cat file | while read ...
starts a new (sub)shell.
In the following you can see the difference:
echoline() {
    cat "$1" | while read -r line
    do
        echo ==$line==
        exit 1
    done
    echo "Still here after the exit"
}
echoline "$@"
and compare with this
echoline() {
    while read -r line
    do
        echo ==$line==
        exit 1
    done < "$1"
    echo "This is not printed after the exit"
}
echoline "$@"
Using return doesn't help either (because of the subshell). The
echoline() {
    cat "$1" | while read -r line
    do
        echo ==$line==
        return 1
    done
    echo "Still here"
}
echoline "$@"
will still print "Still here".
So, if you want to exit the script, use
while read ...
do
    ...
done < input   # this does not start a new subshell
If you want to exit just the method (return from it), you must check the exit status of the previous command, like:
echoline() {
    cat "$1" | while read -r line
    do
        echo ==$line==
        exit 1
    done || return 1
    echo "In case of exit (or return), this is not printed"
}
echoline "$@"
echo "After the function call"
Instead of the ||, you can use
[ $? != 0 ] && return 1
just after the while.
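That variant would look like this (a sketch following the same pattern as the examples above):
echoline() {
    cat "$1" | while read -r line
    do
        echo ==$line==
        exit 1
    done
    [ $? != 0 ] && return 1   # the subshell's exit status is the pipeline's status
    echo "Not printed when the loop's subshell exited non-zero"
}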
You use the return instruction to exit a function with a value.
return [n]
Causes a function to exit with the return value specified by n. If n is omitted, the return status is that of the last command executed in the function body. If used outside a function, but during execution of a script by the . (source) command, it causes the shell to stop executing that script and return either n or the exit status of the last command executed within the script as the exit status of the script. If used out‐side a function and not during execution of a script by ., the return status is false. Any command associated with the RETURN trap is executed before execution resumes after the function or script.
If you want to exit a loop, use the break instruction instead:
break [n]
Exit from within a for, while, until, or select loop. If n is specified, break n levels. n must be ≥ 1. If n is greater than the number of enclosing loops, all enclosing loops are exited. The return value is 0 unless n is not greater than or equal to 1.
The exit instruction instead exits the current shell, i.e. the current program as a whole. If you use sub-shells (code written between parentheses), then only that sub-shell exits.
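A tiny demonstration of that last point:
( echo "in subshell"; exit 3; echo "never printed" )
echo "parent still running, subshell exit status: $?"   # prints 3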
