Execute awk output inside an executed script - bash

There are some similar topics, but this one is slightly different.
I have a database with script names and a parameter a. When I execute:
sqlite3 log/log.db "select name, a from result" | awk -F '|' '{printf("a[%s]=%s;\n",$1,$2);}'
I see:
a[inc.bash]=4.23198234894777e-06;
a[inc.c]=3.53343440279423e-10;
In my bash script I would like to use an associative array.
When I execute this code (hard-coding the value of a[inc.bash]):
declare -A a
a[inc.bash]=4.23198234894777e-06;
echo ${a[inc.bash]}
It works correctly and prints
4.23198234894777e-06
But I do not know how to use the output of the first command above to assign values to the keys of the associative array a declared in my script.
I want to execute the code printed by awk inside my script, but when I use something like $() or ``, I get an error like this:
code:
declare -A a
$(sqlite3 log/log.db "select name, a from result" | awk -F '|' '{printf("a[%s]=%s;\n",$1,$2);}')
echo ${a[inc.bash]}
output:
a[inc.bash]=4.23198234894777e-06;: command not found

To tell Bash to interpret your output as commands, you can use process substitution and the source command:
declare -A a
source <(sqlite3 log/log.db "select name, a from result" |
awk -F '|' '{printf("a[%s]=%s;\n",$1,$2);}')
echo ${a[inc.bash]}
The <() construct (process substitution) can be treated like a file, and source (or the equivalent .) runs the commands in its argument without creating a subshell, making the resulting a array accessible in the current shell.
A simplified example to demonstrate, as I don't have your database:
$ declare -A a
$ source <(echo 'a[inc.bash]=value')
$ echo "${a[inc.bash]}"
value
This all being said, this is about as dangerous as using eval: whatever the output of your sqlite/awk script, it will be executed!
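If you would rather not execute generated code at all, the same array can be filled by reading the raw sqlite output with a while/read loop; a minimal sketch, assuming the same pipe-separated name|value rows:
declare -A a
while IFS='|' read -r name value; do
    a[$name]=$value
done < <(sqlite3 log/log.db "select name, a from result")
echo "${a[inc.bash]}"
This treats each value as plain data, so nothing in the database output can ever be executed as shell code.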

Related

Appending command line arguments to a Bash array

I am trying to write a Bash script that appends a string to a Bash array, where the string contains the path to a Python script together with the arguments passed into the Bash script, enclosed in double quotes.
If I call the script using ./script.sh -o "a b", I would like a CMD_COUNT of 1, but I am getting 2 instead.
script.sh:
#!/bin/bash
declare -a COMMANDS=()
COMMANDS+=("/path/to/myscript.py \"${@}\"")
CMD_COUNT=${#COMMANDS[*]}
echo $CMD_COUNT
How can I ensure that the appended string is /path/to/myscript.py "-o" "a b"?
EDIT: The full script is actually like this:
script.sh:
#!/bin/bash
declare -a COMMANDS=()
COMMANDS+=("/path/to/myscript2.py")
COMMANDS+=("/path/to/myscript.py \"${@}\"")
CMD_COUNT=${#COMMANDS[*]}
echo $CMD_COUNT
for i in ${!COMMANDS[*]}
do
echo "${0} - command: ${COMMANDS[${i}]}"
${COMMANDS[${i}]}
done
It's a bad idea, but if it's what you really want, printf %q can be used to generate a string that, when parsed by the shell, will result in a given list of arguments. (The exact escaping might not be identical to what you'd write by hand, but the effect of evaluating it -- using eval -- will be).
#!/bin/bash
declare -a COMMANDS=( )
printf -v command '%q ' "/path/to/myscript" "$@"
COMMANDS+=( "$command" )
CMD_COUNT=${#COMMANDS[@]}
echo "$CMD_COUNT"
...but, as I said, this is all a bad idea.
Best-practice ways to encapsulate code as data in bash involve using functions, or arrays with one element per argument.
eval results in code that's prone to security bugs.
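For instance, a minimal sketch of the one-element-per-argument approach, reusing the two script paths from the question (the cmd1/cmd2 names are just for illustration):
#!/bin/bash
# one array per command, one element per argument -- no eval needed
cmd1=( /path/to/myscript2.py )
cmd2=( /path/to/myscript.py "$@" )
echo "${0} - command: ${cmd2[*]}"
"${cmd1[@]}"
"${cmd2[@]}"
Quoting the expansion as "${cmd2[@]}" preserves each argument exactly as it was passed in, including the embedded space in "a b".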

Create file from shell script (e.g., this-is-the-title)

I am trying to create a file using the following script (see below). While the script runs without errors (at least according to shellcheck), I cannot get the resulting file to have the correct name.
#!/bin/bash
# Set some variables
export site_path=~/Documents/Blog
drafts_path=~/Documents/Blog/_drafts
title="$title"
# Create the filename
title=$("$title" | "awk {print tolower($0)}")
filename="$title.markdown"
file_path="$drafts_path/$filename"
echo "File path: $file_path"
# Create the file, Add metadata fields
cat >"$file_path" <<EOL
---
title: \"$title\"
layout:
tags:
---
EOL
# Open the file in BBEdit
bbedit "$file_path"
exit 0
Very new to bash, so I'm not quite sure what I'm doing wrong...
The most glaring error is this:
title=$("$title" | "awk {print tolower($0)}")
It's wrong for several reasons:
This pipeline runs "$title" as a command -- meaning that it looks for a command named with the title of your blog post to run -- and pipes the output of that command (a command that presumably won't exist) to awk.
Using double-quotes around the entire awk command means you're looking for a command named something like /usr/bin/awk {print tolower(-bash)} (if $0 evaluates to -bash, which it will in an interactive login shell; behavior will differ elsewhere).
Using double-quotes rather than single-quotes to protect your awk script means that the $0 gets evaluated by the shell rather than by awk.
A better alternative might look like:
title=$(awk '{print tolower($0)}' <<<"$title")
...or, to use simpler tools:
title=$(tr '[:upper:]' '[:lower:]' <<<"$title")
...or, to use bash 4.x built-in functionality:
title=${title,,}
Of course, all that assumes that title is set to start with. If you aren't passing it through your environment, you might want something like title=$1 rather than title="$title" earlier in your script.
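A quick interactive check of the bash 4.x form, with a made-up title value:
$ title="My Blog Post"
$ echo "${title,,}"
my blog post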

How to pass a shell script argument as a variable to be used when executing grep command

I have a file called fruit.txt which contains a list of fruit names (apple, banana, orange, kiwi, etc.). I want to create a script that allows me to pass an argument when calling the script, i.e. script.sh orange, which will then search the file fruit.txt for the variable (orange) using grep. I have the following script...
script name and argument as follows:
script.sh orange
script snippet as follows:
#!/bin/bash
nameFind=$1
echo `cat fruit.txt` | grep | $nameFind
But I get the grep usage info, and it seems that the script is awaiting some additional input. Advice greatly appreciated.
The piping syntax is incorrect there. You are piping the output of grep as input to the variable named nameFind. So when the grep command tries to execute it is only getting the contents of fruit.txt. Do this instead:
#!/bin/bash
nameFind=$1
grep "$nameFind" fruit.txt
Something like this should work:
#!/bin/bash
name="$1"
grep "$name" fruit.txt
There's no need to use cat and grep together; you can simply pass the name of the file to grep as an argument, after the pattern to be matched. If you want to match fixed strings (i.e. no regular expressions), you can also use the -F flag:
grep -F "$name" fruit.txt
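For example, assuming fruit.txt holds one name per line:
$ ./script.sh orange
orange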

Passing a variable into awk within a shell script

I have a shell script that I'm writing to search for a process by name and return output if that process is over a given CPU value.
I'm working on finding the named process first. The script currently looks like this:
#!/bin/bash
findProcessName=$1
findCpuMax=$2
#echo "parameter 1: $findProcessName, parameter2: $findCpuMax"
tempFile=`mktemp /tmp/processsearch.XXXXXX`
#echo "tempDir: $tempFile"
processSnapshot=`ps aux > $tempFile`
findProcess=`awk -v pname="$findProcessName" '/pname/' $tempFile`
echo "process line: "$findProcess
`rm $tempFile`
The error occurs when I try to pass the variable into the awk command. I checked my version of awk and it definitely does support the -v flag.
If I replace the '/pname/' portion of the findProcess assignment with a literal pattern, the script works.
I checked my syntax and it looks right. Could anyone point out where I'm going wrong?
The processSnapshot variable will always be empty: the ps output is going to the file.
When you pass the pattern as a variable, use the pattern match operator:
findProcess=$( awk -v pname="$findProcessName" '$0 ~ pname' $tempFile )
Only use backticks when you need the output of a command. This
`rm $tempFile`
executes the rm command, returns the output back to the shell and, if the output is non-empty, the shell attempts to execute that output as a command.
$ `echo foo`
bash: foo: command not found
$ `echo whoami`
jackman
Remove the backticks.
Of course, you don't need the temp file at all:
pgrep -fl $findProcessName
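Putting those fixes together, a minimal cleaned-up sketch of the script without the temp file:
#!/bin/bash
findProcessName=$1
findCpuMax=$2

# pipe ps straight into awk; pname holds the pattern to match
# note: the awk process itself may appear in the ps output and match
findProcess=$(ps aux | awk -v pname="$findProcessName" '$0 ~ pname')
echo "process line: $findProcess"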

How can one store a variable in a file using bash?

I can redirect the output and then cat the file and grep/awk out the variable, but I would like to use this file for multiple variables.
So if it were one variable, say STATUS, then I could do something like
echo "STATUS $STATUS" >> variable.file
#later perhaps in a remote shell where variable.file was copied
NEW_VAR=`cat variable.file | awk '{print $2}'`
I guess some inline editing with sed would help. The smaller the code the better.
One common way of storing variables in a file is to just store NAME=value lines in the file, and then just source that in to the shell you want to pick up the variables.
echo 'STATUS="'"$STATUS"'"' >> variable.file
# later
. variable.file
In Bash, you can also use source instead of ., though this may not be portable to other shells. Note carefully the exact sequence of quotes necessary to get the correct double quotes printed out in the file.
If you want to put multiple variables at once into the file, you could do the following. Apologies for the quoting contortions that this takes to do properly and portably; if you restrict yourself to Bash, you can use ${!var} indirect expansion to make the quoting a little simpler:
for var in STATUS FOO BAR
do
echo "$var="'"'"$(eval echo '$'"$var")"'"'
done >> variable.file
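For example, a Bash-only sketch of that simpler form, using ${!var} indirect expansion in place of the eval (this assumes the values contain no double quotes):
for var in STATUS FOO BAR
do
    echo "$var=\"${!var}\""
done >> variable.file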
The declare builtin is useful here
for var in STATUS FOO BAR; do
declare -p $var | cut -d ' ' -f 3- >> filename
done
As Brian says, later you can source filename
declare is great because it handles quoting for you:
$ FOO='"I'"'"'m here," she said.'
$ declare -p FOO
declare -- FOO="\"I'm here,\" she said."
$ declare -p FOO | cut -d " " -f 3-
FOO="\"I'm here,\" she said."
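And the round trip back into a shell works as expected; a quick check, assuming a file named filename as above:
$ declare -p FOO | cut -d " " -f 3- > filename
$ unset FOO
$ source filename
$ echo "$FOO"
"I'm here," she said.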
