Setting specific variables from a different script - bash

I need to take specific variables from one script and use them in a different script.
Example:
Original script:
VARA=4 # Some description of VARA
VARB=6 # Some description of VARB
SOMEOTHERVAR="Foo"
/call/to/some/program
I want to write a second script that needs VARA and VARB, but not SOMEOTHERVAR or the call to the program.
I can already do:
eval $(grep 'VARA=' origscript.sh)
eval $(grep 'VARB=' origscript.sh)
This seems to work, but when I want to do both, like this, it only sets the first:
eval $(grep 'VAR[AB]=' origscript.sh)
because without the quotes the two lines that grep returns are joined into one, and the comment after the first assignment then comments out the second one.

Put quotes around it, so that the newlines in the output of grep will not be turned into spaces.
eval "$(grep 'VAR[AB]=' origscript.sh)"


Linux script text substitutions

I want to make a few configuration files (for homeassistant) that are very similar to each other. I am aiming to use a template file as the base and put in a few substitution strings at the top of the file and use a bash script to read the substitutions and run sed with the applicable strings.
i.e.
# substitutions
# room = living_room
# switch = hallway_motion
# delay = 3
automations:
foo......
.........
entity_id: $switch
When I run the script, it should look for any line beginning with a # that contains a word (the key), then an =, then another word or string (the value), and replace every occurrence of that key, prefixed with a $, in the rest of the file.
Like what is done by esphome. https://esphome.io/guides/configuration-types.html#substitutions
I am getting stuck at finding the "keys" in the file. How can I script this so it can find all the "keys" recursively?
Or is there something that does this, or something similar, out there already?
You can do this with sed in two stages. The first stage will generate a second stage sed script to fill in your template. I'd make a small adjustment to your syntax and recommend that you require curly braces around your variable name. In other words, write your variable expansions like this:
# foo = bar
myentry: ${foo}
This makes it easier to avoid pitfalls when you have one variable name that's a prefix of another (e.g., foo and foobar).
#!/bin/bash
in="$1"
stage2=$(mktemp)
trap 'rm -f "$stage2"' EXIT
sed -n -e 's,^# \([[:alnum:]_]\+\) = \([^#]\+\),s#\${\1}#\2#g,p' "$in" > "$stage2"
sed -f "$stage2" "$in"
Provide a filename as the first argument, and it will print the filled out template on stdout.
This example code is pretty strict about white space on variable definition lines, but that can obviously be adjusted to your liking.
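For the sample substitutions in the question (rewritten with curly braces), the first stage would generate a stage-two sed script along these lines:
s#${room}#living_room#g
s#${switch}#hallway_motion#g
s#${delay}#3#g
The second sed pass then applies those substitutions to the template itself, so a line like entity_id: ${switch} comes out as entity_id: hallway_motion.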

cat on a quoted variable fails

I have this code snippet:
userjobs=$(grep -rw "$USER" /my/job/dir/|awk '{print $1}'|sort|uniq|rev|cut -c 2-|rev)
for job in "${userjobs[@]}"; do
cat "$job"
done
exit 0
When I run it as is, I get the following output:
cat: /my/job/dir/45
/my/job/dir/46: No such file or directory
However, if I unquote $job, I no longer receive this behavior, and it cats each of the files as expected.
I've done some reading up on globbing and splitting to see if this is what's occurring, but it seems like double-quoting should prevent that from happening. Can anyone explain why the behavior is different between "$job" and $job?
This happens because your variable looks like:
userjobs='/my/job/dir/45
/my/job/dir/46'
If you expand it with "${userjobs[@]}", it acts as an array with exactly one element -- that string. Thus, the behavior is identical to:
userjobs=( [0]='/my/job/dir/45
/my/job/dir/46' )
...still exactly one string with a literal newline in it.
Thus, cat "$job" looks for a file with a literal newline in its name.
To load your result into a real array you can iterate over, with "${userjobs[@]}" expanding to a distinct element per line, use:
readarray -t userjobs < <(grep ...)
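Putting that together with the pipeline from the question, the loop might look like this (the pipeline is kept exactly as written there):
readarray -t userjobs < <(grep -rw "$USER" /my/job/dir/|awk '{print $1}'|sort|uniq|rev|cut -c 2-|rev)
for job in "${userjobs[@]}"; do
  cat "$job"
done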
userjobs needs to be an array. Put parentheses around the value when assigning it:
userjobs=($(grep -rw "$USER" /my/job/dir/|awk '{print $1}'|sort|uniq|rev|cut -c 2-|rev))

compare process list before and after running bash

Trying to compare the process list before and after running a bash script of tests. Having trouble, since ps returns 1, and I'm not sure how to compare the before and after when I have them.
Ideally, it would look something like this. Forgive the crude pseudo-code:
run-tests:
ps -x
export before=$?
# run tests and scripts
ps -x
export after=$?
# compare before and after
Suggestions and advice appreciated.
I'm assuming you want to count the number of running processes before and after (your question wasn't overly clear on that). If so, you can pipe ps into wc:
export before=`ps --no-headers | wc -l`
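The same count can be taken after the tests have run, and the two values compared arithmetically, for example:
before=$(ps --no-headers | wc -l)
# run tests and scripts here
after=$(ps --no-headers | wc -l)
echo "process count changed by $((after - before))"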
-- EDIT ---
I reread the question, and it may be that you're looking for the actual processes that differ. If that's the case, then, you can capture the output in variables and compare those:
target:
@before=$$(ps --no-headers); \
run test; \
after=$$(ps --no-headers); \
echo "differing processes:"; \
comm -3 <(echo "$$before") <(echo "$$after")
A few quick notes on this: I joined all the lines with backslashes, since you mentioned you are using makefiles and the scope of a variable is only the recipe line in which it's defined. By joining the lines, the variables stay in scope for the whole recipe.
I used double $$ as your original post suggested a makefile, and a makefile $$ will expand to a single $ in the bash code to be run.
Doing var=$(command) in bash assigns var the output of command
I used the <() convention, which is specific to bash. This lets you treat the output of a command as a file, without having to actually create a file. Notice that I put quotes around the variables -- this is required, otherwise bash will turn the newlines into spaces when expanding the variable.
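Outside of a makefile, the same idea in a plain bash script might look like this (comm expects sorted input, so the captured lists are sorted before comparing):
#!/bin/bash
before=$(ps --no-headers | sort)
# run tests and scripts here
after=$(ps --no-headers | sort)
echo "differing processes:"
comm -3 <(echo "$before") <(echo "$after")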

Modifying a variable in another shell script

I am trying to modify the variables of one shell script, using another script. This is what I have so far:
script1.sh
#!/bin/bash
var=123.45.67.890
script2.sh
#!/bin/bash
currVar=000.00.00.000
. /./script1.sh
var=$currVar
I understand that I am not modifying Script 1 here, but simply temporarily modifying var. How can I modify this var in script 1, via script 2?
Solution
. /./script1.sh
echo $var | sed "s/$var/$currVar/g" /./script1.sh > "temp.txt" && mv temp.txt /./script1.sh
Just use sed in 2nd script (script2.sh) as
currVar="000.00.00.000"
sed -r -i.bak "s/var=([[:graph:]]+)/var=$currVar/" script1.sh
var=000.00.00.000
where [[:graph:]] is a character class equivalent to [[:alnum:]] plus [[:punct:]], so it matches values of var made up of printable, non-blank characters.
Since you mentioned it is a proper IP address, use a proper regEx as
sed -r "s/(\b[0-9]{1,3}\.){3}[0-9]{1,3}\b/$currVar/" script1.sh
var=000.00.00.000
(\b[0-9]{1,3}\.){3}[0-9]{1,3} matches three groups of one to three digits, each followed by a dot, and then a fourth group of one to three digits; each group corresponds to one octet of the IP address.
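A complete script2.sh along those lines might look like this (a sketch reusing the file names from the question; -i.bak keeps a backup of the original file):
#!/bin/bash
currVar="000.00.00.000"
# replace whatever IP-like value is currently assigned in script1.sh
sed -r -i.bak "s/(\b[0-9]{1,3}\.){3}[0-9]{1,3}\b/$currVar/" script1.sh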
Normally, variables in scripts have local scope. If you export the variable, you can extend this scope to include any child processes. However, it looks like you might want to use the modified value when script1.sh runs. If that is the case, you can use the new var value as an input to script1.sh when you run it.
if [[ -z "$1" ]];
then
var=$1
else
var=123.45.67.890
This will check if you gave any parameters when you ran script1.sh, and if you did, then it should set your var equal to this value instead of the default IP.
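With that change in place, script2.sh no longer needs to edit the file at all; it can simply pass the new value when it runs script1.sh, for example:
#!/bin/bash
currVar=000.00.00.000
# script1.sh picks up $1 and uses it instead of the default IP
./script1.sh "$currVar"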

How do I use `sed` to alter a variable in a bash script?

I'm trying to use enscript to print PDFs from Mutt, and hitting character encoding issues. One way around them seems to be to just use sed to replace the problem characters: sed -ir 's/[“”]/"/g' {input}
My test input file is this:
“very dirty”
we’re
I'm hoping to get "very dirty" and we're but instead I'm still getting
â\200\234very dirtyâ\200\235
weâ\200\231re
I found a nice little post on printing to PDFs from Mutt that I used as a starting point. I have a bash script that I point to from my .muttrc with set print_command="$HOME/.mutt/print.sh" -- the script currently reads about like this:
#!/bin/bash
input="$1" pdir="$HOME/Desktop" open_pdf=evince
# Straighten out curly quotes
sed -ir 's/[“”]/"/g' $input
sed -ir "s/[’]/'/g" $input
tmpfile="`mktemp $pdir/mutt_XXXXXXXX.pdf`"
enscript --font=Courier8 $input -2r --word-wrap --fancy-header=mutt -p - 2>/dev/null | ps2pdf - $tmpfile
$open_pdf $tmpfile >/dev/null 2>&1 &
sleep 1
rm $tmpfile
It does a fine job of creating a PDF (and works fine if you give it a file as an argument) but I can't figure out how to fix the curly quotes.
I've tried a bunch of variations on the sed line:
input=sed -r 's/[“”]/"/g' $input
$input=sed -ir "s/[’]/'/g" $input
Per the suggestion at Can I use sed to manipulate a variable in bash? I also tried input=$(sed -r 's/[“”]/"/g' <<< $input) and I get an error: "Syntax error: redirection unexpected"
But none manages to actually change $input -- what is the correct syntax to change $input with sed?
Note: I accepted an answer that resolved the question I asked, but as you can see from the comments there are a couple of other issues here. enscript is taking in a whole file as a variable, not just the text of the file. So trying to tweak the text inside the file is going to take a few extra steps. I'm still learning.
On Editing Variables In General
BashFAQ #21 is a comprehensive reference on performing search-and-replace operations in bash, including within variables, and is thus recommended reading. On this particular case:
Use the shell's native string manipulation instead; this is far higher performance than forking off a subshell, launching an external process inside it, and reading that external process's output. BashFAQ #100 covers this topic in detail, and is well worth reading.
Depending on your version of bash and configured locale, it might be possible to use a bracket expression (ie. [“”], as your original code did). However, the most portable thing is to treat “ and ” separately, which will work even without multi-byte character support available.
input='“hello ’cruel’ world”'
input=${input//'“'/'"'}
input=${input//'”'/'"'}
input=${input//'’'/"'"}
printf '%s\n' "$input"
...correctly outputs:
"hello 'cruel' world"
On Using sed
To provide a literal answer -- you almost had a working sed-based approach in your question.
input=$(sed -r 's/[“”]/"/g' <<<"$input")
...adds the missing syntactic double quotes around the parameter expansion of $input, ensuring that it's treated as a single token regardless of how it might be string-split or glob-expanded.
But All That May Not Help...
The below is mentioned because your test script is manipulating content passed on the command line; if that's not the case in production, you can probably disregard the below.
If your script is invoked as ./yourscript “hello * ’cruel’ * world”, then information about exactly what the user entered is lost before the script is started, and nothing you can do here will fix that.
This is because $1, in that scenario, will only contain “hello; ’cruel’ and world” will be in their own argv locations, and the *s will have been replaced with lists of files in the current directory (each such file substituted as a separate argument) before the script was even started. Because the shell responsible for parsing the user's command line (which is not the same shell running your script!) did not recognize the quotes as valid at the time when it did this parsing, by the time the script is running there's nothing you can do to recover the original data.
Abstract: This answer explores how to use sed to change a variable, but what you really need is a way to read and edit a file; that is covered further down.
Sed
The two sed lines could be replaced with this (note that -i is not used, because we are working on a value, not a file):
input='“very dirty”
we’re'
sed 's/[“”]/\"/g;s/’/'\''/g' <<<"$input"
But it should be faster (for small strings) to use the internals of the shell:
input='“very dirty”
we’re'
input=${input//[“”]/\"}
input=${input//[’]/\'}
printf '%s\n' "$input"
$1
But there is an underlying problem with your script: you are trying to clean input received from the command line, using $1 as the source of the string. Once somebody writes:
./script “very dirty”
we’re
That input is lost. It is broken into the shell's tokens, and "$1" will be only “very.
But I do not believe that is what you really have.
file
However, you are also saying that the input comes from a file. If that is the case, then read it in with:
input="$(<infile)" # not $1
sed 's/[“”]/\"/g;s/’/'\''/g' <<<"$input"
Or, if you don't mind editing (changing) the file, do this instead:
sed -i 's/[“”]/\"/g;s/’/'\''/g' infile
input="$(<infile)"
Or, if you are clear and certain that what is being given to the script is a filename, like:
./script infile
You can use:
infile="$1"
sed -i 's/[“”]/\"/g;s/’/'\''/g' "$infile"
input="$(<"$infile")"
Other comments:
Quote your variables.
Do not use the very old `…` syntax, use $(…) instead.
Do not use variables in UPPER case, those are reserved for environment variables.
And (unless you actually meant sh) use a shebang (first line) that targets bash.
The command enscript most definitely requires a file, not a variable.
Maybe you should use evince to open the PS file directly; there is no need for the extra step of making a PDF unless you know you really need it.
I believe it is better to use a file to store the output of enscript and ps2pdf.
Do not hide the errors printed by the commands until everything is working as desired; then just call the script as:
./script infile 2>/dev/null
Or as required to make it less verbose.
Final script.
If you call the script with the name of the file that enscript is going to use, something like:
./script infile
Then, the whole script will look like this (it runs in both bash and sh):
#!/usr/bin/env bash
Usage(){ echo "$0; This script requires a source file"; exit 1; }
[ $# -lt 1 ] && Usage
[ ! -e "$1" ] && Usage
infile="$1"
pdir="$HOME/Desktop"
open_pdf=evince
# Straighten out curly quotes
sed -i 's/[“”]/\"/g;s/’/'\''/g' "$infile"
tmpfile="$(mktemp "$pdir"/mutt_XXXXXXXX.pdf)"
outfile="${tmpfile%.*}.ps"
enscript --font=Courier10 "$infile" -2r \
--word-wrap --fancy-header=mutt -p "$outfile"
ps2pdf "$outfile" "$tmpfile"
"$open_pdf" "$tmpfile" >/dev/null 2>&1 &
sleep 5
rm "$tmpfile" "$outfile"
