I'm storing commands in a file to be read and run line by line by a POSIX shell program. It looks something like this:
curl -fLo $HOME/.antigen.zsh git.io/antigen
curl -fLo $HOME/.vim/autoload/plug.vim --create-dirs https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim
vim +"so $HOME/.vimrc" +PlugInstall +qa!
I'm also using this small function body to go through it and run every line:
while read -r line; do
$line
done < file
Simple stuff. And it works! However, I'm having trouble expanding $HOME to my home directory (and ~, for that matter). I've tried using an exec subshell and removing the -r from the read loop, but the curl commands create a literal '$HOME' directory, which is not what I want; I want the commands to target my /home/.\+/ directory.
Since this is a strange question and you'll probably be wondering at this point (I certainly would): this is not an XY problem. I have spent considerable time designing this piece of software, am certain that I need to store these commands in a file for my program to work, and won't consider doing otherwise unless this is proven absolutely impossible. Also, I'm not expanding $HOME myself because I want the commands to work on other users' computers.
Any ideas? Thanks in advance!
Transferring comments into an answer.
Can you use:
sh -c "$line"
Or:
eval "$line"
Usually eval is regarded as dangerous, but I'm not sure that sh -c is much different. Come to think of it, why not simply execute the file storing the commands?
sh "$file"
You can use sh -e "$file" to stop on an unchecked error, and add -x to see what is being executed.
My Goal
I'm writing a small Bash script that uses entr, a utility that re-runs arbitrary commands when it detects file-system events. My immediate goal is to pass entr a command which converts a given markdown file to HTML; entr will run this command every time the markdown file changes. A simplified but working script looks like:
# script 1
in="$1"
out="${in%.md}.html"
echo "$in" | entr pandoc "${in}" -o "${out}"
This works fine. The filename to be watched is supplied to entr on stdin. On detecting changes in that file, entr runs the command specified by its args. In this example that is pandoc, and all the args after it, to convert the markdown file to an HTML file.
For future reference, set -x shows that entr was invoked as we'd expect. (Throughout, lines starting with + show the output from set -x):
+ entr pandoc 'READ ME.md' -o 'READ ME.html'
The problem
I want to look up the command given to entr depending on the file type of the
given input file. So the file-conversion command ends up in a variable, and I want to use that variable as the command-line args to entr. But I can't get the quoting right.
Again, simplified:
# script 2
in="$1"
out="${in%.md}.html"
cmd="pandoc \"${in}\" -o \"${out}\""
echo "$in" | entr "$cmd"
(shellcheck.net detects no issues on the above)
This fails. Because "$cmd" in the final line is in quotes, the entirety of $cmd
is treated as a single arg to entr:
+ entr 'pandoc "READ ME.md" -o "READ ME.html"'
entr tries to interpret the whole thing as the name of an executable, which
it cannot find:
entr: exec pandoc "READ ME.md" -o "READ ME.html": No such file or directory
So how should I modify script 2, to use the content of $cmd as the args to
entr?
What have I tried?
Check that $cmd is being formed as I expect? If I echo "$cmd" right after
it is defined in script 2, it looks exactly how I'd hope:
pandoc "READ ME.md" -o "READ ME.html"
I tried messing around with alternate ways of constructing cmd, such as:
cmd='pandoc "'"${in}"'" -o "'"${out}"'"'
but variations like this produce identical values of $cmd, and identical
behavior to script 2.
Try not quoting the use of $cmd?
Since the final line of script 2 erroneously treats the whole of "$cmd"
as a single arg, and we want it to split up the words into separate args
instead, maybe removing the quotes and using a bare $cmd is a step in the
right direction?
echo "$in" | entr $cmd
Predictably enough though, this splits $cmd up on every space, even the
ones inside our double-quotes:
+ entr pandoc '"READ' 'ME.md"' -o '"READ' 'ME.html"'
This makes Pandoc try, and fail, to open a file called "READ:
pandoc: "READ: openBinaryFile: does not exist (No such file or directory)
Try constructing $cmd using printf?
I notice printf -v can store output in a variable. How about using that
instead of assigning to cmd?
printf -v cmd 'pandoc "%s" -o "%s"' "$in" "$out"
Predictably enough, this produces the same results as script 2. I tried some
speculative variations, such as %q in the format string, or using $in
and $out directly in the format string, but didn't stumble on anything
that seemed to help.
Try using the ${var@Q} form of parameter expansion.
echo "$in" | entr ${cmd@Q}
Tried with and without double quotes around the use of ${cmd@Q}. No joy;
I guess I'm misunderstanding what @Q is for.
+ entr ''\''pandoc' '"READ' 'ME.md"' -o '"READ' 'ME.html"'\'''
entr: exec 'pandoc: No such file or directory
Details
I'm using Bash v5.1.16, in Pop!_OS 22.04, derived from Ubuntu 22.04 (Jammy).
The current 'apt' version of entr (v5.1) in Ubuntu Jammy (22.04) is too old
for my needs (e.g. the -z flag doesn't work), so I'm compiling my own from
the latest v5.3 source release.
I know there are a lot of questions about quoting in Bash, but I don't see any that seem to match this. Apologies if I'm wrong.
Assemble the command as an array, instead of a string.
I read somewhere that maybe $@ might do what I need, so I put the parts of $cmd into an array:
in="$1"
out="${in%.md}.html"
cmd=(pandoc "$in" -o "$out")
echo "$in" | entr "${cmd[#]}"
This correctly quotes the items in ${cmd[@]} which require it (e.g. those with spaces in).
+ entr pandoc 'READ ME.md' -o 'READ ME.html'
So entr successfully calls pandoc, which converts the documents. It works! I confess I did not expect that.
This approach seems viable for other similar situations, not just when invoking entr.
So I have a solution, but it doesn't seem completely ideal for my future plans. I had visions of these 'file conversion commands' being configurable, and hence defined in a text file somewhere, so that users (== me, probably) could override them and define their own. I'm not fluent enough with Bash to be sure how to go about that when commands are defined as arrays instead of strings (see the sketch below).
I can't help but feel I've overlooked something simpler.
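One way to keep the commands configurable in a text file while still dodging the quoting trap (a sketch of mine, not part of this answer; the commands.conf name and format are invented) is to store command templates that refer to "$1" and "$2", and let bash -c do the parsing:
# commands.conf (hypothetical): one "extension<TAB>template" per line, e.g.
#   md	pandoc "$1" -o "$2"
in="$1"
out="${in%.md}.html"
# Look up the template for this file type (extension hard-coded here).
template=$(awk -F'\t' '$1 == "md" { print $2; exit }' commands.conf)
# bash -c re-parses the template, so its internal quotes work; the real
# filenames are passed in as $1 and $2 and are never word-split.
echo "$in" | entr bash -c "$template" _ "$in" "$out"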
Use a shell to interpret the value of "$cmd":
echo "$in" | entr sh -c "$cmd"
This approach seems viable for other similar situations, not just when invoking entr.
Similarly, entr has a -s option which invokes a shell for you (chosen using the first word in $SHELL):
echo "$in" | entr -s "$cmd"
These both work well, at the minor cost of spawning an extra shell process.
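A quick way to see why this works (my illustration, not part of the original answer): the extra shell re-parses the string, honoring the embedded quotes:
cmd='pandoc "READ ME.md" -o "READ ME.html"'
bash -xc "$cmd"
# + pandoc 'READ ME.md' -o 'READ ME.html'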
I'm facing some problems passing environment parameters to docker run in a relatively generic way.
Our first iteration was to load a .env file into the environment via these lines:
set -o allexport;
. "${PROJECT_DIR}/.env";
set +o allexport;
And then manually typing the --env VARNAME=$VARNAME options for the docker run command. But this can be quite annoying when you have dozens of variables.
Then we tried to just pass the file, with --env-file .env, and it seems to work, but it doesn't, because it does not play well with quotes around the variable values.
Here is where I started doing crazy/ugly things. The basic idea was to do something like:
set_docker_parameters()
{
grep -v '^$' "${PROJECT_DIR}/.env" | while IFS= read -r LINE; do
printf " -e %s" "${LINE}"
done
}
docker run $(set_docker_parameters) --rm image:label command
Where the parsed lines are like VARIABLE="value", VARIABLE='value', or VARIABLE=value. Blank lines are discarded by the piped grep.
But docker run complains all the time about not being called properly. When I expand the result of set_docker_parameters I get what I expected, and when I copy its output and paste it in place of $(set_docker_parameters), docker run works flawlessly too.
Any idea on what I'm doing wrong here?
Thank you very much!
P.S.: I'm trying to make my script 100% POSIX-compatible, so I'll prefer any solution that does not rely on Bash-specific features.
Based on the comments of @jordanm I devised the following solution:
docker_run_wrapper()
{
    # That's not ideal, but in any case it's not directly related to the question.
    cmd=$1

    set --; # Unset all positional arguments ($# will be emptied)

    # We don't have arrays (we want to be POSIX compatible), so we'll
    # use $@ as a sort of substitute, appending new values to it.
    # Note: the loop reads the file via redirection rather than a pipe,
    # so it runs in the current shell and the positional arguments it
    # builds survive past 'done' (a piped loop would run in a subshell
    # and lose them in most shells).
    while IFS= read -r LINE; do
        [ -n "${LINE}" ] || continue; # skip blank lines (replaces the grep)
        set -- "$@" "--env";
        set -- "$@" "${LINE}";
    done < "${PROJECT_DIR}/.env"

    # We use $@ in a clearly non-standard way, just to expand the values
    # coming from the .env file.
    docker run "$@" "image:label" /bin/sh -c "${cmd}";
}
Then again, this is not the code I wrote for my particular use case, but a simplification that shows the basic idea. If you can rely on having Bash, then it could be much cleaner, by not overloading $# and using arrays.
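For reference, the Bash-only variant with a real array might look like this (my sketch, under the same .env assumptions as above):
docker_run_wrapper()
{
    local cmd=$1 line
    local args=()
    while IFS= read -r line; do
        [[ -n $line ]] && args+=(--env "$line")   # skip blank lines
    done < "${PROJECT_DIR}/.env"
    docker run "${args[@]}" "image:label" /bin/sh -c "${cmd}"
}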
I'm trying to use enscript to print PDFs from Mutt, and hitting character encoding issues. One way around them seems to be to just use sed to replace the problem characters: sed -ir 's/[“”]/"/g' {input}
My test input file is this:
“very dirty”
we’re
I'm hoping to get "very dirty" and we're but instead I'm still getting
â\200\234very dirtyâ\200\235
weâ\200\231re
I found a nice little post on printing to PDFs from Mutt that I used as a starting point. I have a bash script that I point to from my .muttrc with set print_command="$HOME/.mutt/print.sh" -- the script currently looks like this:
#!/bin/bash
input="$1" pdir="$HOME/Desktop" open_pdf=evince
# Straighten out curly quotes
sed -ir 's/[“”]/"/g' $input
sed -ir "s/[’]/'/g" $input
tmpfile="`mktemp $pdir/mutt_XXXXXXXX.pdf`"
enscript --font=Courier8 $input -2r --word-wrap --fancy-header=mutt -p - 2>/dev/null | ps2pdf - $tmpfile
$open_pdf $tmpfile >/dev/null 2>&1 &
sleep 1
rm $tmpfile
It does a fine job of creating a PDF (and works fine if you give it a file as an argument) but I can't figure out how to fix the curly quotes.
I've tried a bunch of variations on the sed line:
input=sed -r 's/[“”]/"/g' $input
$input=sed -ir "s/[’]/'/g" $input
Per the suggestion at Can I use sed to manipulate a variable in bash? I also tried input=$(sed -r 's/[“”]/"/g' <<< $input) and I get an error: "Syntax error: redirection unexpected"
But none manages to actually change $input -- what is the correct syntax to change $input with sed?
Note: I accepted an answer that resolved the question I asked, but as you can see from the comments there are a couple of other issues here. enscript is taking in a whole file as a variable, not just the text of the file. So trying to tweak the text inside the file is going to take a few extra steps. I'm still learning.
On Editing Variables In General
BashFAQ #21 is a comprehensive reference on performing search-and-replace operations in bash, including within variables, and is thus recommended reading. On this particular case:
Use the shell's native string manipulation instead; this is far higher performance than forking off a subshell, launching an external process inside it, and reading that external process's output. BashFAQ #100 covers this topic in detail, and is well worth reading.
Depending on your version of bash and configured locale, it might be possible to use a bracket expression (i.e. [“”], as your original code did). However, the most portable approach is to treat “ and ” separately, which will work even without multi-byte character support available.
input='“hello ’cruel’ world”'
input=${input//'“'/'"'}
input=${input//'”'/'"'}
input=${input//'’'/"'"}
printf '%s\n' "$input"
...correctly outputs:
"hello 'cruel' world"
On Using sed
To provide a literal answer -- you almost had a working sed-based approach in your question.
input=$(sed -r 's/[“”]/"/g' <<<"$input")
...adds the missing syntactic double quotes around the parameter expansion of $input, ensuring that it's treated as a single token regardless of how it might be string-split or glob-expanded.
But All That May Not Help...
The below is mentioned because your test script is manipulating content passed on the command line; if that's not the case in production, you can probably disregard it.
If your script is invoked as ./yourscript “hello * ’cruel’ * world”, then information about exactly what the user entered is lost before the script is started, and nothing you can do here will fix that.
This is because $1, in that scenario, will only contain “hello; ’cruel’ and world” are in their own argv locations, and the *s will have been replaced with lists of files in the current directory (each such file substituted as a separate argument) before the script was even started. Because the shell responsible for parsing the user's command line (which is not the same shell running your script!) did not recognize the quotes as valid at the time when it ran this parsing, by the time the script is running, there's nothing you can do to recover the original data.
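To see this concretely (my illustration, not from the original answer), a stand-in script that merely prints its argv shows the damage is done before your code gets control:
#!/usr/bin/env bash
# argv.sh: print each argument on its own line
printf '<%s>\n' "$@"
Invoked as ./argv.sh “hello ’cruel’ world” (no globs, to keep the output deterministic), it prints:
<“hello>
<’cruel’>
<world”>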
Abstract: The way to use sed to change a variable is explored, but what you really need is a way to use and edit a file. That is covered below.
Sed
The two sed lines could be reduced to this one (note that -i is not used: we are working on a value, not a file):
input='“very dirty”
we’re'
sed 's/[“”]/\"/g;s/’/'\''/g' <<<"$input"
But it should be faster (for small strings) to use the internals of the shell:
input='“very dirty”
we’re'
input=${input//[“”]/\"}
input=${input//[’]/\'}
printf '%s\n' "$input"
$1
But there is an underlying problem with your script: you are trying to clean input received from the command line, using $1 as the source of the string. Once somebody writes:
./script “very dirty”
we’re
That input is lost. It is broken into the shell's tokens, and "$1" will contain only “very.
But I do not believe that is what you really have.
file
However, you are also saying that the input comes from a file. If that is the case, then read it in with:
input="$(<infile)" # not $1
sed 's/[“”]/\"/g;s/’/'\''/g' <<<"$input"
Or, if you don't mind editing (changing) the file, do this instead:
sed -i 's/[“”]/\"/g;s/’/'\''/g' infile
input="$(<infile)"
Or, if you are clear and certain that what is being given to the script is a filename, like:
./script infile
You can use:
infile="$1"
sed -i 's/[“”]/\"/g;s/’/'\''/g' "$infile"
input="$(<"$infile")"
Other comments:
Quote your variables.
Do not use the very old `…` syntax, use $(…) instead.
Do not use variables in UPPER case; by convention, those are reserved for environment variables.
And (unless you actually meant sh) use a shebang (first line) that targets bash.
The command enscript most definitely requires a file, not a variable.
Maybe you should use evince to open the PS file directly; there is no need for the PDF-conversion step unless you know you really need a PDF.
I believe it is better to use a file to store the output of enscript and ps2pdf.
Do not hide the errors printed by the commands until everything is working as desired; then, just call the script as:
./script infile 2>/dev/null
Or as required to make it less verbose.
Final script.
If you call the script with the name of the file that enscript is going to use, something like:
./script infile
Then the whole script will look like this (it runs in both bash and sh):
#!/usr/bin/env bash
Usage(){ echo "$0; This script require a source file"; exit 1; }
[ $# -lt 1 ] && Usage
[ ! -e "$1" ] && Usage
infile="$1"
pdir="$HOME/Desktop"
open_pdf=evince
# Straighten out curly quotes
sed -i 's/[“”]/\"/g;s/’/'\''/g' "$infile"
tmpfile="$(mktemp "$pdir"/mutt_XXXXXXXX.pdf)"
outfile="${tmpfile%.*}.ps"
enscript --font=Courier10 "$infile" -2r \
--word-wrap --fancy-header=mutt -p "$outfile"
ps2pdf "$outfile" "$tmpfile"
"$open_pdf" "$tmpfile" >/dev/null 2>&1 &
sleep 5
rm "$tmpfile" "$outfile"
This is the bash script.
Counter.sh:
#!/bin/bash
rm -rf home/pi/temp.mp3
cd /home/pi/
now=$(date +"%d-%b-%Y")
count="countshift1.sh"
mkdir $(date '+%d-%b-%Y')
On row 5 of this script (the count variable), I just want to know how to use AWK to change the integer 1 (the 18th character, thanks for the response) into a 3 and then save the Counter.sh file.
This is basically http://mywiki.wooledge.org/BashFAQ/050 -- assuming your script actually does something with $count somewhere further down, you should probably refactor that to avoid this antipattern. See the linked FAQ for much more on this topic.
Having said that, it's not hard to do what you are asking here without making changes to live code. Consider something like
awk 'END { print 5 }' /dev/null > file
in a cron job or similar (using Awk just because your question asks for it, not because it's the best tool for this job) and then, in your main script, using that file:
read index <file
count="countshift$index.sh"
While this superficially removes the requirement to change the script on the fly (which is a big win) you still have another pesky problem (code in a variable!), and you should probably find a better way to solve it.
I don't think awk is the ideal tool for that. There are many ways to do it.
I would use Perl.
perl -pi -e 's/countshift1/countshift3/' Counter.sh
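That said, if you do want to stay with Awk: GNU awk 4.1+ has an in-place mode comparable to sed -i (a sketch; GNU-specific, and it edits Counter.sh just like the Perl one-liner above):
gawk -i inplace '{ sub(/countshift1\.sh/, "countshift3.sh") } 1' Counter.sh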
I'm completely new to bash script writing. I have a query that generates an output; I need to add that output to another bash script, then run that second bash script, and after that move on to the next step in the main script. I've put this together after spending hours searching on the internet. I know that it's far from correct, but I'm stuck.
#!/bin/bash
<<EOF
\c database;
COPY (
select 'cp ./branding/'||value||' ./customization/'||replace(value,'curric-mobile:','') from branding_resource
where media_type like 'image%'
and name like 'curric-mobile%'
and account_uid = 'b1b08a9e-9310-41ce-b2cb-6b4d590c8104')
TO '/tmp/t1.sh';
EOF
if [[ -f /tmp/t1.sh ]]
chmod +x /tmp/t1.sh
/tmp/t1.sh | tee -a /tmp/t1.log
The output of the script should look like:
cp ./branding/curric-mobile:c9bec2f6-0b13-4243-8176-d0dc27774fd9.png ./customization/c9bec2f6-0b13-4243-8176-d0dc27774fd9.png
cp ./branding/curric-mobile:6f3d3554-03bc-4771-9069-60c82ebc64c5.jpg ./customization/6f3d3554-03bc-4771-9069-60c82ebc64c5.jpg
cp ./branding/curric-mobile:f32aae31-5ef6-4a1c-893f-8d7bbd560707.png ./customization/f32aae31-5ef6-4a1c-893f-8d7bbd560707.png
cp ./branding/curric-mobile:4a1c88a8-60c0-4878-9de2-fa141fec3391.png ./customization/4a1c88a8-60c0-4878-9de2-fa141fec3391.png
...
Thanks!
This is still broken, because it needs to have the query amended to exclude values which contain \x01 or \x02 characters, but it's a little closer to something reasonable:
#!/bin/bash
while IFS=$'\x02' read -r -d $'\x01' -a fields; do
cp -- "./branding/${fields[0]}" "./customization/${fields[0]#curric-mobile:}"
done < <(psql -q -t -R $'\x01' -F $'\x02' --pset='format=unaligned' <<'EOF'
SELECT value
FROM branding_resource
WHERE media_type like 'image%'
AND name like 'curric-mobile%'
AND account_uid = 'b1b08a9e-9310-41ce-b2cb-6b4d590c8104'
EOF
)
If you don't set PGDATABASE, PGUSER, and other relevant environment variables, add command-line arguments with the appropriate data as necessary.
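For example (the connection details and script name here are placeholders):
# Either export the connection settings through the environment...
PGDATABASE=mydb PGUSER=myuser PGHOST=localhost ./branding_copy.sh
# ...or pass the equivalent flags to psql inside the script:
#   psql -q -t -d mydb -U myuser -h localhost -R $'\x01' -F $'\x02' ...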
Note that we aren't trying to generate shell scripts from inside the database at all, but merely pulling out data (in a well-defined format), and running commands based on that data.
This prevents shell injection attacks, which the other approach was badly prone to. Even if a value column in branding_resource contains something like $(rm -rf /), there's no risk of that code being executed as given here.
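To make that concrete (my example, not part of the original answer), suppose a hostile row contained a command substitution:
fields=( 'curric-mobile:$(touch /tmp/pwned).png' )
cp -- "./branding/${fields[0]}" "./customization/${fields[0]#curric-mobile:}"
# cp: cannot stat './branding/curric-mobile:$(touch /tmp/pwned).png': No such file or directory
The variable is expanded after parsing, inside double quotes, so cp receives the value as literal bytes; /tmp/pwned is never created.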