How can I pass input to output in bash? - bash

I am trying to streamline a README, so that I can easily capture commands and their outputs into a document. This step seems harder than I thought it would be.
I am trying to write both the command and its output to a file, but everything I try displays either echo test or just test
The latest iteration, which is becoming absurd is:
echo test | xargs echo '#' | cat <(echo) <(cat -) just shows # test
I would like the results to be:
echo test
# test

You can make a bash function to demonstrate a command and its output like this:
democommand() {
    printf '#'
    printf ' %q' "$@"
    printf '\n'
    "$@"
}
This prints "#", then each argument the function was passed (i.e. the command and its arguments) with a space before each one (and the %q makes it quote/escape them as needed), then a newline, and then finally it runs all of its arguments as a command. Here's an example:
$ democommand echo test
# echo test
$ democommand ls
# ls
Desktop Downloads Movies Pictures Sites
Documents Library Music Public
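For what it's worth, the %q also keeps arguments containing spaces intact; a quick illustration (not part of the original example):
$ democommand echo 'hello world'
# echo hello\ world
hello world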
Now, as for why your command didn't work... well, I'm not clear what you thought it was doing, but here's what it's actually doing:
The first command in the pipeline, echo test, simply prints the string "test" to its standard output, which is piped to the next command in the chain.
xargs echo '#' takes its input ("test") and adds it to the command it's given (echo '#') as additional arguments. Essentially, it executes the command echo '#' test. This outputs "# test" to the next command in the chain.
cat <(echo) <(cat -) is rather complicated, so let's break it down:
echo prints a blank line
cat - simply copies its input (which at this point in the pipeline is still the output of the xargs command, i.e. "# test").
cat <(echo) <(cat -) takes the output of those two <() commands and concatenates them together, resulting in a blank line followed by "# test".
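To make the breakdown concrete, here is roughly what each stage of the pipeline produces (the blank line in the final output comes from <(echo)):
$ echo test
test
$ echo test | xargs echo '#'
# test
$ echo test | xargs echo '#' | cat <(echo) <(cat -)

# test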

Pass the command as a literal string so that you can both print and evaluate it:
doc() { printf '$ %s\n%s\n' "$1" "$(eval "$1")"; }
Running:
doc 'echo foo | tr f c' > myfile
Will make myfile contain:
$ echo foo | tr f c
coo
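You can append several demonstrated commands to the same document this way; a hypothetical usage sketch (README.md is just an example filename):
doc 'echo test' >> README.md
doc 'ls | wc -l' >> README.md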

Related

How to parse multiple line output as separate variables

I'm relatively new to bash scripting and I would like someone to explain this properly, thank you. Here is my code:
#! /bin/bash
echo "first arg: $1"
echo "first arg: $2"
var="$( grep -rnw $1 -e $2 | cut -d ":" -f1 )"
var2=$( grep -rnw $1 -e $2 | cut -d ":" -f1 | awk '{print substr($0,length,1)}')
echo "$var"
echo "$var2"
The problem I have is with the output. The script I'm trying to write is a C++ function searcher, so upon launching my script I pass 2 arguments, one for the directory and the second one for the function name. This is what my output looks like:
first arg: Projekt
first arg: iseven
Projekt/AX/include/ax.h
Projekt/AX/src/ax.cpp
h
p
Now my question is: how can I save the line-by-line output as variables, so that later on I can use var as a path, or var2 as a character to compare? My plan was to use if statements to determine the type, e.g. if(last_char == p){echo "something"}. What I've tried was this question: Capturing multiple line output into a Bash variable, and then treating it as an array, so my code looked like "${var[0]}". Please explain how I can use my line output later on, as variables.
I'd use readarray to populate an array variable, in case there are spaces in your command's output that shouldn't be treated as field separators (they would end up messing up foo=( ... )). And you can use shell parameter expansion substring syntax to get the last character of a variable; no need for that awk bit in your var2:
#!/usr/bin/env bash
readarray -t lines < <(printf "%s\n" "Projekt/AX/include/ax.h" "Projekt/AX/src/ax.cpp")
for line in "${lines[#]}"; do
printf "%s\n%s\n" "$line" "${line: -1}" # Note the space before the -1
done
will display
Projekt/AX/include/ax.h
h
Projekt/AX/src/ax.cpp
p
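Putting that together with the grep from your script, you could branch on the last character like this (a sketch only; the "source file"/"header file" labels are made up for illustration):
readarray -t lines < <(grep -rnw "$1" -e "$2" | cut -d ":" -f1)
for line in "${lines[@]}"; do
    if [ "${line: -1}" = "p" ]; then
        echo "source file: $line"   # path ends in .cpp
    else
        echo "header file: $line"   # path ends in .h
    fi
done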

Modify bash variable with sed

Why doesn't the following bash script work? I would like it to output
two lines like this:
XXXXXXX
YYYYYYY
It works if I change the sed line to use a filename instead of the variable, but I want to use the variable.
#!/bin/bash
input=$(echo -e '=======\n-------\n')
for sym in = -; do
if [ "$sym" == '-' ]; then
replace=Y
else
replace=X
fi
printf "%s\n" "s/./$replace/g"
done | sed -f- <<<"$input"
The main problem is that you're giving sed two sources to read standard input from: the for loop that is fed through the pipe, and the variable coming through the here-string. As it turns out, the here-string gets precedence and sed complains that there are extra characters after a command (= is a command).
Instead of a here-string, you could use process substitution:
for sym in = -; do
if [ "$sym" == '-' ]; then
replace=Y
else
replace=X
fi
printf "%s\n" "s/./$replace/g"
done | sed -f- <(printf '%s\n' '=======' '-------')
You'll notice that the output isn't what you want, though, namely
YYYYYYY
YYYYYYY
This is because the sed script you end up with looks like this:
s/./X/g
s/./Y/g
No matter what you do first, the last command replaces everything with Y.
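One way around that (a sketch, not part of the original answer) is to anchor each substitution to lines that actually contain the symbol, so the two rules can't clobber each other:
for sym in = -; do
    if [ "$sym" == '-' ]; then
        replace=Y
    else
        replace=X
    fi
    printf '%s\n' "/$sym/ s/./$replace/g"
done | sed -f- <(printf '%s\n' '=======' '-------')
This produces XXXXXXX followed by YYYYYYY, because the Y rule no longer matches the line that has already been turned into X's.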

Sed variable too long

I need to substitute a unique string in a JSON file, {FILES}, with the contents of a bash variable, ${FILES}, that holds thousands of paths:
sed -i "s|{FILES}|$FILES|" ./myFile.json
What would be the most elegant way to achieve that? The content of ${FILES} is the result of an "aws s3" command and would look like:
FILES="/file1.ipk, /file2.ipk, /subfolder1/file3.ipk, /subfolder2/file4.ipk, ..."
I can't think of a solution where xargs would help me.
The safest way is probably to let Bash itself expand the variable. You can create a Bash script containing a here document with the full contents of myFile.json, with the placeholder {FILES} replaced by a reference to the variable $FILES (not the contents itself). Execution of this script would generate the output you seek.
For example, if myFile.json would contain:
{foo: 1, bar: "{FILES}"}
then the script should be:
#!/bin/bash
cat << EOF
{foo: 1, bar: "$FILES"}
EOF
You can generate the script with a single sed command:
sed -e '1i#!/bin/bash\ncat << EOF' -e 's/\$/\\$/g;s/{FILES}/$FILES/' -e '$aEOF' myFile.json
Notice sed is doing two replacements; the first one (s/\$/\\$/g) to escape any dollar signs that might occur within the JSON data (replace every $ by \$). The second replaces {FILES} by $FILES; the literal text $FILES, not the contents of the variable.
Now we can combine everything into a single Bash one-liner that generates the script and immediately executes it by piping it to Bash:
sed -e '1i#!/bin/bash\ncat << EOF' -e 's/\$/\\$/g;s/{FILES}/$FILES/' -e '$aEOF' myFile.json | /bin/bash
Or even better, execute the script without spawning a subshell (useful if $FILES is set without export):
sed -e '1i#!/bin/bash\ncat << EOF' -e 's/\$/\\$/g;s/{FILES}/$FILES/' -e '$aEOF' myFile.json | source /dev/stdin
Output:
{foo: 1, bar: "/file1.ipk, /file2.ipk, /subfolder1/file3.ipk, /subfolder2/file4.ipk, ..."}
Maybe perl would have fewer limitations?
perl -pi -e "s#{FILES}#${FILES}#" ./myFile.json
It's a little gross, but you can do it all within shell...
while read l
do
    if ! echo "$l" | grep -q '{FILES}'
    then
        echo "$l"
    else
        echo "$l" | sed 's/{FILES}.*$//'
        echo "$FILES"
        echo "$l" | sed 's/^.*{FILES}//'
    fi
done <./myFile.json >newFile.json
#mv newFile.json myFile.json
Obviously I'd leave the final line commented until you were confident it worked...
Maybe just don't do it? Can you just:
echo "var f = " > myFile2.json
echo $FILES >> myFile2.json
And reference myFile2.json from within your other json file? (You should put the global f variable into a namespace if this works for you.)
Instead of putting all those variables in an environment variable, put them in a file. Then read that file in perl:
foo.pl:
open X, "$ARGV[0]" or die "couldn't open";
shift;
$foo = <X>;
while (<>) {
s/world/$foo/;
print;
}
Command to run:
aws s3 ... >/tmp/myfile.$$
perl foo.pl /tmp/myfile.$$ <myFile.json >newFile.json
Hopefully that will bypass the limitations of the environment variable space and the argument length by pulling all the processing within perl itself.
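If you'd rather stay in bash, the same file-based idea can be done with parameter expansion, which sidesteps sed and argument-length limits entirely; a rough sketch under the same assumptions (/tmp/myfile.$$ being the hypothetical file written by the aws s3 command):
files=$(</tmp/myfile.$$)    # replacement text produced by the aws s3 command
json=$(<myFile.json)        # the whole template as one string
printf '%s\n' "${json//\{FILES\}/$files}" > newFile.json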

Read a file and replace ${1}, ${2}... value with string

I have a file template.txt and its content is below:
param1=${1}
param2=${2}
param3=${3}
I want to replace the ${1}, ${2}, ${3}...${n} strings with the elements of the scriptParams variable.
The code below only replaces the first line.
scriptParams="test1,test2,test3"
cat template.txt | for param in ${scriptParams} ; do i=$((++i)) ; sed -e "s/\${$i}/$param/" ; done
RESULT:
param1=test1
param2=${2}
param3=${3}
EXPECTED:
param1=test1
param2=test2
param3=test3
Note: I don't want to save the replaced file, I want to use its replaced value.
If you intend to use an array, use a real array. sed is not needed either:
$ cat template
param1=${1}
param2=${2}
param3=${3}
$ scriptParams=("test one" "test two" "test three")
$ while read -r l; do for((i=1;i<=${#scriptParams[@]};i++)); do l=${l//\$\{$i\}/${scriptParams[i-1]}}; done; echo "$l"; done < template
param1=test one
param2=test two
param3=test three
Learn to debug:
cat template.txt | for param in ${scriptParams} ; do i=$((++i)) ; echo $i - $param; done
1 - test1,test2,test3
Oops..
scriptParams="test1 test2 test3"
cat template.txt | for param in ${scriptParams} ; do i=$((++i)) ; echo $i - $param; done
1 - test1
2 - test2
3 - test3
Ok, looks better...
cat template.txt | for param in ${scriptParams} ; do i=$((++i)) ; sed -e "s/\${$i}/$param/" ; done
param1=test1
param2=${2}
param3=${3}
Ooops... so what's the problem? Well, the first sed command "eats" all the input. You haven't built a pipeline where one sed command feeds the next... you have three seds trying to read the same input, and obviously the first one processes the whole input.
Ok, let's take a different approach, let's create the arguments for a single sed command (note: the "" is there to force echo not to interpret -e as a command-line switch).
sedargs=$(for param in ${scriptParams} ; do i=$((++i)); echo "" -e "s/\${$i}/$param/"; done)
cat template.txt | sed $sedargs
param1=test1
param2=test2
param3=test3
That's it. Note that this isn't perfect; you can have all sorts of problems if the replacement texts are complex (e.g. contain spaces).
Let me think how to do this in a better way... (well, the obvious solution which comes to mind is not to use a shell script for this task...)
Update:
If you want to build a proper pipeline, here are some solutions: How to make a pipe loop in bash
You can do that with just bash alone:
#!/bin/bash
scriptParams=("test1" "test2" "test3") ## Better store it as arrays.
while read -r line; do
for i in in "${!scriptParams[#]}"; do ## Indices of array scriptParams would be populated to i starting at 0.
line=${line/"\${$((i + 1))}"/"${scriptParams[i]}"} ## ${var/p/r} replaces patterns (p) with r in the contents of var. Here we also add 1 to the index to fit with the targets.
done
echo "<br>$line</br>"
done < template.txt
Save it in a script and run bash script.sh to get an output like this:
<br>param1=test1</br>
<br>param2=test2</br>
<br>param3=test3</br>

How can I expand arguments to a bash function into a chain of piped commands?

I often find myself doing something like this a lot:
something | grep cat | grep bat | grep rat
when all I recall is that those three words must have occurred somewhere, in some order, in the output of something... Now, I could do something like this:
something | grep '.*cat.*bat.*rat.*'
but that implies ordering (bat appears after cat). As such, I was thinking of adding a bash function to my environment called mgrep which would turn:
mgrep cat bat rat
into
grep cat | grep bat | grep rat
but I'm not quite sure how to do it (or whether there is an alternative?). One idea would be to loop over the parameters like so:
while (($#)); do
grep $1 some_thing > some_thing
shift
done
cat some_thing
where some_thing is possibly some FIFO, like when one does >(cmd) in bash, but I'm not sure. How would one proceed?
I believe you could generate a pipeline one command at a time, by redirecting stdin at each step. But it's much simpler and cleaner to generate your pipeline as a string and execute it with eval, like this:
CMD="grep '$1' " # consume the first argument
shift
for arg in "$#" # Add the rest in a pipeline
do
CMD="$CMD | grep '$arg'"
done
eval $CMD
This will generate a pipeline of greps that always reads from standard input, as in your model. Note that it protects spaces in quoted arguments, so that it works correctly if you write:
mgrep 'the cat' 'the bat' 'the rat'
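For that call, the string handed to eval ends up being (give or take spacing) exactly the pipeline you wanted:
grep 'the cat' | grep 'the bat' | grep 'the rat'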
Thanks to Alexis, this is what I did:
function mgrep() #grep multiple keywords
{
    CMD=''
    while (($#)); do
        CMD="$CMD grep \"$1\" | "
        shift
    done
    eval ${CMD%| }
}
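A quick usage check of the function above (the sample lines are made up for illustration; only the last one contains all three keywords):
$ printf '%s\n' 'the cat sat' 'a bat and a rat' 'cat bat rat here' | mgrep cat bat rat
cat bat rat here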
You can write a recursive function; I'm not happy with the base case, but I can't think of a better one. It seems a waste to need to call cat just to pass standard input to standard output, and the while loop is a bit inelegant:
mgrep () {
    local e=$1;
    # shift && grep "$e" | mgrep "$@" || while read -r; do echo "$REPLY"; done
    shift && grep "$e" | mgrep "$@" || cat
    # Maybe?
    # shift && grep "$e" | mgrep "$@" || echo "$(</dev/stdin)"
}
