I have a file template.txt and its content is below:
param1=${1}
param2=${2}
param3=${3}
I want to replace the ${1}, ${2}, ${3} ... ${n} placeholders with the elements of the scriptParams variable.
The code below only replaces the first line.
scriptParams="test1,test2,test3"
cat template.txt | for param in ${scriptParams} ; do i=$((++i)) ; sed -e "s/\${$i}/$param/" ; done
RESULT:
param1=test1
param2=${2}
param3=${3}
EXPECTED:
param1=test1
param2=test2
param3=test3
Note: I don't want to save replaced file, want to use its replaced value.
If you intend to use an array, use a real array. sed is not needed either:
$ cat template
param1=${1}
param2=${2}
param3=${3}
$ scriptParams=("test one" "test two" "test three")
$ while read -r l; do for((i=1;i<=${#scriptParams[@]};i++)); do l=${l//\$\{$i\}/${scriptParams[i-1]}}; done; echo "$l"; done < template
param1=test one
param2=test two
param3=test three
Learn to debug:
cat template.txt | for param in ${scriptParams} ; do i=$((++i)) ; echo $i - $param; done
1 - test1,test2,test3
Oops..
scriptParams="test1 test2 test3"
cat template.txt | for param in ${scriptParams} ; do i=$((++i)) ; echo $i - $param; done
1 - test1
2 - test2
3 - test3
Ok, looks better...
cat template.txt | for param in ${scriptParams} ; do i=$((++i)) ; sed -e "s/\${$i}/$param/" ; done
param1=test1
param2=${2}
param3=${3}
Ooops... so what's the problem? Well, the first sed command "eats" all the input. You haven't built a pipeline where one sed command feeds the next; you have three seds trying to read the same input, and obviously the first one processes all of it.
Ok, let's take a different approach: let's build the arguments for a single sed command (note: the "" is there to force echo not to interpret -e as a command-line switch).
sedargs=$(for param in ${scriptParams} ; do i=$((++i)); echo "" -e "s/\${$i}/$param/"; done)
cat template.txt | sed $sedargs
param1=test1
param2=test2
param3=test3
That's it. Note that this isn't perfect: you can run into all sorts of problems if the replacement texts are complex (e.g. contain spaces).
Let me think how to do this in a better way... (well, the obvious solution which comes to mind is not to use a shell script for this task...)
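In the meantime, a more robust sketch, assuming bash: keep the parameters in a real array and collect the sed arguments in an array too, so replacement texts with spaces survive word splitting (replacements containing / or & would still need escaping):
scriptParams=("test one" "test two" "test three")
sedargs=()
for i in "${!scriptParams[@]}"; do
sedargs+=(-e "s/\${$((i + 1))}/${scriptParams[i]}/")  # one -e expression per parameter
done
sed "${sedargs[@]}" template.txt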
Update:
If you want to build a proper pipeline, here are some solutions: How to make a pipe loop in bash
You can do that with just bash alone:
#!/bin/bash
scriptParams=("test1" "test2" "test3") ## Better store it as arrays.
while read -r line; do
for i in "${!scriptParams[@]}"; do ## The indices of scriptParams are assigned to i, starting at 0.
line=${line/"\${$((i + 1))}"/"${scriptParams[i]}"} ## ${var/p/r} replaces the first match of pattern p with r in the contents of var. We add 1 to the index to match the ${1}-style targets.
done
echo "<br>$line</br>"
done < template.txt
Save it in a script and run bash script.sh to get an output like this:
<br>param1=test1</br>
<br>param2=test2</br>
<br>param3=test3</br>
I'm relatively new to bash scripting and I would like someone to explain this properly, thank you. Here is my code:
#! /bin/bash
echo "first arg: $1"
echo "first arg: $2"
var="$( grep -rnw $1 -e $2 | cut -d ":" -f1 )"
var2=$( grep -rnw $1 -e $2 | cut -d ":" -f1 | awk '{print substr($0,length,1)}')
echo "$var"
echo "$var2"
The problem I have is with the output. The script I'm trying to write is a C++ function searcher, so my script takes two arguments: one for the directory and the second for the function name. This is what my output looks like:
first arg: Projekt
first arg: iseven
Projekt/AX/include/ax.h
Projekt/AX/src/ax.cpp
h
p
Now my question is: how can I save the line-by-line output as a variable, so that later on I can use var as a path, or var2 as a character to compare? My plan was to use if statements to determine the type, e.g. if the last character is "p", echo something. What I've tried was the approach from Capturing multiple line output into a Bash variable, treating it as an array, so my code looked like "${var[0]}". Please explain how I can use the line-by-line output later on as variables.
I'd use readarray to populate an array variable, in case there are spaces in your command's output that shouldn't be treated as field separators (they would mess up foo=( ... )). And you can use shell parameter expansion substring syntax to get the last character of a variable, so there's no need for the awk bit in your var2:
#!/usr/bin/env bash
readarray -t lines < <(printf "%s\n" "Projekt/AX/include/ax.h" "Projekt/AX/src/ax.cpp")
for line in "${lines[@]}"; do
printf "%s\n%s\n" "$line" "${line: -1}" # Note the space before the -1
done
will display
Projekt/AX/include/ax.h
h
Projekt/AX/src/ax.cpp
p
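From there, the asker's planned comparison could look like this minimal sketch (the grep pipeline is carried over from the question; the header/source labels are my assumption about the intent):
readarray -t lines < <(grep -rnw "$1" -e "$2" | cut -d ":" -f1)
for line in "${lines[@]}"; do
if [ "${line: -1}" = "p" ]; then  # last character is p: a .cpp source file
echo "source file: $line"
else                              # last character is h: a header
echo "header file: $line"
fi
done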
Why doesn't the following bash script work? I would like it to output
two lines like this:
XXXXXXX
YYYYYYY
It works if I change the sed line to use a filename instead of the variable, but I want to use the variable.
#!/bin/bash
input=$(echo -e '=======\n-------\n')
for sym in = -; do
if [ "$sym" == '-' ]; then
replace=Y
else
replace=X
fi
printf "%s\n" "s/./$replace/g"
done | sed -f- <<<"$input"
The main problem is that you're giving sed two sources to read standard input from: the for loop that is fed through the pipe, and the variable coming through the here-string. As it turns out, the here-string gets precedence and sed complains that there are extra characters after a command (= is a command).
Instead of a here-string, you could use process substitution:
for sym in = -; do
if [ "$sym" == '-' ]; then
replace=Y
else
replace=X
fi
printf "%s\n" "s/./$replace/g"
done | sed -f- <(printf '%s\n' '=======' '-------')
You'll notice that the output isn't what you want, though, namely
YYYYYYY
YYYYYYY
This is because the sed script you end up with looks like this:
s/./X/g
s/./Y/g
No matter what you do first, the last command replaces everything with Y.
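One way to get the expected output, sticking with the structure above, is to make each rule substitute only the symbol it is responsible for, so the commands no longer clobber each other (a sketch of the same loop):
for sym in = -; do
if [ "$sym" == '-' ]; then
replace=Y
else
replace=X
fi
printf "%s\n" "s/$sym/$replace/g"   # generates s/=/X/g and s/-/Y/g
done | sed -f- <(printf '%s\n' '=======' '-------')
This prints XXXXXXX followed by YYYYYYY, as intended.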
I am trying to streamline a README, where I can easily capture commands and their outputs into a document. This step seems harder than I thought it would be.
I am trying to write both the command and its output to a file, but everything I try displays either echo test or just test.
The latest iteration, which is becoming absurd, is:
echo test | xargs echo '#' | cat <(echo) <(cat -) just shows # test
I would like the results to be:
echo test
# test
You can make a bash function to demonstrate a command and its output like this:
democommand() {
printf '#'
printf ' %q' "$#"
printf '\n'
"$#"
}
This prints "#", then each argument the function was passed (i.e. the command and its arguments) with a space before each one (and the %q makes it quote/escape them as needed), then a newline, and then finally it runs all of its arguments as a command. Here's an example:
$ democommand echo test
# echo test
$ democommand ls
# ls
Desktop Downloads Movies Pictures Sites
Documents Library Music Public
Now, as for why your command didn't work... well, I'm not clear on what you thought it was doing, but here's what it actually does:
The first command in the pipeline, echo test, simply prints the string "test" to its standard output, which is piped to the next command in the chain.
xargs echo '#' takes its input ("test") and adds it to the command it's given (echo '#') as additional arguments. Essentially, it executes the command echo '#' test. This outputs "# test" to the next command in the chain.
cat <(echo) <(cat -) is rather complicated, so let's break it down:
echo prints a blank line
cat - simply copies its input (which at this point in the pipeline is still the output of the xargs command, i.e. "# test").
cat <(echo) <(cat -) takes the output of those two <() commands and concatenates them together, resulting in a blank line followed by "# test".
Pass the command as a literal string so that you can both print and evaluate it:
doc() { printf '$ %s\n%s\n' "$1" "$(eval "$1")"; }
Running:
doc 'echo foo | tr f c' > myfile
Will make myfile contain:
$ echo foo | tr f c
coo
I need to substitute a unique string in a JSON file, {FILES}, with the contents of a bash variable ${FILES} that holds thousands of paths:
sed -i "s|{FILES}|$FILES|" ./myFile.json
What would be the most elegant way to achieve that? The content of ${FILES} is the result of an "aws s3" command, and would look like:
FILES="/file1.ipk, /file2.ipk, /subfolder1/file3.ipk, /subfolder2/file4.ipk, ..."
I can't think of a solution where xargs would help me.
The safest way is probably to let Bash itself expand the variable. You can create a Bash script containing a here document with the full contents of myFile.json, with the placeholder {FILES} replaced by a reference to the variable $FILES (not the contents itself). Execution of this script would generate the output you seek.
For example, if myFile.json would contain:
{foo: 1, bar: "{FILES}"}
then the script should be:
#!/bin/bash
cat << EOF
{foo: 1, bar: "$FILES"}
EOF
You can generate the script with a single sed command:
sed -e '1i#!/bin/bash\ncat << EOF' -e 's/\$/\\$/g;s/{FILES}/$FILES/' -e '$aEOF' myFile.json
Notice sed is doing two replacements; the first one (s/\$/\\$/g) to escape any dollar signs that might occur within the JSON data (replace every $ by \$). The second replaces {FILES} by $FILES; the literal text $FILES, not the contents of the variable.
Now we can combine everything into a single Bash one-liner that generates the script and immediately executes it by piping it to Bash:
sed -e '1i#!/bin/bash\ncat << EOF' -e 's/\$/\\$/g;s/{FILES}/$FILES/' -e '$aEOF' myFile.json | /bin/bash
Or even better, execute the script without spawning a subshell (useful if $FILES is set without export):
sed -e '1i#!/bin/bash\ncat << EOF' -e 's/\$/\\$/g;s/{FILES}/$FILES/' -e '$aEOF' myFile.json | source /dev/stdin
Output:
{foo: 1, bar: "/file1.ipk, /file2.ipk, /subfolder1/file3.ipk, /subfolder2/file4.ipk, ..."}
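For comparison, a minimal bash-only sketch using the ${var//pattern/replacement} expansion avoids sed's delimiter and escaping issues altogether, since neither the pattern nor the replacement is reparsed (assumes the file fits comfortably in memory; $(< file) drops trailing newlines and printf adds one back):
template=$(< myFile.json)   # read the whole file with a bash builtin
printf '%s\n' "${template//"{FILES}"/$FILES}" > myFile.json.tmp
mv myFile.json.tmp myFile.json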
Maybe perl would have fewer limitations?
perl -pi -e "s#{FILES}#${FILES}#" ./myFile.json
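One caveat: interpolating ${FILES} into the perl code still breaks if the paths contain # (the s### delimiter) or regex metacharacters. A sketch that avoids shell interpolation entirely (assumes FILES is exported):
export FILES
perl -pi -e 's/\Q{FILES}\E/$ENV{FILES}/' ./myFile.json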
It's a little gross, but you can do it all within the shell...
while IFS= read -r l
do
if ! echo "$l" | grep -q '{FILES}'
then
echo "$l"
else
echo "$l" | sed 's/{FILES}.*$//'
echo "$FILES"
echo "$l" | sed 's/^.*{FILES}//'
fi
done <./myfile.json >newfile.json
#mv newfile.json myfile.json
Obviously I'd leave the final line commented out until you're confident it works...
Maybe just don't do it? Can you just:
echo "var f = " > myFile2.json
echo "$FILES" >> myFile2.json
And reference myFile2.json from within your other json file? (You should put the global f variable into a namespace if this works for you.)
Instead of putting all those paths in an environment variable, put them in a file. Then read that file in perl:
foo.pl:
open X, "$ARGV[0]" or die "couldn't open";
shift;
chomp($foo = <X>);  # drop the trailing newline so it isn't injected into the output
while (<>) {
s/\{FILES\}/$foo/;
print;
}
Command to run:
aws s3 ... >/tmp/myfile.$$
perl foo.pl /tmp/myfile.$$ <myFile.json >newFile.json
Hopefully that will bypass the limitations on environment variable space and argument length by pulling all the processing into perl itself.
I have a variable like this:
words="这是一条狗。"
I want to make a for loop on each of the characters, one at a time, e.g. first character="这", then character="是", character="一", etc.
The only way I know is to output each character to a separate line in a file, then use while read line, but this seems very inefficient.
How can I process each character in a string through a for loop?
You can use a C-style for loop:
foo=string
for (( i=0; i<${#foo}; i++ )); do
echo "${foo:$i:1}"
done
${#foo} expands to the length of foo. ${foo:$i:1} expands to the substring starting at position $i of length 1.
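Applied to the asker's string, and assuming a UTF-8 locale so that ${#words} and ${words:i:1} count characters rather than bytes:
words="这是一条狗。"
for (( i=0; i<${#words}; i++ )); do
echo "${words:i:1}"   # prints 这, 是, 一, 条, 狗, 。 on separate lines
done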
With sed under dash with LANG=en_US.UTF-8, I got the following working correctly:
$ echo "你好嗎 新年好。全型句號" | sed -e 's/\(.\)/\1\n/g'
你
好
嗎
新
年
好
。
全
型
句
號
and
$ echo "Hello world" | sed -e 's/\(.\)/\1\n/g'
H
e
l
l
o
w
o
r
l
d
Thus, output can be looped with while read ... ; do ... ; done
Edit: translating the sample text into English:
"你好嗎 新年好。全型句號" is zh_TW.UTF-8 encoding for:
"你好嗎" = How are you[ doing]
" " = a normal space character
"新年好" = Happy new year
"。全型空格" = a double-byte-sized full-stop followed by text description
${#var} returns the length of var
${var:pos:N} returns N characters from pos onwards
Examples:
$ words="abc"
$ echo ${words:0:1}
a
$ echo ${words:1:1}
b
$ echo ${words:2:1}
c
so it is easy to iterate.
another way:
$ grep -o . <<< "abc"
a
b
c
or
$ grep -o . <<< "abc" | while read letter; do echo "my letter is $letter" ; done
my letter is a
my letter is b
my letter is c
I'm surprised no one has mentioned the obvious bash solution utilizing only while and read.
while read -n1 character; do
echo "$character"
done < <(echo -n "$words")
Note the use of echo -n to avoid the extraneous newline at the end. printf is another good option and may be more suitable for your particular needs. If you want to ignore whitespace then replace "$words" with "${words// /}".
Another option is fold. Please note however that it should never be fed into a for loop. Rather, use a while loop as follows:
while read char; do
echo "$char"
done < <(fold -w1 <<<"$words")
The primary benefit of using the external fold command (from the coreutils package) is brevity. You can feed its output to another command such as xargs (part of the findutils package) as follows:
fold -w1 <<<"$words" | xargs -I% -- echo %
You'll want to replace the echo command used in the example above with the command you'd like to run against each character. Note that xargs will discard whitespace by default. You can use -d '\n' to disable that behavior.
Internationalization
I just tested fold with some of the Asian characters and realized it doesn't have Unicode support. So while it is fine for ASCII needs, it won't work for everyone. In that case there are some alternatives.
I'd probably replace fold -w1 with an awk array:
awk 'BEGIN{FS=""} {for (i=1;i<=NF;i++) print $i}'
Or the grep command mentioned in another answer:
grep -o .
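For example, a sketch that swaps the awk splitter into the earlier while loop (assumes an awk, such as GNU awk, whose empty FS splits multibyte characters in a UTF-8 locale):
while read -r char; do
echo "$char"
done < <(awk 'BEGIN{FS=""} {for (i=1;i<=NF;i++) print $i}' <<<"$words")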
Performance
FYI, I benchmarked the options above. The first two were fast, nearly tying, with the fold loop slightly faster than the while loop. Unsurprisingly, xargs was the slowest: about 75x slower.
Here is the (abbreviated) test code:
words=$(python -c 'from string import ascii_letters as l; print(l * 100)')
testrunner(){
for test in test_while_loop test_fold_loop test_fold_xargs test_awk_loop test_grep_loop; do
echo "$test"
(time for (( i=1; i<$((${1:-100} + 1)); i++ )); do "$test"; done >/dev/null) 2>&1 | sed '/^$/d'
echo
done
}
testrunner 100
Here are the results:
test_while_loop
real 0m5.821s
user 0m5.322s
sys 0m0.526s
test_fold_loop
real 0m6.051s
user 0m5.260s
sys 0m0.822s
test_fold_xargs
real 7m13.444s
user 0m24.531s
sys 6m44.704s
test_awk_loop
real 0m6.507s
user 0m5.858s
sys 0m0.788s
test_grep_loop
real 0m6.179s
user 0m5.409s
sys 0m0.921s
I believe there is still no ideal solution that would correctly preserve all whitespace characters and is fast enough, so I'll post my answer. Using ${foo:$i:1} works, but is very slow, which is especially noticeable with large strings, as I will show below.
My idea is an expansion of a method proposed by Six, which involves read -n1, with some changes to keep all characters and work correctly for any string:
while IFS='' read -r -d '' -n 1 char; do
# do something with $char
done < <(printf %s "$string")
How it works:
IFS='' - Redefining the internal field separator as the empty string prevents stripping of spaces and tabs. Setting it on the same line as read means it will not affect other shell commands.
-r - Means "raw", which prevents read from treating \ at the end of the line as a special line concatenation character.
-d '' - Passing empty string as a delimiter prevents read from stripping newline characters. Actually means that null byte is used as a delimiter. -d '' is equal to -d $'\0'.
-n 1 - Means that one character at a time will be read.
printf %s "$string" - Using printf instead of echo -n is safer, because echo treats -n and -e as options. If you pass "-e" as a string, echo will not print anything.
< <(...) - Passing the string to the loop using process substitution. If you used a here-string instead (done <<< "$string"), an extra newline character would be appended at the end. Also, passing the string through a pipe (printf %s "$string" | while ...) would make the loop run in a subshell, which means all variable operations are local to the loop.
Now, let's test the performance with a huge string.
I used the following file as a source:
https://www.kernel.org/doc/Documentation/kbuild/makefiles.txt
The following script was called through time command:
#!/bin/bash
# Saving contents of the file into a variable named `string'.
# This is for test purposes only. In real code, you should use
# `done < "filename"' construct if you wish to read from a file.
# Using `string="$(cat makefiles.txt)"' would strip trailing newlines.
IFS='' read -r -d '' string < makefiles.txt
while IFS='' read -r -d '' -n 1 char; do
# remake the string by adding one character at a time
new_string+="$char"
done < <(printf %s "$string")
# confirm that new string is identical to the original
diff -u makefiles.txt <(printf %s "$new_string")
And the result is:
$ time ./test.sh
real 0m1.161s
user 0m1.036s
sys 0m0.116s
As we can see, it is quite fast.
Next, I replaced the loop with one that uses parameter expansion:
for (( i=0 ; i<${#string}; i++ )); do
new_string+="${string:$i:1}"
done
The output shows exactly how bad the performance loss is:
$ time ./test.sh
real 2m38.540s
user 2m34.916s
sys 0m3.576s
The exact numbers may vary on different systems, but the overall picture should be similar.
I've only tested this with ascii strings, but you could do something like:
while test -n "$words"; do
c=${words:0:1} # Get the first character
echo character is "'$c'"
words=${words:1} # trim the first character
done
It is also possible to split the string into a character array using fold and then iterate over it. Note, though, that fold is byte-oriented and has the Unicode caveat mentioned above, so with multibyte input like this it only works if your fold splits characters rather than bytes:
for char in $(echo "这是一条狗。" | fold -w1); do
echo "$char"
done
The C style loop in @chepner's answer is in the shell function update_terminal_cwd, and the grep -o . solution is clever, but I was surprised not to see a solution using seq. Here's mine:
read word
for i in $(seq 1 ${#word}); do
echo "${word:i-1:1}"
done
#!/bin/bash
word=$(echo 'Your Message' | fold -w 1)
for letter in ${word} ; do echo "${letter} is a letter"; done
Here is the output:
Y is a letter
o is a letter
u is a letter
r is a letter
M is a letter
e is a letter
s is a letter
s is a letter
a is a letter
g is a letter
e is a letter
To iterate over the characters of an ASCII string in a POSIX-compliant shell, you can avoid external tools by using parameter expansions:
#!/bin/sh
str="Hello World!"
while [ ${#str} -gt 0 ]; do
next=${str#?}
echo "${str%$next}"
str=$next
done
or
str="Hello World!"
while [ -n "$str" ]; do
next=${str#?}
echo "${str%$next}"
str=$next
done
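To see how the expansions peel off one character per iteration, trace str="Hi!":
next=${str#?} strips the first character, leaving "i!"
${str%"$next"} removes that suffix from str, printing just "H"
str=$next then continues the loop with "i!", then "!", then the empty string, which ends the loop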
sed works with Unicode:
IFS=$'\n'
for z in $(sed 's/./&\n/g' <(printf '你好嗎')); do
echo hello: "$z"
done
outputs
hello: 你
hello: 好
hello: 嗎
Another approach, if you don't care about whitespace being ignored:
for char in $(sed -E s/'(.)'/'\1 '/g <<<"$your_string"); do
# Handle $char here
done
Another way is:
Characters="TESTING"
index=1
while [ $index -le ${#Characters} ]
do
echo ${Characters} | cut -c${index}-${index}
index=$((index + 1))
done
fold and while read are great for the job, as shown in some answers here. Contrary to those answers, I think it's much more intuitive to pipe in the order of execution (just bear in mind that the loop then runs in a subshell, as noted earlier):
echo "asdfg" | fold -w 1 | while read c; do
echo -n "$c "
done
Outputs: a s d f g
I share my solution:
read word
for char in $(grep -o . <<<"$word") ; do
echo $char
done
TEXT="hello world"
for i in {1..${#TEXT}}; do
echo ${TEXT[i]}
done
where {1..N} is an inclusive range
${#TEXT} is a number of letters in a string
${TEXT[i]} - you can get char from string like an item from an array