If I have this data in my shell script:
DIR=/opt/app/classes
JARS=a.jar:b.jar:c.jar
How can I combine this to the string
/opt/app/classes/a.jar:/opt/app/classes/b.jar:/opt/app/classes/c.jar
in Shell/Bash scripting?
Here's a very short one:
$ echo "$DIR/${JARS//:/:$DIR/}"
/opt/app/classes/a.jar:/opt/app/classes/b.jar:/opt/app/classes/c.jar
If you don't mind an extra colon at the end:
[~]> for a in `echo $JARS | tr ":" "\n"`;do echo -n $DIR/$a:;done&&echo
/opt/app/classes/a.jar:/opt/app/classes/b.jar:/opt/app/classes/c.jar:
Use tr to translate the colons, iterate through the results, then trim the leading ':' from the result string.
#! /bin/bash
DIR=/opt/app/classes
JARS=a.jar:b.jar:c.jar
for i in $(echo $JARS | tr ":" "\n")
do
result=$result:$DIR/$i
done
echo ${result#:}   # Remove the leading :
Result:
/opt/app/classes/a.jar:/opt/app/classes/b.jar:/opt/app/classes/c.jar
Pure Optimized Bash 1-liner
IFS=:; set -- $JARS; for jar; do path+=$DIR/${jar}:; done; echo "$path"
Output
/opt/app/classes/a.jar:/opt/app/classes/b.jar:/opt/app/classes/c.jar:
Pure Bash, no external utilities:
saveIFS=$IFS
IFS=:
jararr=($JARS)
echo "${jararr[*]/#/$DIR/}"
IFS=$saveIFS
Original Answer (before question was revised):
IFS=: read -ra jararr <<<"$JARS"
newarr=(${jararr[@]/#/$DIR/})
echo "${newarr[0]}:${newarr[1]}"
Related
I have been working in bash and need to create a string argument. Bash is newish for me, to the point that I don't know how to build a string in bash from a list.
# foo.txt is a list of absolute file names.
/foo/bar/a.txt
/foo/bar/b.txt
/delta/test/b.txt
should turn into: a.txt,b.txt,b.txt
OR: /foo/bar/a.txt,/foo/bar/b.txt,/delta/test/b.txt
code
s = ""
for file in $(cat foo.txt);
do
#what goes here? s += $file ?
done
myShellScript --script $s
I figure there is an easy way to do this.
with for loop:
for file in $(cat foo.txt);do echo -n "$file",;done|sed 's/,$/\n/g'
with tr:
cat foo.txt|tr '\n' ','|sed 's/,$/\n/g'
only sed:
sed ':a;N;$!ba;s/\n/,/g' foo.txt
This seems to work:
#!/bin/bash
input="foo.txt"
while IFS= read -r var
do
basename $var >> tmp
done < "$input"
paste -d, -s tmp > result.txt
output: a.txt,b.txt,b.txt
basename gets you the file names you need and paste will put them in the order you seem to need.
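If your basename supports the -a option (GNU coreutils does), the temporary file can be avoided; a rough sketch, assuming the paths contain no spaces:
# assumes GNU basename -a; joins the base names with commas
basename -a $(cat foo.txt) | paste -sd, -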
The input field separator can be used with set to create split/join functionality:
# split the lines of foo.txt into positional parameters
IFS=$'\n'
set $(< foo.txt)
# join with commas
IFS=,
echo "$*"
For just the file names, add some sed:
IFS=$'\n'; set $(sed 's|.*/||' foo.txt); IFS=,; echo "$*"
Suppose I have a Unix shell variable as below
variable=abc,def,ghij
I want to extract all the values (abc, def and ghij) using a for loop and pass each value into a procedure.
The script should allow extracting an arbitrary number of comma-separated values from $variable.
Not messing with IFS
Not calling external command
variable=abc,def,ghij
for i in ${variable//,/ }
do
# call your procedure/other scripts here below
echo "$i"
done
Using bash string manipulation http://www.tldp.org/LDP/abs/html/string-manipulation.html
You can use the following script to dynamically traverse through your variable, no matter how many fields it has as long as it is only comma separated.
variable=abc,def,ghij
for i in $(echo $variable | sed "s/,/ /g")
do
# call your procedure/other scripts here below
echo "$i"
done
Instead of the echo "$i" call above, between the do and done inside the for loop, you can invoke your procedure proc "$i".
Update: The above snippet works if the value of variable does not contain spaces. If you have such a requirement, please use one of the solutions that can change IFS and then parse your variable.
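For example, a minimal sketch with a hypothetical placeholder procedure named proc (again assuming the values contain no spaces):
proc() {   # hypothetical procedure; replace with your own
    echo "processing $1"
}
variable=abc,def,ghij
for i in $(echo $variable | sed "s/,/ /g")
do
    proc "$i"
done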
If you set a different field separator, you can directly use a for loop:
IFS=","
for v in $variable
do
# things with "$v" ...
done
You can also store the values in an array and then loop through it as indicated in How do I split a string on a delimiter in Bash?:
IFS=, read -ra values <<< "$variable"
for v in "${values[#]}"
do
# things with "$v"
done
Test
$ variable="abc,def,ghij"
$ IFS=","
$ for v in $variable
> do
> echo "var is $v"
> done
var is abc
var is def
var is ghij
You can find a broader approach in this solution to How to iterate through a comma-separated list and execute a command for each entry.
Examples of the second approach:
$ IFS=, read -ra vals <<< "abc,def,ghij"
$ printf "%s\n" "${vals[@]}"
abc
def
ghij
$ for v in "${vals[@]}"; do echo "$v --"; done
abc --
def --
ghij --
I think syntactically this is cleaner, and it also passes ShellCheck linting:
variable=abc,def,ghij
for i in ${variable//,/ }
do
# call your procedure/other scripts here below
echo "$i"
done
#!/bin/bash
TESTSTR="abc,def,ghij"
for i in $(echo $TESTSTR | tr ',' '\n')
do
echo $i
done
I prefer to use tr instead of sed, because sed has problems with special characters like \r and \n in some cases.
Another option is to set IFS to the desired separator; a small sketch of that variant follows.
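For example (just a sketch, saving and restoring IFS so the rest of the script is unaffected):
#!/bin/bash
TESTSTR="abc,def,ghij"
oldIFS=$IFS
IFS=,
for i in $TESTSTR   # unquoted on purpose, so it splits on the commas in IFS
do
  echo "$i"
done
IFS=$oldIFS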
Another solution not using IFS and still preserving the spaces:
$ var="a bc,def,ghij"
$ while read line; do echo line="$line"; done < <(echo "$var" | tr ',' '\n')
line=a bc
line=def
line=ghij
Here is an alternative tr based solution that doesn't use echo, expressed as a one-liner.
for v in $(tr ',' '\n' <<< "$var") ; do something_with "$v" ; done
It feels tidier without echo but that is just my personal preference.
The following solution:
doesn't need to mess with IFS
doesn't need helper variables (like i in a for-loop)
should be easily extensible to work for multiple separators (with a bracket expression like [:,] in the patterns)
really splits only on the specified separator(s) and not, like some other solutions presented here, on e.g. spaces too
is POSIX compatible
doesn't suffer from any subtle issues that might arise when bash’s nocasematch is on and a separator that has lower/upper case versions is used in a match like with ${parameter/pattern/string} or case
beware that:
it does, however, work on the variable itself and pops each element from it; if that is not desired, a helper variable is needed (a sketch of that variant follows after the code below)
it assumes var to be set and would fail if it's not and set -u is in effect
while true; do
x="${var%%,*}"
echo $x
#x is not really needed here, one can of course directly use "${var%%,*}"
if [ -z "${var##*,*}" ] && [ -n "${var}" ]; then
var="${var#*,}"
else
break
fi
done
Beware that separators that would be special characters in patterns (e.g. a literal *) would need to be quoted accordingly.
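If popping elements off $var itself is not acceptable, the same loop can run on a helper variable; a minimal sketch (only the copy is consumed):
rest=$var   # work on a copy so $var stays intact
while true; do
    x="${rest%%,*}"
    echo "$x"
    if [ -z "${rest##*,*}" ] && [ -n "${rest}" ]; then
        rest="${rest#*,}"
    else
        break
    fi
done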
Here's my pure bash solution that doesn't change IFS, and can take in a custom regex delimiter.
loop_custom_delimited() {
local list=$1
local delimiter=$2
local item
if [[ $delimiter != ' ' ]]; then
list=$(echo $list | sed 's/ /'`echo -e "\010"`'/g' | sed -E "s/$delimiter/ /g")
fi
for item in $list; do
item=$(echo $item | sed 's/'`echo -e "\010"`'/ /g')
echo "$item"
done
}
Try this one.
#!/bin/bash
testpid="abc,def,ghij"
count=`echo $testpid | grep -o ',' | wc -l` # this is not a good way
count=`expr $count + 1`
while [ $count -gt 0 ] ; do
echo $testpid | cut -d ',' -f $count
count=`expr $count - 1 `
done
I want to count number of words from a String using Shell.
Suppose the String is:
input="Count from this String"
Here the delimiter is space ' ' and expected output is 4.
There can also be trailing space characters in the input string like "Count from this String ".
If there are trailing space in the String, it should produce the same output, that is 4. How can I do this?
echo "$input" | wc -w
Use wc -w to count the number of words.
Or, as per dogbane's suggestion, the echo can be dropped as well:
wc -w <<< "$input"
If <<< is not supported by your shell you can try this variant:
wc -w << END_OF_INPUT
$input
END_OF_INPUT
You don't need an external command like wc because you can do it in pure bash which is more efficient.
Convert the string into an array and then count the elements in the array:
$ input="Count from this String "
$ words=( $input )
$ echo ${#words[@]}
4
Alternatively, use set to set positional parameters and then count them:
$ input="Count from this String "
$ set -- $input
$ echo $#
4
Try the following one-liner:
echo $(c() { echo $#; }; c $input)
It basically defines a c() function and passes $input as its arguments; $# then returns the number of arguments, which are separated by whitespace. To change the delimiter, you may change IFS (a special variable).
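For instance, a small sketch of the same trick with a comma delimiter (IFS is changed inside the command substitution, so the change does not leak into the rest of the script):
input="abc,def,ghij"
echo $(c() { echo $#; }; IFS=,; c $input)   # prints 3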
To do it in pure bash avoiding side-effects, do it in a sub-shell:
$ input="Count from this string "
$ echo $(IFS=' '; set -f -- $input; echo $#)
4
It works with other separators as well:
$ input="dog,cat,snake,billy goat,horse"
$ echo $(IFS=,; set -f -- $input; echo $#)
5
$ echo $(IFS=' '; set -f -- $input; echo $#)
2
Note the use of "set -f" which disables bash filename expansion in the subshell, so if the caller wants expansion it should be done beforehand (Hat Tip @mkelement0).
echo "$input" | awk '{print NF}'
function count_item() {
return $#
}
input="one two three"
count_item $input
n=$?
echo $n
NOTE: function parameter passing treats each whitespace-separated word as a separate
argument, which is why $# works. $? is the return value
of the most recently called function; since return values are limited to 0-255, this only works for up to 255 words.
I'll just chime in with a perl one-liner (avoiding 'useless use of echo'):
perl -lane 'print scalar(@F)' <<< $input
It is an efficient, external-command-free way, like @dogbane's, but it also works correctly with stars (glob characters).
$ input="Count from *"
$ IFS=" " read -r -a words <<< "${input}"
$ echo ${#words[@]}
3
If input="Count from *" then words=( $input ) will invoke glob expansion. So size of words array will vary depending on count of files in current directory. So we use IFS=" " read -r -a words <<< "${input}" instead it.
see https://github.com/koalaman/shellcheck/wiki/SC2206
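Alternatively (just a sketch, not from the answers above), filename expansion can be switched off around a plain array assignment with set -f:
input="Count from *"
set -f               # disable globbing so the literal * is kept
words=( $input )
set +f
echo ${#words[@]}    # prints 3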
I have a CSV file:
data1,data2,data2
data3,data4,data5
data6,data7,data8
I want to convert it to (Contained in a variable):
variable=data1,data2,data2%0D%0Adata3,data4,data5%0D%0Adata6,data7,data8
My attempt :
data=''
cat csv | while read line
do
data="${data}%0D%0A${line}"
done
echo $data # Fails, since data remains empty (the pipe runs the loop in a sub-shell, so the changes to data are lost)
Please help..
Simpler to just strip newlines from the file:
tr -d '\n' < yourfile.txt > concatfile.txt
In bash,
data=$(
while read line
do
echo -n "%0D%0A${line}"
done < csv)
In non-bash shells, you can use `...` instead of $(...). Also, echo -n, which suppresses the newline, is unfortunately not completely portable, but again this will work in bash.
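If portability of echo -n is a concern, printf '%s' is a safer replacement; a sketch of the same loop:
data=$(
while read line
do
    printf '%s' "%0D%0A${line}"   # printf '%s' never appends a newline
done < csv)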
Some of these answers are incredibly complicated. How about this.
data="$(xargs printf ',%s' < csv | cut -b 2-)"
or
data="$(tr '\n' ',' < csv | cut -b 2-)"
Too "external utility" for you?
IFS=$'\n', read -d'\0' -a data < csv
Now you have an array! Output it however you like, perhaps with
data="$(tr ' ' , <<<"${data[#]}")"
Still too "external utility?" Well fine,
data="$(printf "${data[0]}" ; printf ',%s' "${data[#]:1:${#data}}")"
Yes, printf can be a builtin. If it isn't but your echo is and it supports -n, use echo -n instead:
data="$(echo -n "${data[0]}" ; for d in "${data[#]:1:${#data[#]}}" ; do echo -n ,"$d" ; done)"
Okay, now I admit that I am getting a bit silly. Andrew's answer is perfectly correct.
I would much prefer a loop:
for line in $(cat file.txt); do echo -n $line; done
Note: This solution requires the input file to have a new line at the end of the file or it will drop the last line.
Another short bash solution
variable=$(
RS=""
while read line; do
printf "%s%s" "$RS" "$line"
RS='%0D%0A'
done < filename
)
awk 'END { print r }
{ r = r ? r OFS $0 : $0 }
' OFS='%0D%0A' infile
With shell:
data=
while IFS= read -r; do
[ -n "$data" ] &&
data=$data%0D%0A$REPLY ||
data=$REPLY
done < infile
printf '%s\n' "$data"
Recent bash versions:
data=
while IFS= read -r; do
[[ -n $data ]] &&
data+=%0D%0A$REPLY ||
data=$REPLY
done < infile
printf '%s\n' "$data"
A very simple single-line solution which requires no extra files and is quite easy to understand (just cat the file together and perform a sed replace):
output=$(echo $(cat ./myFile.txt) | sed 's/ /%0D%0A/g')
Useless use of cat, punished! You want to feed the CSV into the loop
while read line; do
# ...
done < csv
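Putting it together, a minimal sketch of the corrected attempt (redirection instead of the pipe, so data survives the loop, and no leading separator):
data=''
while read line
do
    if [ -z "$data" ]; then
        data="$line"
    else
        data="${data}%0D%0A${line}"
    fi
done < csv
echo "$data"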
My question seems to be general, but I can't find any answers.
In a sed command, how can you replace the substitution pattern with a value returned by a simple bash function?
For instance, I created the following function :
function parseDates(){
#Some process here with $1 (the pattern found)
return "dateParsed;
}
and the folowing sed command :
myCatFile=`sed -e "s/[0-3][0-9]\/[0-1][0-9]\/[0-9][0-9]/& parseDates &\}/p" myfile`
I found that the character '&' represents the current pattern found; I'd like it to be passed to my bash function and the whole pattern to be substituted with the pattern found + dateParsed.
Does anybody have an idea ?
Thanks
You can use the "e" option of GNU sed like this:
cat t.sh
myecho() {
echo ">>hello,$1<<"
}
export -f myecho
sed -e "s/.*/myecho &/e" <<END
ni
END
For comparison, you can see the result without "e":
cat t.sh
myecho() {
echo ">>hello,$1<<"
}
export -f myecho
sed -e "s/.*/myecho &/" <<END
ni
END
Agree with Glenn Jackman.
If you want to use bash function in sed, something like this :
sed -rn 's/^([[:digit:].]+)/`date -d #&`/p' file |
while read -r line; do
eval echo "$line"
done
My file here begins with a unix timestamp (e.g. 1362407133.936).
Bash function inside sed (maybe for other purposes):
multi_stdin(){ #Makes function accepet variable or stdin (via pipe)
[[ -n "$1" ]] && echo "$*" || cat -
}
sans_accent(){
multi_stdin "$#" | sed '
y/àáâãäåèéêëìíîïòóôõöùúûü/aaaaaaeeeeiiiiooooouuuu/
y/ÀÁÂÃÄÅÈÉÊËÌÍÎÏÒÓÔÕÖÙÚÛÜ/AAAAAAEEEEIIIIOOOOOUUUU/
y/çÇñÑߢÐð£Øø§µÝý¥¹²³ªº/cCnNBcDdLOoSuYyY123ao/
'
}
eval $(echo "Rogério Madureira" | sed -n 's#.*#echo & | sans_accent#p')
or
eval $(echo "Rogério Madureira" | sed -n 's#.*#sans_accent &#p')
Rogerio
And if you need to keep the output into a variable:
VAR=$( eval $(echo "Rogério Madureira" | sed -n 's#.*#echo & | sans_accent#p') )
echo "$VAR"
Do it step by step. (You could also use an alternate delimiter, such as "|" instead of "/".)
function parseDates(){
#Some process here with $1 (the pattern found)
return "dateParsed;
}
value=$(parseDates)
sed -n "s|[0-3][0-9]/[0-1][0-9]/[0-9][0-9]|& $value &|p" myfile
Note the use of double quotes instead of single quotes, so that $value can be interpolated
I'd like to know if there's a way to do this too. However, for this particular problem you don't need it. If you surround the different components of the date with ()s, you can back reference them with \1 \2 etc and reformat however you want.
For instance, let's reverse 03/04/1973:
echo 03/04/1973 | sed -e 's/\([0-9][0-9]\)\/\([0-9][0-9]\)\/\([0-9][0-9][0-9][0-9]\)/\3\/\2\/\1/g'
sed -e 's#[0-3][0-9]/[0-1][0-9]/[0-9][0-9]#& $(parseDates &)#' myfile |
while read -r line; do
eval echo "$line"
done
You can glue together a sed-command by ending a single-quoted section, and reopening it again.
sed -n 's|[0-3][0-9]/[0-1][0-9]/[0-9][0-9]|& '$(parseDates)' &|p' datefile
However, in contrast to other examples, a function in bash can't return strings, it can only print them:
function parseDates(){
# Some process here with $1 (the pattern found)
echo dateParsed
}
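The caller then captures that output with command substitution, for example (a small usage sketch):
parsed=$(parseDates "03/04/1973")   # captures whatever the function prints
echo "$parsed"                      # -> dateParsed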