Concatenate strings in bash

I have in a bash script:
for i in `seq 1 10`
do
read AA BB CC <<< $(cat file1 | grep DATA)
echo ${i}
echo ${CC}
SORT=${CC}${i}
echo ${SORT}
done
so "i" is a integer, and CC is a string like "TODAY"
I would like to get then in SORT, "TODAY1", etc
But I get "1ODAY", "2ODAY" and so
Where is the error?
Thanks

You should try
SORT="${CC}${i}"
Make sure your file does not contain a "\r" (carriage return) that ends up at the end of $CC.
That would explain why you get "1ODAY": the carriage return moves the cursor back to the start of the line, so the digit overwrites the "T".
Try including
| tr -d '\r'
after the cat command
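A minimal sketch of the corrected read, assuming the same file1 and DATA marker as in the question:
read AA BB CC <<< "$(grep DATA file1 | tr -d '\r')"
SORT="${CC}${i}"
Here tr -d '\r' deletes any carriage returns before they can end up in $CC.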

try
for i in {1..10}
do
while read -r line
do
case "$line" in
*DATA* )
set -- $line
CC=$3
SORT=${CC}${i}
echo ${SORT}
esac
done <"file1"
done
Otherwise, show an example of file1 and your desired output

ghostdog is right: the -r option keeps read from succumbing to potential horrors like backslash mangling. Using arrays makes the -r option more pleasant:
for i in `seq 1 10`
do
read -ra line <<< $(cat file1 | grep DATA)
CC="${line[3]}"
echo ${i}
echo ${CC}
SORT=${CC}${i}
echo ${SORT}
done

Related

Shell - Execute commands in external file between two patterns

I have a question: how should I proceed to make this code print out and execute the curl examples that I have in my external file?
I want it to match the pattern, get the text between the patterns (without the patterns themselves), and then execute it.
Is there a way to do this?
Thanks for the help.
read -p "Enter a word: " instance
testfile=test.txt
case $instance in
loresipsum)
sed -n '/^loremipsum1/,${p;/^loremipsum2/q}' $testfile \
| while read -r line; do
makingcurlCall=$(eval "$line")
echo "makingcurlCall"
done < $testfile ;;
foobar)
sed -n '/^foobar1/,${p;/^foobar2/q}' $testfile \
| while read -r line; do
makingcurlCall=$(eval "$line")
echo "makingcurlCall"
done < $testfile ;;
*)
printf 'No match for "%s"\n' "$instance"
esac
The text file looks like this:
loremipsum1
curl example1
curl example2
curl example3
loremipsum2
foobar1
curl foo
curl bar
curl foo
foobar2
You cannot have the while loop read from both the output of sed and directly from the file. Your current code is ignoring the output from sed and reading directly from the file. Perhaps refactor it like:
#!/bin/sh
instance=${1-loresipsum}
testfile=test.txt
case $instance in
loresipsum) sed -n '/^loremipsum1/,/^loremipsum2/p' "$testfile";;
foobar) sed -n '/^foobar1/,/^foobar2/p' "$testfile";;
*) echo "Error: no match" >&2;;
esac \
| sed -e 1d -e '$d' -e '/^\s*$/d' | while read -r line; do
# makingcurlCall=$(eval "$line")
echo "makingcurlCall: $line"
done
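For instance, if this is saved as extract.sh (the name is just for illustration) next to the test.txt shown above, running it could look like:
$ sh extract.sh foobar
makingcurlCall: curl foo
makingcurlCall: curl bar
makingcurlCall: curl foo
Uncommenting the eval line would then actually execute each curl command instead of just printing it.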

sed DON'T remove extra whitespace

It seems everybody else wants to remove additional whitespace; however, I have the opposite problem.
I have a file, call it some_file.txt that looks like
a b c d
and some more
and I'm reading it line-by-line with sed,
num_lines=$(cat some_file.txt | wc -l)
for i in $(seq 1 $num_lines); do
echo $(sed "${i}q;d" $file)
string=$(sed "${i}q;d" $file)
echo $string
done
I would expect the number of whitespace characters to stay the same; however, the output I get is
a b c d
a b c d
and some more
and some more
So it seems that the problem is sed removing the extra whitespace between characters. Is there any way to fix this?
Have a look at this example:
$ echo Hello World
Hello World
$ echo "Hello World"
Hello World
sed is not your problem; the problem is that bash word-splits the unquoted output of sed before handing it to echo, which collapses the whitespace.
You just need to surround whatever echo is supposed to print with double quotation marks. So instead of
echo $(sed "${i}q;d" $file)
echo $string
You write
echo "$(sed "${i}q;d" $file)"
echo "$string"
The new script should look like this:
#!/usr/bin/env bash
file=some_file.txt
num_lines=$(cat some_file.txt | wc -l)
for i in $(seq 1 $num_lines); do
echo "$(sed "${i}q;d" $file)"
string=$(sed "${i}q;d" $file)
echo "$string"
done
prints the correct output:
a b c d
a b c d
and some more
and some more
However, if you just want to go through your file line by line, I strongly recommend something like this:
while IFS= read -r line; do
echo "$line"
done < some_file.txt
Question from the comments: what if you only want 33 lines, starting from line x? One possible solution is this:
#!/usr/bin/env bash
declare -i s=$1
declare -i e=${s}+32
sed -n "${s},${e}p" $file | while IFS= read -r line; do
echo "$line"
done
(Note that I would probably include some validation of $1 in there as well.)
I declare s and e as integer variables; that way bash can do some simple arithmetic on them and calculate the actual last line to print.
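A minimal illustration of that declare -i behavior (the numbers are just an example):
declare -i s=5
declare -i e=${s}+32   # with -i, the right-hand side is evaluated arithmetically
echo "$s $e"           # prints: 5 37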

Sorting on Same Line Bash

Hello, I am trying to sort a set of numeric command line arguments and then echo them back out in reverse numeric order on the same line, with a space between each. I have this loop:
for var in "$#"
do
echo -n "$var "
done | sort -rn
However, when I add the -n to the echo, the sort command stops working. I am trying to do this without using printf. With echo -n the numbers do not get sorted and simply print in the order they were entered.
You can do it like this:
a=( $@ )
b=( $(printf "%s\n" ${a[@]} | sort -rn) )
printf "%s\n" ${b[@]}
# b is reverse sorted numerically now
man sort would tell you:
sort - sort lines of text files
So you can transform the result into the desired format after sorting.
In order to achieve the desired result, you can say:
for var in "$@"
do
echo "$var"
done | sort -rn | paste -sd' '
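For example, if that loop is saved in a script called sortargs.sh (the name is just for illustration):
$ ./sortargs.sh 3 1 10 2
10 3 2 1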
Maybe that's because sort is "line-oriented", so you need every number on a separate line, which is not the case using -n with echo.
You could simply put the sorted numbers back in one line using sed, like that:
for var in "$@";
do
echo "$var ";
done | sort -rn | sed -e ':a;N;s/\n/ /;ba'
sort is used to sort multiple lines of text. Using echo's -n option, you are printing everything on one line.
If you want the output to be sorted, you have to print it on multiple lines:
for var in "$@"
do
echo $var
done | sort -rn
If you want the result on only one line, you could do:
echo $(for var in "$@"; do echo $var; done | sort -rn)
One trick is to play with the IFS:
IFS=$'\n'
set "$*"
IFS=$' \n'
set $(sort -rn <<< "$*")
echo $*
This is the same idea but easier to read with the join() function:
join() {
IFS=$1
shift
echo "$*"
}
join ' ' $(join $'\n' $* | sort -nr)
No loops required:
#!/bin/bash
sorted=( $(sort -rn < <(printf '%s\n' $@)) )
echo ${sorted[@]}
To sort numbers on a single line, either comma- or space-separated, use the below:
echo "12,12,3,55,567,23,6,9,35,423"|sed -e 's;[ |,];\n;g'|sort -n|xargs|sed -e 's; ;,;g'
If your output does not need commas, skip the sed after xargs.

Transpose one line/lines from column to row using shell

I want to convert a column of data in a txt file to a row of a csv file using unix commands.
example:
ApplChk1,
ApplChk2,
v_baseLoanAmountTI,
v_plannedClosingDateField,
downPaymentTI,
This is a column present in a txt file.
I want the output as follows in a csv file:
ApplChk1,ApplChk2,v_baseLoanAmountTI,v_plannedClosingDateField,downPaymentTI,
Please let me know how to do it.
Thanks in advance
If that's a single column which you want to convert to a row, then there are many possibilities:
tr -d '\n' < filename ; echo # option 1 OR
xargs echo -n < filename ; echo # option 2 (This option however, will shrink spaces & eat quotes) OR
while read x; do echo -n "$x" ; done < filename; echo # option 3
Please let us know what the input would look like for the multi-line case.
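For the sample column above, the first option would print something like this (the trailing echo just adds a final newline):
$ tr -d '\n' < filename ; echo
ApplChk1,ApplChk2,v_baseLoanAmountTI,v_plannedClosingDateField,downPaymentTI,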
A funny pure bash solution (bash ≥ 4.1):
mapfile -t < file.txt; printf '%s' "${MAPFILE[@]}" $'\n'
Done!
for i in `< file.txt` ; do echo -n $i; done; echo ""
gives the output
ApplChk1,ApplChk2,v_baseLoanAmountTI,v_plannedClosingDateField,downPaymentTI,
To send output to a file:
{ for i in `< file.txt` ; do echo -n $i ; done; echo; } > out.csv
When I run it, this is what happens:
[jenny@jennys:tmp]$ more file.txt
ApplChk1,
ApplChk2,
v_baseLoanAmountTI,
v_plannedClosingDateField,
downPaymentTI,
[jenny@jenny:tmp]$ { for i in `< file.txt` ; do echo -n $i ; done; echo; } > out.csv
[jenny@jenny:tmp]$ more out.csv
ApplChk1,ApplChk2,v_baseLoanAmountTI,v_plannedClosingDateField,downPaymentTI,
perl -pe 's/\n//g' your_file
The above will output to stdout.
If you want to do it in place:
perl -pi -e 's/\n//g' your_file
You could use sed to replace the line breaks (\n) with commas or spaces:
sed -z 's/\n/,/g' test.txt > test.csv
You could also add the -i option if you want to change the file in place:
sed -i -z 's/\n/,/g' test.txt

Bash - How to count C source file functions calls

I want to find, for each function defined in a C source file, how many times it's called and on which lines.
Should I search for patterns which look like function definitions in C and then count how many times that function name occurs? If so, how can I do it? Regular expressions?
Any help will be highly appreciated!
#!/bin/bash
if [ -r $1 ]; then
#??????
else
echo The file \"$1\" does NOT exist
fi
The final result is: (please report any bugs)
if [ -r $1 ]; then
functs=`grep -n -e "\(void\|double\|char\|int\) \w*(.*)" $1 | sed 's/^.*\(void\|double\|int\) \(\w*\)(.*$/\2/g'`
for f in $functs;do
echo -n $f\(\) is called:
grep -n $f $1 > temp.txt
echo -n `grep -c -v -e "\(void\|double\|int\) $f(.*)" -e"//" temp.txt`
echo " times"
echo -n on lines:
echo -n `grep -v -e "\(void\|double\|int\) $f(.*)" -e"//" temp.txt | sed -n 's/^\([0-9]*\)[:].*/\1/p'`
echo
echo
done
else
echo The file \"$1\" does not exist
fi
This might sort of work. The first bit finds function definitions like
<datatype> <name>(<stuff>)
and pulls out the <name>. Then grep for that string. There are loads of situations where this won't work, but it might be a good place to start if you're trying to make a simple shell script that works on some programs.
functions=`grep -e "\(void\|double\|int\) \w*(.*)$" input.c | sed 's/^.*\(void\|double\|int\) \(\w*\)(.*$/\2/g'`
for func in $functions
do
echo "Counting references for $func:"
grep "$func" -f input.c | wc -l
done
You can try with this regex
(^|[^\w\d])?(functionName(\s)*\()
For example, to search for all printf occurrences:
(^|[^\w\d])?(printf(\s)*\()
To use this expression with grep you have to use the -E option, like this:
grep -E "(^|[^\w\d])?(printf(\s)*\()" the_file.txt
A final note: what this solution misses is skipping the occurrences inside comment blocks.
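Putting the pieces together, a rough sketch that counts and locates calls to one function; the function name printf and the file name the_file.c are placeholders, and occurrences inside comment blocks are still not skipped:
grep -n -E "(^|[^[:alnum:]_])printf[[:space:]]*[(]" the_file.c \
  | grep -v -E "(void|int|double|char)[[:space:]]+printf[[:space:]]*[(]" \
  | awk -F: '{lines = lines $1 " "} END {print NR " call(s) on line(s): " lines}'
The first grep numbers every occurrence of the name followed by an opening parenthesis, the second filters out lines that look like the definition, and awk collects the line numbers and the count.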
