How can we reserve a fixed amount of space for a string in shell using the printf command?
For example, the current result is:
tom@x.com 10
john@x.com 11
andrew@x.com 12
thomas_sean@x.com 15
How can we align this result properly? The command used in my code is:
printf $user$i $time
The desired result is:
tom@x.com           10
john@x.com          11
andrew@x.com        12
thomas_sean@x.com   15
My code is as below:
echo $h | cut -f$a -d" "
`printf "\t${t[$a]}\t\t $hour:$min:$sec\n"`
Possible script
while read -r user time
do
printf "%-20s %s\n" "$user" "$time"
done <<'EOF'
tom@x.com 10
john@x.com 11
andrew@x.com 12
thomas_sean@x.com 15
EOF
Sample output:
tom@x.com            10
john@x.com           11
andrew@x.com         12
thomas_sean@x.com    15
The %-20s can be adjusted a little (%-17s or %-18s) if desired, but the basic idea is to reserve an appropriate number of spaces and left justify the string, followed by a blank and then the 'time'. The \n for the newline is necessary; printf does not add a newline unless you request it to do so.
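If the longest name isn't known ahead of time, the field width can also be computed in a first pass and supplied through %-*s, which reads the width from an argument. A minimal two-pass sketch, assuming the user/time pairs live in a file named data.txt (a hypothetical name):
width=0
while read -r user _; do                  # first pass: find the longest user field
    (( ${#user} > width )) && width=${#user}
done < data.txt
while read -r user time; do               # second pass: print using the computed width
    printf '%-*s %s\n' "$width" "$user" "$time"
done < data.txt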
Pipe your output to column -t:
$ column -t << END
tom@x.com 10
john@x.com 11
andrew@x.com 12
thomas_sean@x.com 15
END
tom@x.com          10
john@x.com         11
andrew@x.com       12
thomas_sean@x.com  15
You can try something like this (note that this is awk's printf, which takes a parenthesized format string and arguments, and needs a trailing \n):
printf("%-10s%-50s\n", $1, $2)
I didn't test it, but I think it will work.
If not, have a look at this issue; someone there had the same problem and solved it :)
Hope it helps!
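For completeness, that awk-style printf would have to live inside an awk program; a full invocation might look like this (the file name is illustrative):
awk '{ printf "%-20s %s\n", $1, $2 }' file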
I would like to substitute a set of single-byte characters with a set of literal strings in a stream, without any constraint on the line size.
#!/bin/bash
for (( i = 1; i <= 0x7FFFFFFFFFFFFFFF; i++ ))
do
printf '\a,\b,\t,\v'
done |
chars_to_strings $'\a\b\t\v' '<bell>' '<backspace>' '<horizontal-tab>' '<vertical-tab>'
The expected output would be:
<bell>,<backspace>,<horizontal-tab>,<vertical-tab><bell>,<backspace>,<horizontal-tab>,<vertical-tab><bell>...
I can think of a bash function that would do that, something like:
chars_to_strings() {
local delim buffer
while true
do
delim=''
IFS='' read -r -d '.' -n 4096 buffer && (( ${#buffer} != 4096 )) && delim='.'
if [[ -n "${delim:+_}" ]] || [[ -n "${buffer:+_}" ]]
then
# Do the replacements in "$buffer"
# ...
printf "%s%s" "$buffer" "$delim"
else
break
fi
done
}
But I'm looking for a more efficient way, any thoughts?
Since you seem to be okay with using ANSI C quoting via $'...' strings, then maybe use sed?
sed $'s/\a/<bell>/g; s/\b/<backspace>/g; s/\t/<horizontal-tab>/g; s/\v/<vertical-tab>/g'
Or, via separate commands:
sed -e $'s/\a/<bell>/g' \
-e $'s/\b/<backspace>/g' \
-e $'s/\t/<horizontal-tab>/g' \
-e $'s/\v/<vertical-tab>/g'
Or, using awk, which replaces newline characters too (by customizing the Output Record Separator, i.e., the ORS variable):
$ printf '\a,\b,\t,\v\n' | awk -vORS='<newline>' '
{
gsub(/\a/, "<bell>")
gsub(/\b/, "<backspace>")
gsub(/\t/, "<horizontal-tab>")
gsub(/\v/, "<vertical-tab>")
print $0
}
'
<bell>,<backspace>,<horizontal-tab>,<vertical-tab><newline>
For a simple one-liner with reasonable portability, try Perl.
for (( i = 1; i <= 0x7FFFFFFFFFFFFFFF; i++ ))
do
printf '\a,\b,\t,\v'
done |
perl -pe 's/\a/<bell>/g;
s/[\b]/<backspace>/g;s/\t/<horizontal-tab>/g;s/\x0B/<vertical-tab>/g'
(Note that in a Perl regex, \b outside a character class means a word boundary and \v matches any vertical whitespace, newlines included, hence the [\b] and \x0B spellings above.) Perl does some intelligent buffering internally, so it is not encumbered by lines longer than its input buffer.
Perl by itself is not POSIX, of course, but it can be expected to be installed on any even remotely modern platform (short of perhaps embedded systems).
Assuming the overall objective is to process a stream of data in real time, without having to wait for an EOL/end-of-buffer occurrence to trigger processing ...
A few items:
- continue to use the while/read -n loop to read a chunk of data from the incoming stream and store it in the buffer variable
- push the conversion code into something better suited to string manipulation (i.e., something other than bash); for the sake of discussion we'll choose awk
- within the while/read -n loop, printf "%s\n" "${buffer}" and pipe the output from the while loop into awk; NOTE: the key item is to introduce an explicit \n into the stream so as to trigger awk processing for each new 'line' of input; OP can decide whether this additional \n needs to be distinguished from a \n occurring in the original stream of data
- awk then parses each line of input as per the replacement logic, making sure to append anything left over to the front of the next line of input (i.e., for when the while/read -n breaks an item in the 'middle')
General idea:
chars_to_strings() {
while read -r -n 15 buffer # using '15' for demo purposes otherwise replace with '4096' or whatever OP wants
do
printf "%s\n" "${buffer}"
done | awk '{print NR,FNR,length($0)}' # replace 'print ...' with OP's replacement logic
}
Take for a test drive:
for (( i = 1; i <= 20; i++ ))
do
printf '\a,\b,\t,\v'
sleep 0.1 # add some delay to data being streamed to chars_to_strings()
done | chars_to_strings
1 1 15 # output starts printing right away
2 2 15 # instead of waiting for the 'for'
3 3 15 # loop to complete
4 4 15
5 5 13
6 6 15
7 7 15
8 8 15
9 9 15
A variation on this idea using a named pipe:
mkfifo /tmp/pipeX
sleep infinity > /tmp/pipeX # keep pipe open so awk does not exit
awk '{print NR,FNR,length($0)}' < /tmp/pipeX &
chars_to_strings() {
while read -r -n 15 buffer
do
printf "%s\n" "${buffer}"
done > /tmp/pipeX
}
Take for a test drive:
for (( i = 1; i <= 20; i++ ))
do
printf '\a,\b,\t,\v'
sleep 0.1
done | chars_to_strings
1 1 15 # output starts printing right away
2 2 15 # instead of waiting for the 'for'
3 3 15 # loop to complete
4 4 15
5 5 13
6 6 15
7 7 15
8 8 15
9 9 15
# kill background 'awk' and/or 'sleep infinity' when no longer needed
Don't waste FS/OFS: use the built-in variables to take care of two of the five needed replacements:
echo $' \t abc xyz \t \a \n\n ' |
mawk 'gsub(/\7/, "<bell>", $!(NF = NF)) + gsub(/\10/,"<bs>") +\
gsub(/\11/,"<h-tab>")^_' OFS='<v-tab>' FS='\13' ORS='<newline>'
<h-tab> abc xyz <h-tab> <bell> <newline><newline> <newline>
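The trick is that assigning NF = NF forces awk to rebuild the record, replacing every occurrence of FS (here \13, the vertical tab) with OFS, while ORS takes care of the record separator. A minimal isolated illustration of that rebuild trick:
printf 'a\vb\vc\n' | awk -v FS='\v' -v OFS='<v-tab>' '{ $1 = $1; print }'
# a<v-tab>b<v-tab>c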
To have NO constraint on the line length you could do something like this with GNU awk:
awk -v RS='.{1,100}' -v ORS= '{
$0 = RT
gsub(foo,bar)
print
}'
That will read and process the input 100 chars at a time no matter which chars are present, whether it has newlines or not, and even if the input was one multi-terabyte line.
Replace gsub(foo,bar) with whatever substitution(s) you have in mind, e.g.:
$ printf '\a,\b,\t,\v' |
awk -v RS='.{1,100}' -v ORS= '{
$0 = RT
gsub(/\a/,"<bell>")
gsub(/\b/,"<backspace>")
gsub(/\t/,"<horizontal-tab>")
gsub(/\v/,"<vertical-tab>")
print
}'
<bell>,<backspace>,<horizontal-tab>,<vertical-tab>
And of course it'd be trivial to pass a list of old and new strings to awk rather than hardcoding them; you'd just have to sanitize any regexp or backreference metacharacters before calling gsub().
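Alternatively, if sanitizing metacharacters is a concern, the replacements can be done literally (no regex at all) with index() and substr(); a sketch, with the variable names being illustrative:
printf '\a,\a\n' | awk -v old=$'\a' -v new='<bell>' '
{
    out = ""
    while ( (i = index($0, old)) > 0 ) {     # find the next literal occurrence
        out = out substr($0, 1, i - 1) new   # emit prefix plus replacement
        $0 = substr($0, i + length(old))     # continue after the match
    }
    print out $0
}'
# <bell>,<bell>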
I have a file with lots of lines.
12.26.20 11 record[71] dataset[12] message[17] end[19]
12.26.20 11 record[72] dataset[12] message[20] end[21]
12.26.20 11 record[73] dataset[0] message[22] end[30]
12.26.20 11 record[74] dataset[12] message[31] end[33]
I need to get something like this:
11 record 12 17-19
11 record 12 20-21
11 record 12 31-33
How can I do it in the terminal?
Awk is a good tool for this:
awk -F '[][ ]' '$7 == 12 { print $2" "$3" "$7" "$10"-"$13 }' file
Split the lines in the file using a space, [ and ], and then print the relevant delimited fields in the required format when the 7th field is equal to 12.
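To see exactly how the fields land with that separator set, you can number them (the empty fields come from adjacent separators such as '] '):
echo '12.26.20 11 record[71] dataset[12] message[17] end[19]' |
awk -F '[][ ]' '{ for (i = 1; i <= NF; i++) printf "%d=%s ", i, $i; print "" }'
# 1=12.26.20 2=11 3=record 4=71 5= 6=dataset 7=12 8= 9=message 10=17 11= 12=end 13=19 14=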
I have an input as below:
Sep 9 09:22:11
Hello
Hello
Sep 9 10:23:11
Hello
Hello
Hello
Sep 10 11:23:11
I expect the output below (runs of identical contiguous lines are replaced by a single line):
Sep 9 09:22:11
Hello
Sep 9 10:23:11
Hello
Sep 10 11:23:11
Could anyone help me solve this quickly using shell or awk?
Using awk you can do this:
awk '$0 != prev; {prev=$0}' file
Sep 9 09:22:11
Hello
Sep 9 10:23:11
Hello
Sep 10 11:23:11
Command breakdown:
$0 != prev; # if the previous line differs from the current one, print it (pattern with default print action)
{prev=$0}   # store the current line in a variable called prev
To remove repeats of lines, use uniq:
uniq File
With your sample input, for example:
$ uniq File
Sep 9 09:22:11
Hello
Sep 9 10:23:11
Hello
Sep 10 11:23:11
Although its name may imply that uniq concerns itself with unique lines, it does not: it looks for adjacent repeated lines and, by default, removes the repeats.
Just because you asked for shell too (though the given answers are all better solutions):
last=''
while read -r line
do if [[ "$line" == "$last" ]]
   then continue
   else echo "$line"
        last="$line"
   fi
done < infile
This is simple, clear, and likely slower than either awk or uniq.
I have 2 scripts, #1 and #2. Each works OK by itself. I want to read a 15-row file, row by row, and process it. Script #2 selects rows. Row 0 is indicated as firstline=0, lastline=1; row 14 would be firstline=14, lastline=15. I see good results from echo. I want to do the same with script #1, but I can't get my head around nesting correctly. Code below.
#!/bin/bash
# script 1
filename=slash
firstline=0
lastline=1
i=0
exec <${filename}
while read ; do
i=$(( $i + 1 ))
if [ "$i" -ge "${firstline}" ] ; then
if [ "$i" -gt "${lastline}" ] ; then
break
else
echo "${REPLY}" > slash1
fold -w 21 -s slash1 > news1
sleep 5
fi
fi
done
# script2
firstline=(0 1 2 3 4 5 6 7 8 9 10 11 12 13 14)
lastline=(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15)
for ((i=0;i<${#firstline[@]};i++))
do
echo ${firstline[$i]} ${lastline[$i]};
done
Your question is very unclear, but perhaps you are simply looking for some simple function calls:
#!/bin/bash
script_1() {
filename=slash
firstline=$1
lastline=$2
i=0
exec <${filename}
while read ; do
i=$(( $i + 1 ))
if [ "$i" -ge "${firstline}" ] ; then
if [ "$i" -gt "${lastline}" ] ; then
break
else
echo "${REPLY}" > slash1
fold -w 21 -s slash1 > news1
sleep 5
fi
fi
done
}
# script2
firstline=(0 1 2 3 4 5 6 7 8 9 10 11 12 13 14)
lastline=(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15)
for ((i=0;i<${#firstline[@]};i++))
do
script_1 ${firstline[$i]} ${lastline[$i]};
done
Note that reading the file this way is extremely inefficient, and there are undoubtedly better ways to handle this, but I am trying to minimize the changes from your code.
Update: Based on your later comments, the following idiomatic Bash code that uses sed to extract the line of interest in each iteration solves your problem much more simply:
Note:
- If the input file does not change between loop iterations, and the input file is small enough (as it is in the case at hand), it's more efficient to buffer the file contents in a variable up front, as is demonstrated in the original answer below.
- As tripleee points out in a comment: if simply reading the input lines sequentially is sufficient (as opposed to extracting lines by specific line numbers), then a single, simple while read -r line; do ... # fold and output, then sleep ... done < "$filename" loop is enough (see the sketch after the code below).
# Determine the input filename.
filename='slash'
# Count its number of lines.
lineCount=$(wc -l < "$filename")
# Loop over the line numbers of the file.
for (( lineNum = 1; lineNum <= lineCount; ++lineNum )); do
# Use `sed` to extract the line with the line number at hand,
# reformat it, and output to the target file.
fold -w 21 -s <(sed -n "$lineNum {p;q;}" "$filename") > 'news1'
sleep 5
done
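And the simple sequential variant mentioned in the note above would be just this (a sketch, mirroring the original script's fold-and-sleep behavior):
while read -r line; do
    fold -w 21 -s <<<"$line" > 'news1'
    sleep 5
done < "$filename"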
A simplified version of what I think you're trying to achieve:
#!/bin/bash
# Split fields by newlines on input,
# and separate array items by newlines on output.
IFS=$'\n'
# Read all input lines up front, into array ${lines[@]}
# In terms of your code, you'd use
# read -d '' -ra lines < "$filename"
read -d '' -ra lines <<<$'line 1\nline 2\nline 3\nline 4\nline 5\nline 6\nline 7\nline 8\nline 9\nline 10\nline 11\nline 12\nline 13\nline 14\nline 15'
# Define the arrays specifying the line ranges to select.
firstline=(0 1 2 3 4 5 6 7 8 9 10 11 12 13 14)
lastline=(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15)
# Loop over the ranges and select a range of lines in each iteration.
for ((i=0; i<${#firstline[@]}; i++)); do
extractedLines="${lines[*]: ${firstline[i]}: 1 + ${lastline[i]} - ${firstline[i]}}"
# Process the extracted lines.
# In terms of your code, the `> slash1` and `fold ...` commands would go here.
echo "$extractedLines"
echo '------'
done
Note:
The name of the array variable filled with read -ra is lines; ${lines[@]} is Bash syntax for returning all array elements as separate words (${lines[*]} also refers to all elements, but with slightly different semantics), and this syntax is used in the comments to illustrate that lines is indeed an array variable. (Note that if you were to use simply $lines to reference the variable, you'd implicitly get only the item with index 0, which is the same as ${lines[0]}.)
<<<$'line 1\n...' uses a here-string (<<<) to read an ad-hoc sample document (expressed as an ANSI C-quoted string ($'...')) in the interest of making my example code self-contained.
As stated in the comment, you'd read from $filename instead:
read -d '' -ra lines <"$filename"
extractedLines="${lines[*]: ${firstline[i]}: 1 + ${lastline[i]} - ${firstline[i]}}" extracts the lines of interest; ${firstline[i]} references the current element (index i) from array ${firstline[@]}; since the last token in Bash's array-slicing syntax (${lines[*]: <startIndex>: <elementCount>}) is the count of elements to return, we must perform a calculation to determine the count, which is what 1 + ${lastline[i]} - ${firstline[i]} does.
By virtue of using "${lines[*]...}" rather than "${lines[@]...}", the extracted array elements are joined by the first character in $IFS, which in our case is a newline ($'\n') (when extracting a single line, that doesn't really matter).
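A tiny self-contained demonstration of the slicing syntax (the values are illustrative):
arr=(a b c d e)
echo "${arr[*]:1:3}"   # -> b c d   (3 elements, starting at index 1)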
I am trying to create a script to break a file down into 24 files. The file "infoband.dat" contains the data of 24 bands that I want to plot, but rather than writing each band separately, it first writes all the 1st points of each band, then all the 2nd points, etc.
My script was supposed to read each line of the file, counting to 24 over and over until the end of the file. On the first iteration of the for loop, it creates a file with all the first lines out of those 24-line chunks, and it does that successfully. But the second iteration doesn't even start. What is breaking the loop?
#!/bin/sh
grep frequency band.yaml > infoband.dat
contadora=0
for i in {1..24}   # loop to create the band files; 24 is the no. of bands
do
  contadora=$((contadora+1))
  contadorb=0
  contadorc=0
  while read line
  do
    contadorb=$((contadorb+1))
    if [ $contadorb -eq 25 ]
    then
      contadorb=1
      contadorc=$((contadorc+1))
    fi
    if [ $contadora -eq $contadorb ]
    then
      echo $contadora $contadorb $contadorc "$line" >> band_$contadora.dat
    fi
  done < infoband.dat
  echo "file of the band " $contadora "is finished"
done
Update: I got the code done using a different approach (the variable contadorc is useless, btw):
#!/bin/sh
grep frequency band.yaml > infoband.dat
nband=24
contadorb=0
contadorc=0
while read line
do
  contadorb=$((contadorb+1))
  if [ $contadorb -eq $((nband+1)) ]
  then
    contadorb=1
    contadorc=$((contadorc+1))
  fi
  echo " "$contadorb" "$line" punto_q $contadorc" >> test_infoband.dat
done < infoband.dat
for i in `seq 1 $nband`
do
  echo $i $nband
  grep " $i " test_infoband.dat > banda_$i.dat
done
/bin/sh doesn't do brace expansion, so your loop only has one iteration, in which i is set to the literal string {1..24}. Either change the shebang to /bin/bash and/or run the script with bash, or use
for i in $(seq 1 24)
(assuming your system has the seq command; otherwise you may need to hard-code the list, or use a while loop to explicitly increment and test the value of i).
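A strictly POSIX sh alternative without seq, as a minimal sketch:
i=1
while [ "$i" -le 24 ]; do
    # ... loop body goes here ...
    i=$((i + 1))
done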
Did you try using the command "split"?
split -l 24 infoband.dat
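Note that split -l 24 writes successive 24-line chunks to separate files; if the goal is instead to send every 24th line to its own band file (one output file per band), a short awk sketch (the output names are illustrative):
awk '{ print > ("banda_" ((NR - 1) % 24 + 1) ".dat") }' infoband.dat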