I have a simple shell script, shown below, and I want to put a line break after each line returned by it.
#!/bin/bash
vcount=`db2 connect to db_lexus > /dev/null; db2 list tablespaces | grep -i "Tablespace ID" | wc -l`
db2pd -d db_lexus -tablespaces | grep -i "Tablespace Statistics" -A $vcount | awk '{printf ($2 $7)}'
The output is:
Statistics:IdFreePgs0537610230083224460850d
and I want the output to be something like this:
Statistics:
Id FreePgs
0 5376
1 0
2 3008
3 224
4 608
5 0
Is that possible to do with shell scripting?
Your problem can be reduced to the following:
$ cat infile
11 12
21 22
$ awk '{ printf ($1 $2) }' infile
11122122
printf is for formatted printing. I'm not even sure whether the behaviour of the above usage is well defined, but it's not how printf is meant to be used. Consider:
$ awk '{ printf ("%d %d\n", $1, $2) }' infile
11 12
21 22
"%d %d\n" is an expression that describes how to format the output: "a decimal integer, a space, a decimal integer and a newline", followed by the numbers that go where the %d are. printf is very flexible, see the manual for what it can do.
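A small illustration of those formatting controls (the widths and precision here are arbitrary, just to show what the format string can do):

```sh
printf '11 12\n21 22\n' | awk '{ printf ("%-4d|%6.2f\n", $1, $2) }'
# first number left-justified in 4 columns, second as a fixed-point
# value right-justified in 6 columns
```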
In this case, we don't really need the power of printf, we can just use print:
$ awk '{ print $1, $2 }' infile
11 12
21 22
This prints the first and second field, separated by a space¹ – and print does add a newline without us telling it to.
¹ More precisely, "separated by the value of the output field separator OFS", which defaults to a space and is printed wherever we use , between two arguments. Forgetting the comma is a popular mistake that leads to no space between the output fields.
It looks like you just want to print columns 2 and 7 of whatever is passed to AWK. Try changing your AWK command to
awk '{print $2, $7}'
This will also add a line break at the end.
I realize you are asking about how to do something in a shell script, but it would certainly be a LOT easier to get this from the database using SQL:
#!/bin/bash
export DB2DBDFT=db_lexus
db2 "select tbsp_id, tbsp_free_pages \
from table(mon_get_tablespace('',-2)) as T \
order by tbsp_id"
Related
I have a text file containing filesize, filedate, filetime, and filepath records. The filepath can contain spaces and can be very long (classical music names). I would like to print the file with filedate, filetime, filesize, and filepath. The first part, without the filepath is easy:
awk '{print $2,$3,$1}' filelist.txt
This works, but it prints the record on two lines:
awk '{print $2,$3,$1,$1=$2=$3=""; print $0}' filelist.txt
I've tried using cut -d' ' -f '2 3 1 4-' , but that doesn't allow rearranging fields. I can fix the two line issue using sed to join. There must be a way to only use awk. In summary, I want to print the 2nd, 3rd, 1st, and from the 4th field to the end. Can anyone help?
Since the print statement in awk always prints a newline in the end (technically ORS, which defaults to a newline), your first print will break the output in two lines.
With printf, on the other hand, you completely control the output with your format string. So, you can print the first three fields with printf (without the newline), then set them to "", and just finish off with the print $0 (which is equivalent to print without arguments):
awk '{ printf("%s %s %s",$2,$3,$1); $1=$2=$3=""; print }' file
I avoid awk when I can. If I understand correctly what you have said -
while read size date time path
do echo "$date $time $size $path"
done < filelist.txt
You could printf instead of echo for more formatting options.
Embedded spaces in $path won't matter since it's the last field.
I have no awk at hand to test but I suppose you may use printf to format a one-line output. Just locate the third space in $0 and take a substring from that position through the end of the input line.
You may also try to swap fields before a standard print, although I'm not sure it will produce desired results...
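That substring idea can be sketched with index() and substr() like this (assuming the first three fields never contain spaces; the sample line below is made up):

```sh
awk '{ p = index($0, " ")                     # position of the 1st space
       p += index(substr($0, p + 1), " ")     # ...of the 2nd
       p += index(substr($0, p + 1), " ")     # ...of the 3rd
       print $2, $3, $1, substr($0, p + 1) }' filelist.txt
```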
It always helps to delimit your fields with something like <tab>, so subsequent operations are easier.
echo 1 2 3 very long name |
sed -e 's/ /\t/' -e 's/ /\t/' -e 's/ /\t/' |
awk -v FS='\t' -v OFS='\t' '{print $2, $3, $1, $4}'
The first line generates data. The sed command substitutes first three spaces in each row with \t. Then the awk works flawlessly, outputting tab delimited data again (you need a reasonably new awk).
With GNU awk for gensub():
$ echo '1 2 3 4 5 6' | awk '{print $3, $2, $1, gensub(/([^ ]+ ){3}/,"",1)}'
3 2 1 4 5 6
With any awk:
$ echo '1 2 3 4 5 6' | awk '{rest=$0; sub(/([^ ]+ ){3}/,"",rest); print $3, $2, $1, rest}'
3 2 1 4 5 6
I have this file content:
2450TO3450
3800
4500TO4560
And I would like to obtain something of this sort:
2450
2454
2458
...
3450
3800
4500
4504
4508
..
4560
Basically I need a one-liner in sed/awk that reads the values on both sides of the TO separator and either injects them into a seq command or does the loop on its own, dumping the result into the same file, one value per line, with an arbitrary increment (let's say 4 in the example above).
I know I could use a temp file, the read command and sorting, but I would like to do it as a one-liner starting with cat filename | etc., as it is already part of a bigger script.
Correctness of the input is guaranteed: the left side of TO is always smaller than the right side.
Thanks
Like this:
awk -F'TO' -v inc=4 'NF==1{print $1;next}{for(i=$1;i<=$2;i+=inc)print i}' file
or, if you like starting with cat:
cat file | awk -F'TO' -v inc=4 'NF==1{print $1;next}{for(i=$1;i<=$2;i+=inc)print i}'
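To sanity-check the one-liner without generating hundreds of lines, here is the same command with the sample data inlined and a larger increment:

```sh
printf '2450TO3450\n3800\n4500TO4560\n' |
awk -F'TO' -v inc=1000 'NF==1{print $1; next} {for (i=$1; i<=$2; i+=inc) print i}'
```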
Something like this might work:
awk -F TO '{system("seq " $1 " 4 " ($2 ? $2 : $1))}'
This would tell awk to system (execute) the command seq 10 4 10 for lines just containing 10 (which outputs 10), and something like seq 10 4 40 for lines like 10TO40. The output seems to match your example.
Given:
txt="2450TO3450
3800
4500TO4560"
You can do:
echo "$txt" | awk -F TO '{t=($2<$1?$1:$2); for(i=$1; i<=t; i++) print i}'
If you want an increment greater than 1:
echo "$txt" | awk -F TO -v p=4 '{t=($2<$1?$1:$2); for(i=$1; i<=t; i+=p) print i}'
Give a try to this:
sed 's/TO/ /' file.txt | while read first second; do if [ ! -z "$second" ] ; then seq $first 4 $second; else printf "%s\n" $first; fi; done
sed replaces TO with a space character.
read then reads the line: if there are two numbers, seq generates the sequence; otherwise, the single number is printed.
This might work for you (GNU sed):
sed -r 's/(.*)TO(.*)/seq \1 4 \2/e' file
When the LHS matches (i.e. the line contains TO), the e flag executes the substituted pattern space – here a seq command – as a shell command and replaces it with that command's output; other lines pass through unchanged.
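A quick check with sample data inlined (GNU sed is required for both -r and the e flag):

```sh
printf '2450TO2462\n3800\n' | sed -r 's/(.*)TO(.*)/seq \1 4 \2/e'
# lines with TO become "seq FIRST 4 LAST" and are executed;
# lines without TO are printed as-is
```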
I might be going about this the wrong way, but I have tried every syntax I can think of, and the error below is the closest I have got.
I have a log file, in which I want to filter to a set of lines like so:
Files : 1 1 1 1 1
Files : 3 3 4 4 5
Files : 10 4 2 3 1
Files : 254 1 1 1 1
The code I have will get me to this point, however, I want to use awk to perform addition of all of the first numeric column, in this instance giving 268 as the output (then performing a similar task on the other columns).
I have tried to pipe the awk output into a loop to perform the final step, but it won't add the values, throwing an error. I thought it could be due to awk handling the entries as a string, but as bash isn't strongly typed it should not matter?
Anyway, the code is:
x=0;
iconv -f UTF-16 -t UTF-8 "./TestLogs/rbTest.log" | grep "Files :" | grep -v "*.*" | egrep -v "Files : [a-zA-Z]" |awk '{$1=$1}1' OFS="," | awk -F "," '{print $4}' | while read i;
do
$x=$((x+=i));
done
Error message:
-bash: 0=1: command not found
-bash: 1=4: command not found
-bash: 4=14: command not found
-bash: 14=268: command not found
I tried a couple of the different addition syntaxes but I feel this has something to do with what I am trying to feed it than the addition itself.
This is currently just with integer values but I would also be looking to perform it with floats as well.
Any help much appreciated and I am sure there is a less convoluted way to achieve this, still learning.
You can do computations in awk itself:
awk '{for (c=3; c<=NF; c++) sum[c]+=$c} END{printf "Total : ";
for (c=3; c<=NF; c++) printf "%s%s", sum[c], ((c<NF)? OFS:ORS) }' file
Output:
Total : 268 9 8 9 8
Here sum is an associative array that holds sum for each column from #3 onwards.
Command breakup:
for (c=3; c<=NF; c++) # Iterate from 3rd col to last col
sum[c]+=$c # Add each col value into an array sum with index of col #
END # Execute this block after last record
printf "Total : " # Print literal "Total : "
for (c=3; c<=NF; c++) # Iterate from 3rd col to last col
printf "%s%s", # Use printf to format the output as 2 strings (%s%s)
sum[c], # 1st one is sum for the given index
((c<NF)? OFS:ORS) # 2nd is conditional string. It will print OFS if it is not last
# col and will print ORS if it is last col.
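The same approach works for floats, since awk does all its arithmetic in floating point; only the output formatting needs a tweak, e.g. %g instead of %s (the sample data below is made up):

```sh
printf 'Files : 1.5 2\nFiles : 2.25 3\n' |
awk '{ for (c=3; c<=NF; c++) sum[c] += $c }
     END { printf "Total :"; for (c=3; c<=NF; c++) printf " %g", sum[c]; print "" }'
```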
(Not an answer, but a formatted comment)
I always get antsy when I see a long pipeline of greps and awks (and seds, etc)
... | grep "Files :" | grep -v "*.*" | egrep -v "Files : [a-zA-Z]" | awk '{$1=$1}1' OFS="," | awk -F "," '{print $4}'
Can be written as
... | awk '/Files : [^[:alpha:]]/ && !/\*/ {print $4}'
Are you using grep -v "*.*" to filter out lines with dots, or lines with asterisks? Because you're achieving the latter.
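Taking it one step further, the filtering and the summing can live in the same awk program, removing the shell loop entirely. A sketch using the sample lines from the question ($3 is the first numeric column):

```sh
printf 'Files : 1 1 1 1 1\nFiles : 3 3 4 4 5\nFiles : 10 4 2 3 1\nFiles : 254 1 1 1 1\n' |
awk '/Files : [^[:alpha:]]/ && !/\*/ { total += $3 } END { print total }'
```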
I have a Problem when trying to awk a READ input in a while loop.
This is my code:
#!/bin/bash
read -p "Please enter the Array LUN ID (ALU) you wish to query, separated by a comma (e.g. 2036,2037,2045): " ARRAY_LUNS
LUN_NUMBER=`echo $ARRAY_LUNS | awk -F "," '{ for (i=1; i<NF; i++) printf $i"\n" ; print $NF }' | wc -w`
echo "you entered $LUN_NUMBER LUN's"
s=0
while [ $s -lt $LUN_NUMBER ];
do
s=$[$s+1]
LUN_ID=`echo $ARRAY_LUNS | awk -F, '{print $'$s'}' | awk -v n1="$s" 'NR==n1'`
echo "NR $s :"
echo "awk -v n1="$s" 'NR==n1'$LUN_ID"
done
No matter what awk options I try, I can't get it to display more than the first entry before the comma. It looks to me like the loop has problems counting the variable s upwards. But on the other hand, the code line:
LUN_ID=`echo $ARRAY_LUNS | awk -F, '{print $'$s'}' | awk -v n1="$s" 'NR==n1'`
works just fine! Any idea how to solve this? Another solution for my READ input would be just fine as well.
#!/bin/bash
typeset -a ARRAY_LUNS
IFS=, read -p "Please enter the Array LUN ID (ALU) you wish to query, separated by a comma (e.g. 2036,2037,2045): " -a ARRAY_LUNS
LUN_NUMBER="${#ARRAY_LUNS[@]}"
echo "you entered $LUN_NUMBER LUNs"
for((s=0;s<LUN_NUMBER;s++))
do
echo "LUN id $s: ${ARRAY_LUNS[s]}"
done
Why does your awk code not work?
The problem is not the counter; it is the last awk command in the pipe, i.e.
awk -v n1="$s" 'NR==n1'
This command tries to print the first line when s is 1, the second line when s is 2, the third line when s is 3, and so on... But how many lines does echo $ARRAY_LUNS print? Just ONE... there is no second line, no third line... just ONE line, and just that ONE line is printed.
That line contains all the LUN_IDs in ONE LINE, one LUN_ID next to another, like this:
34 45 21 223
NOT this way
34
45
21
223
Those LUN_IDs are fields printable by awk using $1, $2, $3, ... and so on.
Therefore, if you want your code to run fine, just remove that last command in the pipe:
LUN_ID=$(echo "$ARRAY_LUNS" | awk -F, '{print $'$s'}')
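And if the goal is simply one LUN ID per line, a single awk pass over the comma-separated string does it without any shell loop or counter (the input shown is made up):

```sh
ARRAY_LUNS="2036,2037,2045"    # hypothetical user input
echo "$ARRAY_LUNS" | awk -F, '{ for (i = 1; i <= NF; i++) print $i }'
```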
Please, for any further questions, first read an awk guide.
My command's output is something like:
1540 "A B"
6 "C"
119 "D"
The first column is always a number, followed by a space, then a double-quoted string.
My purpose is to get the second column only, like:
"A B"
"C"
"D"
I intended to use <some_command> | awk '{print $2}' to accomplish this. But the question is, some values in the second column contain space(s), which happens to be the default delimiter for awk to separate the fields. Therefore, the output is messed up:
"A
"C"
"D"
How do I get the second column's value (with paired quotes) cleanly?
Use -F (field separator) to split the lines on the double-quote character:
awk -F '"' '{print $2}' your_input_file
or for input from pipe
<some_command> | awk -F '"' '{print $2}'
output:
A B
C
D
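If the paired quotes must stay in the output (as the question asks), printf can put them back around $2, a small variant of the above:

```sh
printf '1540 "A B"\n6 "C"\n119 "D"\n' |
awk -F '"' '{ printf "\"%s\"\n", $2 }'
```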
If you can use something other than awk, try this instead:
echo '1540 "A B"' | cut -d' ' -f2-
-d specifies the delimiter and -f the fields to extract; -f2- selects everything from the 2nd field to the end of the line.
This also works for getting a specific column out of tabular command output, such as docker images:
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu 16.04 12543ced0f6f 10 months ago 122 MB
ubuntu latest 12543ced0f6f 10 months ago 122 MB
selenium/standalone-firefox-debug 2.53.0 9f3bab6e046f 12 months ago 613 MB
selenium/node-firefox-debug 2.53.0 d82f2ab74db7 12 months ago 613 MB
docker images | awk '{print $3}'
IMAGE
12543ced0f6f
12543ced0f6f
9f3bab6e046f
d82f2ab74db7
This prints the third column. Note that the header row yields just IMAGE, because awk splits the IMAGE ID heading into two fields.
Or use sed & regex.
<some_command> | sed 's/^.* \(".*"$\)/\1/'
You don't need awk for that. Using read in a Bash shell is enough, e.g.
some_command | while read -r c1 c2; do echo "$c2"; done
or:
while read -r c1 c2; do echo "$c2"; done < in.txt
If you have GNU awk this is the solution you want:
$ awk '{print $1}' FPAT='"[^"]+"' file
"A B"
"C"
"D"
Another variant: replace the double quotes with | and let awk re-split the line (assigning to $0 via gsub triggers a re-split with FS="|"), then print the second field with the quotes re-added:
awk -F"|" '{gsub(/\"/,"|");print "\""$2"\""}' your_file
#!/usr/bin/python
import sys

col = int(sys.argv[1]) - 1
for line in sys.stdin:
    columns = line.split()
    try:
        print(columns[col])
    except IndexError:
        # ignore lines with too few columns
        pass
Then, supposing you name the script co, do something like this to get the sizes of files (the example assumes you're using Linux, but the script itself is OS-independent):
ls -lh | co 5