I have a file like this,
Filesystem State 1024-blocks Used Avail Capacity Mounted on
$ZPMON.DELETEMESTARTED 71686344 58788360 12897984 82% /deleteme
From this file I want to read the 1st and 5th columns without using the grep command.
I tried this command, but instead of the 5th column it shows the 6th column's output:
df -k DELETEME | awk '{print $1 $5 }'
FilesystemAvail
$ZPMON.DELETEMESTARTED82%.
expected output is
Avail
12897984
In your data line the Filesystem and State values run together into a single field, so every later field shifts left by one relative to the header; that is why $5 prints the Capacity value. With a single GNU df command:
df -k --output=avail DELETEME
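Since you also asked for the 1st column (the filesystem), GNU df can print both fields at once; a small variation on the command above:
df -k --output=source,avail DELETEME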
I'm trying to get the available space on a mounted disk:
df /tmp/mount/0dfShksftN | tail -1 | awk '{print $4}'
It works fine, except when the Filesystem value is a folder name containing spaces.
I found a solution for getting the Filesystem and Mount point values in that case:
df -P "/mnt/MOUNT WITH SPACES/path/to/file/filename.txt" | awk 'BEGIN {FS="[ ]*[0-9]+%?[ ]+"}; NR==2 {print $NF}'
But I can't find a solution for the Available field value. I could take the entire string and parse it myself, but maybe there is a way to do this using bash.
You can use the --output option of the df command.
df "<file-system>" --output=avail
Avail
868215420
For your original approach, you may need to consider counting columns backward from the end of the line.
From the man page of df(1):
FIELD_LIST is a comma-separated list of columns to be included. Valid
field names are: 'source', 'fstype', 'itotal', 'iused', 'iavail',
'ipcent', 'size', 'used', 'avail', 'pcent', 'file' and 'target' (see
info page).
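As a sketch of that backward-counting idea (assuming the mount point itself contains no spaces, so Available is always the third field from the end, and NR==2 skips the header):
df -P "/mnt/MOUNT WITH SPACES/path/to/file/filename.txt" | awk 'NR==2 {print $(NF-2)}'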
I am trying to get the CPU usage of a Mac over time.
I am using this top command in the terminal, and it gives the result I want, but I would like it to output to a file and update every 5 seconds.
top -l 1 | grep -E "^CPU|^Phys"
CPU usage: 3.27% user, 14.75% sys, 81.96% idle
PhysMem: 5807M used (1458M wired), 10G unused.
This command prints all 3 CPU usage percentages tab-separated to a file (appending line by line for each call):
top -l1 | grep -E "CPU usage:" | awk -v FS="CPU usage: | user, | sys, | idle" -v OFS='\t' '{print $2, $3, $4}' >> cpu_user_sys_idle.tsv
It works as a pipe-separated command chain:
top, as you suggested
grep to filter only the line with the CPU usage
awk with a variable field separator (-v FS) using any of the 4 strings as delimiters, so all percentages end up as isolated fields; it then prints the second, third and fourth (the first is omitted since it is empty), joined by tabs via OFS
>> redirects the output, appending to a file (e.g. cpu_user_sys_idle.tsv)
You can additionally put it into an automated or scheduled (Apple)Script to collect measurements at regular intervals.
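For the every-5-seconds part, a minimal shell loop sketch (the output file name is just a placeholder; adjust as needed):
while true; do
    top -l1 | grep -E "CPU usage:" | awk -v FS="CPU usage: | user, | sys, | idle" -v OFS='\t' '{print $2, $3, $4}' >> cpu_user_sys_idle.tsv
    sleep 5
done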
I have one file, test.sh. Its content looks like this:
Nas /mnt/enjayvol1/backup/test.sh lokesh
thinclient rsync /mnt/enjayvol1/esync/lokesh.sh lokesh
crm rsync -arz --update /mnt/enjayvol1/share/mehul mehul mehul123
I want to retrieve the strings that contain /mnt.
I want the output to be:
/mnt/enjayvol1/backup/test.sh
/mnt/enjayvol1/esync/lokesh.sh
/mnt/enjayvol1/share/mehul
I have tried:
grep -i "/mnt" test.sh | awk -F"mnt" '{print $2}'
but this does not give me accurate output. Please help.
Could you please try the following awk approach too and let me know if it helps you.
awk -v RS=" " '$0 ~ /\/mnt/' Input_file
Output will be as follows.
/mnt/enjayvol1/backup/test.sh
/mnt/enjayvol1/esync/lokesh.sh
/mnt/enjayvol1/share/mehul
Explanation: the record separator is set to a space, so each space-separated chunk becomes its own record; the condition then checks whether a record contains the string /mnt, and since no action is specified, the default action (print) runs. So only the chunks that contain /mnt are printed.
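Another awk sketch that walks the fields instead of changing the record separator (assuming the paths always start with /mnt/ and contain no whitespace):
awk '{for (i=1; i<=NF; i++) if ($i ~ /^\/mnt\//) print $i}' Input_file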
A short grep approach (assuming the /mnt... paths don't contain whitespace):
grep -o '\/mnt\/[^[:space:]]*' lokesh.sh
The output:
/mnt/enjayvol1/backup/test.sh
/mnt/enjayvol1/esync/lokesh.sh
/mnt/enjayvol1/share/mehul
I have a temp file with a few lines like these:
...
14G /Users/admin/Desktop/xy
1G /Users/admin/Desktop/yz
3G /Users/admin/Desktop/za
18G /Users/admin/Desktop
...
I only want to get the one line whose path is exactly "/Users/admin/Desktop" as output, but I don't know how to do it.
You can use grep:
grep "/Users/admin/Desktop$" file
The $ anchors the regular expression to the end of the line, so you don't pick up the lines that contain subdirectories.
You can use a minimal awk statement for this, like:
awk '$2=="/Users/admin/Desktop"{print $1}' file
18G
or print the entire line with:
awk '$2=="/Users/admin/Desktop"' file
18G /Users/admin/Desktop
I am trying to combine two tab-separated text files, but one of the fields is truncated by awk when I use the command below (please suggest something other than awk if that is easier):
pr -m -t test_v1 test.predict | awk -v OFS='\t' '{print $4,$5,$7}' > out_test8
The format of test_v1 is:
478 192 46 10203853138191712
but it only prints 10203853138 for $4, truncating the remaining digits. Should I use a string format?
Actually, following a suggestion, I found out that pr -m -t itself does not give the correct output.
478^I192^I46^I10203853138^I^I is the output of the command
pr -m -t test_v1 test.predict | cat -vte
I used paste test_v1 test.predict instead of pr and got the right answer.
Your problem is the use of pr -m (merge) here which, per the manual:
-m, --merge
print all files in parallel, one in each column, truncate lines, but join lines of full length with -J
You can use:
paste test_v1 test.predict
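So the full pipeline is just the original command with pr -m -t swapped out for paste (same field positions assumed):
paste test_v1 test.predict | awk -v OFS='\t' '{print $4,$5,$7}' > out_test8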
Run dos2unix on your files first; you've just got control-Ms in your input file(s).