Round up after decimal in awk in Solaris OS - shell

I want the total size of the files in a folder whose names start with a pattern like abc_1_*, on Sun Solaris OS, where I cannot use du -ch. Currently I am using a find command and getting the required output, but I want the output rounded after the decimal point.
Current code :-
echo `find $DUMPDIR -name "${DUMPFILE}*" -exec ls -ltr {} \; | awk ' {s+=$5} END {print s/1024/1024/1024}'`
output:-
1.768932
Desired output:-
1.7G
Kindly help me with this; I am new to Solaris.

find $DUMPDIR -name "${DUMPFILE}*" -exec ls -lh {} \; | awk '{print $5}'
You can use the GNU extension -h option for ls to print sizes in human-readable format. Note: as suggested by @Andrew Henle, -h is not guaranteed to be supported in ls.
Or simply use,
ls -lh "$DUMPDIR/${DUMPFILE}"* | cut -d' ' -f 5

You can round up a float in awk with
awk 'BEGIN {fl=1.768932; printf("%.1f G\n", fl)}'
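A minimal sketch combining the two, assuming the same $DUMPDIR and ${DUMPFILE} variables as in the question (printf rounds to the nearest value; int() truncates, which is what produces the 1.7G shown):
find "$DUMPDIR" -name "${DUMPFILE}*" -exec ls -l {} \; | awk '{s+=$5} END {printf("%.1fG\n", s/1024/1024/1024)}'
find "$DUMPDIR" -name "${DUMPFILE}*" -exec ls -l {} \; | awk '{s+=$5} END {g=s/1024/1024/1024; printf("%.1fG\n", int(g*10)/10)}'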

Related

Creating a list of files with their absolute paths in Linux

I have a large number of files (~50000 files).
ls /home/abc/def/
file1.txt
file2.txt
file3.txt
.........
.........
file50000.txt
I want to create a CSV file with two columns: first column provides the filename and the second provides the absolute file path as:
output.csv
file1.txt,/home/abc/def/file1.txt
file2.txt,/home/abc/def/file2.txt
file3.txt,/home/abc/def/file3.txt
.........................
.........................
file50000.txt,/home/abc/def/file50000.txt
How can I do this with bash commands? I tried with ls and find as
find /home/abc/def/ -type f -exec ls -ld {} \; | awk '{ print $5, $9 }' > output.csv
but this only gives me absolute paths. How do I get the output shown in output.csv above?
You can get both just the filename and the full path with GNU find's -printf option:
find /home/abc/def -type f -printf "%f,%p\n"
Pipe through sort if you want sorted results.
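Put together, an end-to-end sketch for the example directory (assuming GNU find) might be:
find /home/abc/def -type f -printf "%f,%p\n" | sort > output.csv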
How about:
$ find /path/ | awk -F/ -v OFS=, '{print $NF,$0}'
Add proper switches to find where needed.
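For instance, with the switches filled in for the question's directory and the result redirected (a sketch, not tested against every find implementation):
find /home/abc/def -type f | awk -F/ -v OFS=, '{print $NF,$0}' > output.csv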
If you want to fully canonicalize all existing paths, including collapsing duplicate / and resolving symlinks to their physical targets, why not just use
find … -print0 |
or
gls --zero |
or
mawk 8 ORS='\0' filelist.txt |
xargs -0 -P 8 grealpath -ePq
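For example, one assembled pipeline, assuming GNU findutils and GNU coreutils (grealpath being the name GNU realpath is commonly installed under on non-GNU systems), feeding the canonical paths into the CSV-producing awk from above:
find /home/abc/def -type f -print0 | xargs -0 -P 8 grealpath -ePq | awk -F/ -v OFS=, '{print $NF,$0}' > output.csv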
In plain bash:
for file in /home/abc/def/*.txt; do printf '%s,%s\n' "${file##*/}" "$file"; done
or,
dir=/home/abc/def
cd "$dir" && for file in *.txt; do printf '%s,%s\n' "$file" "$dir/$file"; done

How to use the "grep" command to list all files executable by the user in the current directory?

My command was this:
ls -l|grep "\-[r,-][w,-]x*"|tr -s " " | cut -d" " -f9
but as the result I get all the files, not only the ones the user has the right to execute (the owner x bit set).
I'm running Ubuntu Linux.
You can use find with the -perm option:
find . -maxdepth 1 -type f -perm -u+x
OK -- if you MUST use grep:
ls -l | grep '^[^d]..[sx]' | awk '{ print $9 }'
Don't use grep. If you want to know if a file is executable, use test -x. To check all files in the current directory, use find or a for loop:
for f in *; do test -f "$f" -a -x "$f" && echo "$f"; done
or
find . -maxdepth 1 -type f -exec test -x {} \; -print
Use awk with match
ls -l|awk 'match($1,/^...x/) {print $9}'
match($1,/^...x/): match the first field against the regular expression ^...x, i.e. check that the owner permissions end in x.
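For a quick illustration with a made-up ls -l line (the mode and filename are hypothetical):
echo '-rwxr--r-- 1 user group 0 Jan 1 00:00 script.sh' | awk 'match($1,/^...x/) {print $9}'
script.sh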

How do I rm a file from a shell script whose name starts with TRAN followed by a datetime stamp

I need to remove, from a shell script, a file whose name starts with TRAN followed by a date-time stamp.
ls -Al TRAN* | awk '{print $9}'
gives me the name of file on command line.
However, I can't seem to store it in a variable.
name=$(ls -Al TRAN* | awk '{print $9}')
on executing:
syntax error at line 32: `name=$' unexpected
Please advise
Processing the output of ls is frowned upon because it is quite fragile. Use find instead:
find . -maxdepth 1 -type f -name 'TRAN*' -delete
name=`ls -l | grep 'TRAN' | awk '{print $9}'`
This assignment worked
Thanks everyone!
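A follow-up sketch to actually remove what was matched, assuming an old Bourne shell (hence backticks rather than $(...)) and filenames without whitespace:
# capture the matching name(s)
name=`ls -Al TRAN* 2>/dev/null | awk '{print $9}'`
# remove them; $name is deliberately left unquoted so several matches split into separate arguments
[ -n "$name" ] && rm -f $name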

find command using ssh

The example below shows the file search and output format I need, and it works well with a local find.
> find /DBBACKMEUP/ -not -name "localhost*" -type f -name "*2012-10-26*" -exec du -b {} \; | awk '{print $2 "\t" $1}' | awk -F'/' '{print $NF}'
monitor_2012-10-26_22h00m.11.29.135.Friday.sql.gz 119601
test_2012-10-26_22h00m.10.135.Friday.sql.gz 530
status_2012-10-26_22h00m.1.29.135.Friday.sql.gz 944
But I need to run the same command on many servers, so I planned to execute it like this:
>ssh root@192.168.87.80 "find /DBBACKMEUP/ -not -name "localhost*" -type f -name "*2012-10-26*" -exec du -b {} \; | awk '{print $2 "\t" $1}' | awk -F'/' '{print $NF}'"
Of course this gives me blank output. Is there any way to pass such a search string through the shell and get the output I desire over ssh?
Thanks!!
Looks like your ssh command there has lots of quotes and double-quotes, which may be the root of your problem (no pun intended). I'd recommend that you create a shell script that will run the find command you desire, then place a copy of it on each server. After that, simply use ssh to execute that shell script instead of trying to pass in a complex command.
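A minimal sketch of that approach, using a hypothetical script name and location (/usr/local/bin/dbsizes.sh) on each server:
#!/bin/sh
# /usr/local/bin/dbsizes.sh - runs the same find/awk pipeline locally
find /DBBACKMEUP/ -not -name "localhost*" -type f -name "*2012-10-26*" -exec du -b {} \; | awk '{print $2 "\t" $1}' | awk -F'/' '{print $NF}'
Then the remote invocation needs no nested quoting:
ssh root@192.168.87.80 /usr/local/bin/dbsizes.sh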
Edit:
I think I misunderstood; please correct me if I'm wrong. Are you looking for a way to create a loop that will run the command on a range of IP addresses? If so, here's a recommendation - create a shell script like this:
#!/bin/bash
for ((C=0; C<255; C++)) ; do
for ((D=0; D<255; D++)) ; do
IP="192.168.$C.$D"
ssh root@$IP "find /DBBACKMEUP/ -not -name "localhost*" -type f -name "*2012-10-26*" -exec du -b {} \; | awk '{print "\$"2 \"\\t\" "\$"1}' | awk -F'/' '{print "\$"NF}'"
done
done
Each server?? That must be 749 servers - your option works for hard workers.. my approach works for a lazy goose ;) Just one trial did the trick ;)
ssh root@192.168.47.203 "find /DBBACKMEUP/ -not -name "localhost*" -type f -name "*2012-10-26*" -exec du -b {} \; | awk '{print "\$"2 \"\\t\" "\$"1}' | awk -F'/' '{print "\$"NF}'"
Tel_Avaya_Log_2012-10-26_22h00m.105.23.Friday.sql.gz 2119
test_2012-10-26_22h00m.10.25.Friday.sql.gz 529
OBD_2012-10-26_22h00m.103.2.203.Friday.sql.gz 914

How to do an ls command on output from an awk field

This takes a directory as a parameter:
#!/bin/bash
ls -l $1 |awk '$3!=$4{print $9}'
Now what I need is to be able to do ANOTHER ls -l on just the files that are found by the awk statement.
Yeah, it sounds dumb, and I know of like 3 other ways to do this, but not with awk.
Use awk system command:
ls -l $1 |awk '$3!=$4{system("ls -l " $9)}'
The command to use is xargs.
man xargs
should give some clues.
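For example, a sketch that builds on the question's command but changes into the directory first, so the bare filenames printed by awk resolve correctly (assuming names without spaces):
cd "$1" && ls -l | awk '$3!=$4 {print $9}' | xargs ls -l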
#!/bin/bash
ls -l $1 | awk '$3!=$4{ system( "ls -l '$1'/" $9 ) }'
If you are allowed to adjust the first ls -l, you could try:
ls -ld "$1"/* |awk '$3!=$4{print $9}' | xargs ls -l
In this case ls will prefix the directory. But I don't know if this is portable behavior.
