find command using ssh - shell

The example below shows the file search and output format I need, which works fine with a local find.
> find /DBBACKMEUP/ -not -name "localhost*" -type f -name "*2012-10-26*" -exec du -b {} \; | awk '{print $2 "\t" $1}' | awk -F'/' '{print $NF}'
monitor_2012-10-26_22h00m.11.29.135.Friday.sql.gz 119601
test_2012-10-26_22h00m.10.135.Friday.sql.gz 530
status_2012-10-26_22h00m.1.29.135.Friday.sql.gz 944
But I need to run the same command on many servers, so I planned to execute it like this.
>ssh root@192.168.87.80 "find /DBBACKMEUP/ -not -name "localhost*" -type f -name "*2012-10-26*" -exec du -b {} \; | awk '{print $2 "\t" $1}' | awk -F'/' '{print $NF}'"
Of course this gives me blank output. Is there any way to quote such a search string in the shell so I get the output I want over ssh?
Thanks!!

Looks like your ssh command there has lots of quotes and double quotes, which may be the root of your problem (no pun intended). I'd recommend that you create a shell script that runs the find command you want, then place a copy of it on each server. After that, simply use ssh to execute that shell script instead of trying to pass in a complex command.
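For example, here's a minimal sketch (the script name, install path, and date argument are hypothetical; the pipeline is the one from the question):
#!/bin/sh
# dbbackup-sizes.sh - hypothetical helper copied to every server.
# Prints "filename<TAB>size-in-bytes" for backup files matching a date.
DATE="${1:-2012-10-26}"
find /DBBACKMEUP/ -not -name "localhost*" -type f -name "*${DATE}*" \
    -exec du -b {} \; | awk '{print $2 "\t" $1}' | awk -F'/' '{print $NF}'
With that in place, the remote call needs no tricky quoting at all:
ssh root@192.168.87.80 /usr/local/bin/dbbackup-sizes.sh 2012-10-26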
Edit:
I think I misunderstood; please correct me if I'm wrong. Are you looking for a way to create a loop that will run the command on a range of IP addresses? If so, here's a recommendation - create a shell script like this:
#!/bin/bash
for ((C=0; C<255; C++)) ; do
    for ((D=0; D<255; D++)) ; do
        IP="192.168.$C.$D"
        ssh root@$IP "find /DBBACKMEUP/ -not -name "localhost*" -type f -name "*2012-10-26*" -exec du -b {} \; | awk '{print "\$"2 \"\\t\" "\$"1}' | awk -F'/' '{print "\$"NF}'"
    done
done

Each server?? That must be 749 servers - your option works for hard workers; my approach suits a lazy goose ;) A quick trial did the trick ;)
ssh root@192.168.47.203 "find /DBBACKMEUP/ -not -name "localhost*" -type f -name "*2012-10-26*" -exec du -b {} \; | awk '{print "\$"2 \"\\t\" "\$"1}' | awk -F'/' '{print "\$"NF}'"
Tel_Avaya_Log_2012-10-26_22h00m.105.23.Friday.sql.gz 2119
test_2012-10-26_22h00m.10.25.Friday.sql.gz 529
OBD_2012-10-26_22h00m.103.2.203.Friday.sql.gz 914

Related

Unable to run a command that contains single quotes through ssh

I can run the following command on a machine (let's call it machine A).
find /foo/bar -name "*" -type f -exec md5sum {} + | awk '{print $1}' | sort
This command lists the md5s of the files under /foo/bar.
However, when I embed this command in an ssh command,
ssh -i ~/.ssh/my_key my_user@123.123.123.123 'find /foo/bar -name "*" -type f -exec md5sum {} + | awk '{print $1}' | sort'
It generates the following error
awk: cmd. line:1: {print
awk: cmd. line:1: ^ unexpected newline or end of string
123.123.123.123 is the IP address of machine A. The ssh command is run from another machine.
Is it possible to embed the command in the ssh command?
I've tried the following command; it doesn't work either.
ssh -i ~/.ssh/my_key my_user@123.123.123.123 'find /foo/bar -name "*" -type f -exec md5sum {} + | awk \'{print $1}\' | sort'
The single quote after awk terminates the opening single quote before find.
In your case the remote awk needs a literal $1, so if you use double quotes for the ssh command, escape the dollar sign to stop the local shell from expanding it:
ssh -i ~/.ssh/my_key my_user@123.123.123.123 "find /foo/bar -name '*' -type f -exec md5sum {} + | awk '{print \$1}' | sort"
An alternative would be to replace each ' around the awk argument by '"'"', but this is awkward to read.
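Spelled out, that variant of the same command would be (each embedded single quote becomes '"'"'):
ssh -i ~/.ssh/my_key my_user@123.123.123.123 'find /foo/bar -name "*" -type f -exec md5sum {} + | awk '"'"'{print $1}'"'"' | sort'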
Single quotes cannot appear (even escaped) inside a single-quoted string. But they can appear (properly escaped) in an ANSI-quoted string.
ssh -i ~/.ssh/my_key my_user@123.123.123.123 \
$'find /foo/bar -name "*" -type f -exec md5sum {} + | awk \'{print $1}\' | sort'

How to use "grep" command to list all the files executable by user in current directory?

My command was this:
ls -l|grep "\-[r,-][w,-]x*"|tr -s " " | cut -d" " -f9
but as the result I get all the files, not only the ones the user has the right to execute (the first x bit is set).
I'm running Linux (Ubuntu).
You can use find with the -perm option:
find . -maxdepth 1 -type f -perm -u+x
OK -- if you MUST use grep:
ls -l | grep '^[^d]..[sx]' | awk '{ print $9 }'
Don't use grep. If you want to know if a file is executable, use test -x. To check all files in the current directory, use find or a for loop:
for f in *; do test -f "$f" -a -x "$f" && echo "$f"; done
or
find . -maxdepth 1 -type f -exec test -x {} \; -print
Use awk with match
ls -l|awk 'match($1,/^...x/) {print $9}'
match($1,/^...x/): matches the first field against the regular expression ^...x, i.e. entries whose owner permissions end in x.

shell script to display app version number

I am trying to get the application version using the below command
#!/bin/sh
appVersion=$(ssh username@server find '/dir1/dir2/dir3' -type f -name "file.json" -exec grep "version" {} \;| awk -F ': ' '{print $2}' | sed 's/\"//g')
echo $appVersion
Unfortunately am getting the below exception
find: missing argument to `-exec'
Please help me to resolve this issue.
Run the script below, which should behave as expected.
When you run a command over ssh, wrap the whole remote pipeline in one set of quotes so the local shell passes it through intact:
#!/bin/sh
appVersion=$(ssh username@server 'find /dir1/dir2/dir3 -type f -name "file.json" -exec grep "version" {} \; | awk -F ": " "{print \$2}" | sed "s/\"//g"')
echo "$appVersion"
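If the nested quoting still feels fragile, another sketch (same placeholder host and paths) is to send the pipeline to the remote shell on standard input, so the local shell never parses it at all:
#!/bin/sh
# The quoted here-doc delimiter ('EOF') keeps the local shell from
# expanding or stripping anything inside the block.
appVersion=$(ssh username@server sh -s <<'EOF'
find /dir1/dir2/dir3 -type f -name "file.json" -exec grep "version" {} \; |
    awk -F ': ' '{print $2}' | sed 's/"//g'
EOF
)
echo "$appVersion"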

using pipes with a find command

I have a series of delimited files, some of which have some bad data and can be recognized by doing a column count on them. I can find them with the following command:
find ./ -name 201201*gz -mtime 12
They are all gzipped and I do not want to un-archive them all. So to check the column counts I've been doing I'm running this as a second command on each file:
zcat ./path/to/file.data | awk '{print NF}' | head
I know I can run a command on each file through find with -exec, but how can I also get it to run through the pipes? A couple things I tried, neither of which I expected to work and neither of which did:
find ./ -name 201201*gz -mtime 12 -print -exec zcat {} \; | awk '{print NF}'| head
find ./ -name 201201*gz -mtime 12 -print -exec "zcat {} | awk '{print NF}'| head" \;
I'd use an explicit loop approach:
find . -name "201201*gz" -mtime 12 | while IFS= read -r file; do
    echo "$file:"
    zcat "$file" | awk '{print NF}' | head
done
More or less, you pipe things out of find like this:
find . -name "foo" -print0 | xargs -0 echo
So your command would look like:
find ./ -name "201201*gz" -mtime 12 -print0 | xargs -0 zcat | awk '{print NF}'| head
-print0 and xargs -0 just help make sure that file names with special characters don't break the pipe.
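One caveat: xargs -0 zcat merges all the archives into a single stream, so head stops after ten lines overall rather than ten per file. To keep the per-file behaviour of the original two-step check, a sketch like this works:
# Run the zcat | awk | head pipeline once per file, labelling each one.
find . -name "201201*gz" -mtime 12 -exec sh -c '
    for f; do
        echo "$f:"
        zcat "$f" | awk "{print NF}" | head
    done
' sh {} +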

Use find, wc, and sed to count lines

I was trying to use sed to count all the lines in files with a particular extension.
find -name '*.m' -exec wc -l {} \; | sed ...
How would I include sed in this particular line to get the totals?
You may also get the nice formatting from wc with:
wc `find -name '*.m'`
Most of the answers here won't work well for a large number of files. Some will break if the list of file names is too long for a single command line call, others are inefficient because -exec starts a new process for every file. I believe a robust and efficient solution would be:
find . -type f -name "*.m" -print0 | xargs -0 cat | wc -l
Using cat in this way is fine, as its output is piped straight into wc so only a small amount of the files' content is kept in memory at once. If there are too many files for a single invocation of cat, cat will be called multiple times, but all the output will still be piped into a single wc process.
You can cat all files through a single wc instance to get the total number of lines:
find . -name '*.m' -exec cat {} \; | wc -l
On modern GNU platforms, find and wc take the -print0 and --files0-from parameters, which can be combined into a command that counts lines in files, with a total at the end. Example:
find . -name '*.c' -type f -print0 | wc -l --files0-from=-
You could also use sed in place of wc for counting lines:
find . -name '*.m' -exec sed -n '$=' {} \;
where '$=' means: on the last line ($), print the current line number (=), i.e. the file's line count
EDIT
You could also try something like sloccount.
Hm, the solution with cat may be problematic if you have many files, especially big ones.
The second solution doesn't give a total, just lines per file, as I tested.
I'll prefer something like this:
find . -name '*.m' | xargs wc -l | tail -1
This does the job fast, no matter how many or how big the files are. (Caveat: if the list is long enough that xargs splits it across several wc runs, tail -1 only reports the last batch's total.)
sed is not the proper tool for counting. Use awk instead:
find . -name '*.m' -exec awk 'END {print NR}' {} +
Using + instead of \; makes find pass a batch of files to each awk invocation (like xargs does); NR keeps accumulating across the batch, so END {print NR} prints the batch total.
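A quick way to see the batching (assuming a few *.m files are present) is to substitute echo for awk:
find . -name '*.m' -exec echo per-file: {} \;
find . -name '*.m' -exec echo per-batch: {} +
With \; each file produces its own line; with + find packs as many file names as fit onto each invocation.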
For big directories we should use:
find . -type f -name '*.m' -exec sed -n '$=' '{}' + 2>/dev/null | awk '{ total+=$1 }END{print total}'
# alternative using awk twice
find . -type f -name '*.m' -exec awk 'END {print NR}' '{}' + 2>/dev/null | awk '{ total+=$1 }END{print total}'
