obtain md5sum on every linked library - bash

I've got an issue where a program suddenly doesn't want to start: no error, no nothing. To check the integrity of the code and its linked libraries I wanted to compare the md5sum of every (dynamically) linked library. From other posts in this forum I found it easy to list all the linked libraries and show them nicely:
ldd myProgram | grep so | sed -e '/^[^\t]/ d' \
| sed -e 's/\t//' | sed -e 's/.*=..//' \
| sed -e 's/ (0.*)//'
How can I add the md5sum or sha1sum so it will add a column with the checksum next to the filename? Simply adding md5sum only produces one line and doesn't seem to do the job:
ldd myProgram | grep so | sed -e '/^[^\t]/ d' \
| sed -e 's/\t//' | sed -e 's/.*=..//' \
| sed -e 's/ (0.*)//' | md5sum
yields
3baf2fafbce4dc8a313ded067c0fccab -
leaving md5sum out produces the nice list of linked libraries:
/lib/i386-linux-gnu/i686/cmov/libpthread.so.0
/lib/i386-linux-gnu/i686/cmov/librt.so.1
/lib/i386-linux-gnu/i686/cmov/libdl.so.2
/lib/i386-linux-gnu/libz.so.1
/usr/lib/i386-linux-gnu/libodbc.so.1
/usr/lib/libcrypto++.so.9
/lib/i386-linux-gnu/libpng12.so.0
/usr/lib/i386-linux-gnu/libstdc++.so.6
/lib/i386-linux-gnu/i686/cmov/libm.so.6
/lib/i386-linux-gnu/libgcc_s.so.1
/lib/i386-linux-gnu/i686/cmov/libc.so.6
/lib/ld-linux.so.2
/usr/lib/i386-linux-gnu/libltdl.so.7
Any hint is much appreciated!

What your script is doing is piping the literal text "/lib/i386-linux-gnu/i686/cmov/libpthread.so.0..." etc. into md5sum and calculating the checksum of that text, not of the files themselves...
You can use xargs to repeat a command on every line of input. The -I{} isn't strictly necessary, but I'd recommend it, as it makes your script more readable and easier to understand.
For example
adam@brimstone:~$ ldd $(which bash) \
| grep so | sed -e '/^[^\t]/ d' \
| sed -e 's/\t//' | sed -e 's/.*=..//' \
| sed -e 's/ (0.*)//' \
| xargs -I{} md5sum {}
6a0cb513f136f5c40332e3882e603a02 /lib/x86_64-linux-gnu/libtinfo.so.5
c60bb4f3ae0157644b993cc3c0d2d11e /lib/x86_64-linux-gnu/libdl.so.2
365459887779aa8a0d3148714d464cc4 /lib/x86_64-linux-gnu/libc.so.6
578a20e00cb67c5041a78a5e9281b70c /lib64/ld-linux-x86-64.so.2

A for loop can also be used:
for FILE in `<your command>`; do md5sum "$FILE"; done
For example:
for FILE in `ldd /usr/bin/gcc | grep so | sed -e '/^[^\t]/ d' | sed -e 's/\t//' | sed -e 's/.*=..//' | sed -e 's/ (0.*)//'`; do md5sum "$FILE"; done
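If any of the paths contained spaces, the backtick for loop would split them; a while read loop avoids that. A sketch reusing the same pipeline:
ldd myProgram | grep so | sed -e '/^[^\t]/ d' \
| sed -e 's/\t//' | sed -e 's/.*=..//' \
| sed -e 's/ (0.*)//' \
| while read -r lib; do md5sum "$lib"; done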

Related

Refining bash script with multiple find regex sed awk to array and functions that build a report

The following code is working, but it takes too long, and everything I've tried to reduce it bombs, either due to whitespace, inconsistent access.log syntax, or something else.
Any suggestions to help cut the multiple finds down to a single find $LOGS -mtime -30 -type f -print0 and one grep/sed/awk/sort pass, compared to repeating them like this, would be appreciated:
find $LOGS -mtime -30 -type f -print0 | xargs -0 grep -B 2 -w "RESULT err=0 tag=97" | grep -w "BIND" | sed '/uid=/!d;s//&\n/;s/.*\n//;:a;/,/bb;$!{n;ba};:b;s//\n&/;P;D' | sed 's/ //g' | sed s/$/,/g |awk '{a[$1]++}END{for(i in a)print i a[i]}' |sort -t , -k 2 -g > $OUTPUT1;
find $LOGS -mtime -30 -type f -print0 | xargs -0 grep -B 2 -w "RESULT err=0 tag=97" | grep -E 'BIND|LDAP connection from*' | sed '/from /!d;s//&\n/;s/.*\n//;:a;/:/bb;$!{n;ba};:b;s//\n&/;P;D' | sed 's/ //g' | sed s/$/,/g |awk '{a[$1]++}END{for(i in a)print i a[i]}' |sort -t , -k 2 -g > $IPAUTH0;
find $LOGS -mtime -30 -type f -print0 | xargs -0 grep -B 2 -w "RESULT err=49 tag=97" | grep -w "BIND" | sed '/uid=/!d;s//&\n/;s/.*\n//;:a;/,/bb;$!{n;ba};:b;s//\n&/;P;D' | sed 's/ //g' | sed s/$/,/g |awk '{a[$1]++}END{for(i in a)print i a[i]}' |sort -t , -k 2 -g > $OUTPUT2;
I've tried find | while read -r file; do grep1 >output1; grep2 >output2; grep3 >output3; done and a few others, but cannot seem to get the syntax right and am hoping to cut down the repetition here.
The full script (stripped of some content) can be found here and runs against a Java program I wrote for an email report. NOTE: This runs against access logs in about 60GB of combined text.
I haven't looked closely at the sed/awk/etc. section (and it'll be hard to work on without some example data), but you should be able to share the initial scans by grepping for lines matching any of the patterns, storing that in a temp file, and then searching just that file for the individual patterns. I'd also use find ... -exec instead of find ... | xargs:
tempfile=$(mktemp "${TMPDIR:-/tmp}/logextract.XXXXXX") || {
echo "Error creating temp file" >&2
exit 1
}
find $LOGS -mtime -30 -type f -exec grep -B 2 -Ew "RESULT err=(0|49) tag=97" {} + >"$tempfile"
grep -B 2 -w "RESULT err=0 tag=97" "$tempfile" | grep -w "BIND" | ...
grep -B 2 -w "RESULT err=0 tag=97" "$tempfile" | grep -E 'BIND|LDAP connection from*' | ...
grep -B 2 -w "RESULT err=49 tag=97" "$tempfile" | grep -w "BIND" | ...
rm "$tempfile"
BTW, you probably don't mean to search for LDAP connection from* -- the from* at the end means "fro" followed by 0 or more "m" characters.
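A quick illustration with a made-up log line:
echo "LDAP connection fro" | grep -E 'LDAP connection from*'   # still matches, since m* also matches zero "m"s
If a literal match is what you want, drop the trailing * or use a fixed-string search with grep -F 'LDAP connection from'.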
A couple of general scripting recommendations: use lower- or mixed-case variables to avoid accidental conflicts with the various all-caps names that have special meanings. (Except when you want the special meaning, e.g. setting PATH.)
Also, putting double-quotes around variable references is generally a good idea to prevent unexpected word splitting and wildcard expansion... except that in some places your script depends on this, like setting LOGS="/log_dump/ldap/c*", and then counting on wildcard expansion happening when the variable is used. In these cases, it's usually better to use a bash array to store each item (e.g. filename) as a separate element:
logs=(/log_dump/ldap/c*) # Wildcard gets expanded when it's defined
...
find "${logs[#]}" -mtime ... # All that syntax gets all array elements in unmangled form
Note that this isn't really needed in cases like this where you know there aren't going to be any unexpected wildcards or spaces in the variable, but when you're dealing with unconstrained data this method is safer. (I work mostly on macOS, where spaces in filenames are just a fact of life, and I've learned the hard way to use script idioms that aren't confused by them.)
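A tiny illustration of the quoting point, using a hypothetical filename with a space:
logfile="access log.txt"
grep -c "err=0" $logfile     # unquoted: word-splits into two arguments, "access" and "log.txt"
grep -c "err=0" "$logfile"   # quoted: the single filename is passed through intact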

Bash line breaks

I am using Git Bash to recursively find all of the file extensions in our legacy web site. When I pipe it to a file I would like to add line-breaks and a period in front of the file extension.
find . -type f -name "*.*" | grep -o -E "\.[^\.]+$" | grep -o -E "[[:alpha:]]{1,12}" | awk '{print tolower($0)}' | sort -u
There are different ways.
If you do not want to change your existing commands, I would be tempted to use
printf ".%s\n" $(find . -type f -name "*\.*" | grep -o -E "\.[^\.]+$" |
grep -o -E "[[:alpha:]]{1,12}" | awk '{print tolower($0)}' | sort -u ) # Wrong
This is incorrect. When a file extension has a space (like example.with space), it will be split into different lines.
Your command already outputs everything on separate lines, so you can just put a dot before each line with | sed 's/^/./'
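Putting that together with your original pipeline, a sketch would be:
find . -type f -name "*.*" | grep -o -E "\.[^\.]+$" | grep -o -E "[[:alpha:]]{1,12}" | awk '{print tolower($0)}' | sort -u | sed 's/^/./'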
You can also skip commands in the pipeline. For instance, you can let awk put the dot in front of each line with
find . -type f -name "*\.*" | grep -o -E "\.[^\.]+$" | grep -o -E "[[:alpha:]]{1,12}" | awk '{print "." tolower($0)}' | sort -u
Or you can let sed add the dot; with GNU sed it can also convert to lowercase:
find . -type f -name "*.*" | sed -r 's/.*\.([^.]*)$/.\L\1/' | sort -u
In the last command I skipped the grep for at most 12 characters; I think it works differently than you'd like:
echo test.qqqwwweeerrrtttyyyuuuiiioooppp | grep -o -E "\.[^\.]+$" | grep -o -E "[[:alpha:]]{1,12}"
Adding a second line break for each line can be done in different ways.
When you have the awk command, switch the awk and sort and use
awk '{print "." tolower($0) "\n"}'
Or add newlines at the end of the pipeline: sed 's/$/\n/'.
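For example, the swapped variant in full (a sketch; note that sort -u now runs before lowercasing, so extensions differing only in case are no longer merged):
find . -type f -name "*.*" | grep -o -E "\.[^\.]+$" | grep -o -E "[[:alpha:]]{1,12}" | sort -u | awk '{print "." tolower($0) "\n"}'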

Removing strings from multiple files

I'm trying to organise and rename some roms I got. I've already used the command line to remove regions like " (USA)" and " (Japan)", including the space in front, from the filenames. Now I need to update my .cue files. I've tried the following but something is missing...
grep --include={*.cue} -rnl './' -e " (USA)" | xargs -i# sed -i 's/ (USA)//g' #
grep --include={*.cue} -rnl './' -e " (Europe)" | xargs -i# sed -i 's/ (Europe)//g' #
grep --include={*.cue} -rnl './' -e " (Japan)" | xargs -i# sed -i 's/ (Japan)//g' #
I got it to work on one occasion but can't seem to get it right again...
Awesome, thanks! I used:
sed -i 's/ (Japan)//g;s/ (Europe)//g;s/ (USA)//g' *.cue
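If the .cue files sit in subdirectories (as the recursive grep suggests), a find-based variant of the same sed call might look like this (a sketch):
find . -type f -name '*.cue' -exec sed -i 's/ (Japan)//g;s/ (Europe)//g;s/ (USA)//g' {} +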

ps -ef | wc -l adds tab to the front of the line

I'm trying to store the output from a ps and later compare it.
I'm using the following line:
siteminder_running=`ps -ef | grep $iplanet_home | grep LLAWP | wc -l`
When I tried to compare the output I found that the variable has a tab in front of the number.
This is the output:
- 0- value
What could be the problem?
Like ruakh pointed out, you can just use grep -c:
siteminder_running=`ps -ef | grep $iplanet_home | grep -c LLAWP`
For fixing whitespace in general, I use xargs echo like this:
siteminder_running=`ps -ef | grep $iplanet_home | grep LLAWP | wc -l | xargs echo`
The wc(1) utility provided with most Unix and GNU/Linux releases prints the input filename and the count (of lines, characters, or whatever you asked it to count) separated by a TAB.
In the case of standard input there is no input filename, leaving just a TAB in front of the count.
There are several ways to circumvent this, for example:
ps -ef | grep $iplanet_home | grep LLAWP | wc -l | awk '{ printf "%d\n", $0 }'
printf $(ps -ef | grep $iplanet_home | grep LLAWP | wc -l)
ps -ef | grep $iplanet_home | grep LLAWP | wc -l | sed -e 's/^[ \t]*//'
As said, these are just examples; there are literally dozens of ways of accomplishing this.
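For the later comparison, a sketch using the grep -c form (assuming iplanet_home is set; grep -c prints a bare count with no leading whitespace):
siteminder_running=$(ps -ef | grep "$iplanet_home" | grep -c LLAWP)
if [ "$siteminder_running" -gt 0 ]; then
    echo "LLAWP appears to be running"
fi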

Optimize BASH Code to Delete First and Last Line of XML Files

How can this line from a BASH script be optimized to work faster in removing the first and last lines of a directory full of XML files?
sed -s -i -e 1d ./files/to/edit/*.xml && sed -s -i -e '$d' ./files/to/edit/*.xml
The sed command does not have to be used. Any BASH code will work; python3 would also be nice.
Try this:
sed -i '1d;$d' ./files/to/edit/*.xml
It's faster; see:
time find /usr/share/doc/x* | xargs -I% sed '1d' % && sed '$d' %
real 0m0.611s
user 0m0.033s
sys 0m0.120s
time find /usr/share/doc/x* | xargs -I% sed -e '1d' -e '$d' %
real 0m0.613s
user 0m0.027s
sys 0m0.140s
time find /usr/share/doc/x* | xargs -I% sed '1d;$d' %
real 0m0.565s
user 0m0.023s
sys 0m0.140s
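If the XML files are spread over a directory tree rather than a single folder, a find-based variant would look like this (a sketch, assuming GNU sed; with -i each file is processed separately, so $ addresses the last line of every file):
find ./files/to/edit -type f -name '*.xml' -exec sed -i '1d;$d' {} +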
