How to check UID exists using ldapsearch - shell

I would like to have a shell script use 'ldapsearch' to compare UIDs listed in a text file with those on a remote LDAP directory.
I'm no shell script expert and would appreciate any assistance. The following loops through a text file given as an argument, but what I need is to echo when a UID in my text file does not exist in the LDAP directory.
#!/bin/sh
for i in `cat $1`;
do ldapsearch -x -H ldaps://ldap-66.example.com -b ou=People,dc=crm,dc=example,dc=com uid=$i | grep uid: | awk '{print $2}';
echo $i
done

Try:
#!/bin/sh
url="ldaps://ldap-66.example.com"
basedn="ou=People,dc=crm,dc=example,dc=com"
for i in `cat $1`; do
    if ldapsearch -x -H "$url" -b "$basedn" uid=$i uid > /dev/null
    then
        # Do nothing
        true
    else
        echo $i
    fi
done

Anttix, this is the final version that worked. Thanks again. Does this mark the question as accepted?
#!/bin/sh
url="ldaps://ldap-66.example.com"
basedn="ou=People,dc=crm,dc=example,dc=com"
for i in `cat $1`; do
    if ldapsearch -x -H "$url" -b "$basedn" uid=$i | grep uid: | awk '{print $2}' > /dev/null
    then
        # Do nothing
        true
    else
        echo $i
    fi
done

#!/bin/sh
uids="${1:?missing expected file argument}"
URL='ldaps://ldap-66.example.com'
BASE='ou=People,dc=crm,dc=example,dc=com'
ldapsearch \
    -x -LLL -o ldif-wrap=no -c \
    -H "$URL" \
    -b "$BASE" \
    -f "$uids" \
    '(uid=%s)' uid |
sed \
    -r -n \
    -e 's/^uid: //p' \
    -e 's/^uid:: (.+)/echo "\1"|base64 -d/ep' |
grep \
    -F -x -v \
    -f - \
    "$uids"
Explanation
for the ldapsearch part
You can invoke ldapsearch -f just once to perform multiple searches and read them from a file $uids.
The (uid=%s) is used as a filter template; the %s is filled in with each line read from the file.
The -c is needed to continue even on errors, e.g. when a UID is not found.
The -x enforces simple authentication, which should be considered insecure; better to use SASL.
The -LLL removes unneeded cruft from the ldapsearch output.
The -o ldif-wrap=no will prevent lines longer than 79 characters from being wrapped - otherwise grep may only pick up the first part of your user names.
The uid tells ldapsearch to only return that attribute and skip all the other attributes we're not interested in; saves some network bandwidth and processing time.
for the sed part
The -r enables extended regular expressions, turning +, (…) into operators; otherwise they would have to be prefixed with a backslash \.
The -n tells sed not to print each line by default, but only when told to do so by an explicit print command p.
The s/^uid: // will strip that prefix from lines containing it.
The s/uid:: …/…/ep part is needed for handling non-ASCII-characters like Umlauts as ldapsearch encodes them using base64.
The suffix …p will print only those lines which were substituted.
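The uid:: branch can be exercised on a canned LDIF line, without any directory. This is a sketch that assumes GNU sed (for the e substitution flag) and uses a made-up name:

```shell
# Encode a name containing an umlaut the way ldapsearch would (uid:: <base64>),
# then run the same sed script to decode it back to plain text.
printf 'uid:: %s\n' "$(printf 'jörg' | base64)" |
    sed -r -n \
        -e 's/^uid: //p' \
        -e 's/^uid:: (.+)/echo "\1"|base64 -d/ep'
```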
for the grep part
The -F tells grep to not use regular expression pattern matching, but plain text; this becomes important if your names contain ., *, parentheses, ….
The -x enforces whole line matching so searching for foo will not also match foobar.
The -f - tells grep to read the patterns from STDIN, that is the existing UIDs found by ldapsearch.
The -v inverts the search and will filter out those existing users.
The $uids will again read the list of all requested users; after removing the existing users, the list of missing users remains.
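The final grep can be tried in isolation with two hypothetical UID lists (no LDAP server needed); here the patterns come from a file rather than - (standard input), but the logic is identical:

```shell
# existing.txt stands in for the UIDs the directory search returned;
# requested.txt stands in for the UIDs listed in the input file.
printf '%s\n' alice bob > existing.txt
printf '%s\n' alice bob carol > requested.txt

# -F fixed strings, -x whole-line match, -v invert, -f read patterns from file:
# prints the requested UIDs missing from the existing set.
grep -F -x -v -f existing.txt requested.txt   # prints: carol
```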
Issues with previous solutions
The previous solutions all had certain kinds of issues:
missing grep
missing quoting which breaks as soon as blanks are involved
useless use of cat
useless use of grep when awk is also used
inefficient because for each UID many processes get fork()ed
inefficient because for each UID a separate search is performed
did not work for long user names
did not work for user names containing regular expression meta characters
did not work for user names containing Umlauts

Related

Not able to search and replace file using perl command in my shell script

There is a group of files in different paths that I want to parse for a password pattern and replace it with a new one.
For example, I have a file pass.lst that contains a password list:
oldpassword1/newpassword1
oldpassword2/newpassword2
oldpassword2/newpassword2
I have one more file, path.lst, that contains a list of file paths:
$Home/test.xml
$Home/demo.sh
etc....
Now I want to make a main script that reads input from both files and does the work.
Here is the script that I am using:
for i in `cat pass.lst`
do
    for j in `cat path.lst`
    do
        perl -p -i -e 's/$i/g' $j
    done
done
But it's giving me this error.
Substitution replacement not terminated at -e line 1.
Any suggestions and help will be appreciated.
#!/usr/bin/env bash
while IFS=/ read -r orig replacement; do
    orig="$orig" replacement="$replacement" \
        xargs -d $'\n' \
        perl -p -i -e 's/$ENV{orig}/$ENV{replacement}/g' \
        <path.lst
done <pass.lst

bash: cURL from a file, increment filename if duplicate exists

I'm trying to curl a list of URLs to aggregate the tabular data on them from a set of 7000+ URLs. The URLs are in a .txt file. My goal was to cURL each line and save them to a local folder after which I would grep and parse out the HTML tables.
Unfortunately, because of the format of the URLs in the file, duplicates exist (e.g. example.com/State/City.html). When I ran a short while loop, I got back fewer than 5500 files, so there are at least 1500 dupes in the list. As a result, I tried to grep the "/State/City.html" section of the URL and pipe it to sed to remove the / and substitute a hyphen, to use with curl -O.
Here's a sample of what I tried:
while read line
do
    FILENAME=$(grep -o -E '\/[A-z]+\/[A-z]+\.htm' | sed 's/^\///' | sed 's/\//-/')
    curl $line -o '$FILENAME'
done < source-url-file.txt
It feels like I'm missing something fairly straightforward. I've scanned the man page because I worried I had confused -o and -O which I used to do a lot.
When I run the loop in the terminal, the output is:
Warning: Failed to create the file State-City.htm
I think you don't need a multitude of seds and greps; a single sed should suffice:
urls=$(echo -e 'example.com/s1/c1.html\nexample.com/s1/c2.html\nexample.com/s1/c1.html')
for u in $urls
do
    FN=$(echo "$u" | sed -E 's/^(.*)\/([^\/]+)\/([^\/]+)$/\2-\3/')
    if [[ ! -f "$FN" ]]
    then
        touch "$FN"
        echo "$FN"
    fi
done
This script should work and also avoids downloading the same file multiple times; just replace the touch command with your curl invocation.
First: you didn't pass the url info to grep.
Second: try this line instead:
FILENAME=$(echo $line | egrep -o '\/[^\/]+\/[^\/]+\.html' | sed 's/^\///' | sed 's/\//-/')
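That pipeline can be sanity-checked with a made-up URL:

```shell
line='http://example.com/State/City.html'
# Extract "/State/City.html", drop the leading slash, and turn the inner
# slash into a hyphen.
FILENAME=$(echo "$line" | egrep -o '\/[^\/]+\/[^\/]+\.html' | sed 's/^\///' | sed 's/\//-/')
echo "$FILENAME"   # State-City.html
```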

bash script grep using variable fails to find result that actually does exist

I have a bash script that iterates over a list of links, curl's down an html page per link, greps for a particular string format (syntax is: CVE-####-####), removes the surrounding html tags (this is a consistent format, no special case handling necessary), searches a changelog file for the resulting string ID, and finally does stuff based on whether the string ID was found or not.
The found string ID is set as a variable. The issue is that when grepping for the variable there are no results, even though I positively know there should be for some of the ID's. Here is the relevant portion of the script:
for link in $(cat links.txt); do
    curl -s "$link" | grep 'CVE-' | sed 's/<[^>]*>//g' | while read cve; do
        echo "$cve"
        grep "$cve" ./changelog.txt
    done
done
If I hardcode a known ID in the grep command, the script finds the ID and returns things as expected. I've tried many variations of grepping on this variable (e.g. exporting it and doing command expansion, cat'ing the changelog and piping to grep, setting variable directly via command expansion of the curl chain, single and double quotes surrounding variables, half a dozen other things).
Am I missing something nuanced with the variable produced by the curl | grep | sed chain? When it is echo'd to stdout or appended to a file with >>, things look fine (a single ID with no odd characters or carriage returns etc.).
Any hints or alternate solutions would be much appreciated. Thanks!
FYI:
OSX:$bash --version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin14)
Edit:
The html file that I was curl'ing was chock full of carriage returns. Running the script with set -x was helpful because it revealed the true string being grepped: $'CVE-2011-2716\r'.
+ read -r link
+ curl -s http://localhost:8080/link1.html
+ sed -n '/CVE-/s/<[^>]*>//gp'
+ read -r cve
+ grep -q -F $'CVE-2011-2716\r' ./kernelChangelog.txt
Also investigating from another angle, opening the curled file in vim showed ^M and doing a printf %s "$cve" | xxd also showed the carriage return hex code 0d appended to the grep'd variable. Relying on 'echo' stdout was a wrong way of diagnosing things. Writing a simple html page with a valid CVE-####-####, but then adding a carriage return (in vim insert mode just type ctrl-v ctrl-m to insert the carriage return) will create a sample file that fails with the original script snippet above.
This is pretty standard string sanitization stuff that I should have figured out. The solution is to remove carriage returns, piping to tr -d '\r' is one method of doing that. I'm not sure there is a specific duplicate on SO for this series of steps, but in any case here is my now working script:
while read -r link; do
    curl -s "$link" | sed -n '/CVE-/s/<[^>]*>//gp' | tr -d '\r' | while read -r cve; do
        if grep -q -F "$cve" ./changelog.txt; then
            echo "FOUND: $cve"
        else
            echo "NOT FOUND: $cve"
        fi
    done
done < links.txt
HTML files can contain carriage returns at the ends of lines; you need to filter those out.
curl -s "$link" | sed -n '/CVE-/s/<[^>]*>//gp' | tr -d '\r' | while read cve; do
Notice that there's no need to use grep; you can use a regular expression filter in the sed command. (You could also strip the carriage returns within sed itself, but doing that for \r is cumbersome, so I piped to tr instead.)
It should look like this:
# First: Care about quoting your variables!
# Use read to read the file line by line
while read -r link ; do
    # No grep required. sed can do that.
    curl -s "$link" | sed -n '/CVE-/s/<[^>]*>//gp' | while read -r cve; do
        echo "$cve"
        # grep -F searches for fixed strings instead of patterns
        grep -F "$cve" ./changelog.txt
    done
done < links.txt
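The carriage-return pitfall is easy to reproduce without curl or real pages; a sketch with a fabricated changelog entry:

```shell
# A plain changelog entry, and the same ID as read from a CRLF-terminated page.
printf 'CVE-2011-2716\n' > changelog.txt
cve=$(printf 'CVE-2011-2716\r')

grep -q -F "$cve" changelog.txt || echo "no match with trailing CR"
cve=$(printf '%s' "$cve" | tr -d '\r')
grep -q -F "$cve" changelog.txt && echo "match after stripping CR"
```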

Bash variables not acting as expected

I have a bash script which parses a file line by line, extracts the date using a cut command and then makes a folder using that date. However, it seems like my variables are not being populated properly. Do I have a syntax issue? Any help or direction to external resources is very appreciated.
#!/bin/bash
ls | grep .mp3 | cut -d '.' -f 1 > filestobemoved
cat filestobemoved | while read line
do
    varYear= $line | cut -d '_' -f 3
    varMonth= $line | cut -d '_' -f 4
    varDay= $line | cut -d '_' -f 5
    echo $varMonth
    mkdir $varMonth'_'$varDay'_'$varYear
    cp ./$line'.mp3' ./$varMonth'_'$varDay'_'$varYear/$line'.mp3'
done
You have many errors and non-recommended practices in your code. Try the following:
for f in *.mp3; do
    f=${f%%.*}
    IFS=_ read _ _ varYear varMonth varDay <<< "$f"
    echo $varMonth
    mkdir -p "${varMonth}_${varDay}_${varYear}"
    cp "$f.mp3" "${varMonth}_${varDay}_${varYear}/$f.mp3"
done
The actual error is that you need to use command substitution. For example, instead of
varYear= $line | cut -d '_' -f 3
you need to use
varYear=$(cut -d '_' -f 3 <<< "$line")
A secondary error there is that $foo | some_command on its own line does not pipe the contents of $foo to the next command as input; instead, the shell expands $foo and runs the result as a command, piping its output to the next one. Also note that varYear= followed by a space does not set the variable for the script: it merely puts an empty varYear into the environment of the command that follows.
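A quick check of the command substitution with a made-up filename following the question's underscore layout (a pipe is used here; the here-string form shown above is equivalent in bash):

```shell
line=show_recording_2013_07_26
varYear=$(printf '%s' "$line" | cut -d '_' -f 3)
varMonth=$(printf '%s' "$line" | cut -d '_' -f 4)
varDay=$(printf '%s' "$line" | cut -d '_' -f 5)
echo "${varMonth}_${varDay}_${varYear}"   # 07_26_2013
```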
Some best practices and tips to take into account:
Use a portable shebang line - #!/usr/bin/env bash (disclaimer: That's my answer).
Don't parse ls output.
Avoid useless uses of cat.
Use More Quotes™
Don't use files for temporary storage if you can use pipes. It is literally orders of magnitude faster, and generally makes for simpler code if you want to do it properly.
If you have to use files for temporary storage, put them in the directory created by mktemp -d. Preferably add a trap to remove the temporary directory cleanly.
There's no need for a var prefix in variables.
grep searches for basic regular expressions by default, so .mp3 matches any single character followed by the literal string mp3. If you want to search for a dot, you need to either use grep -F to search for literal strings or escape the regular expression as \.mp3.
You generally want to use read -r (defined by POSIX) to treat backslashes in the input literally.

grepping string from long text

The command below in OSX checks whether an account is disabled (or not).
I'd like to grep the string "isDisabled=X" to create a report of disabled users, but am not sure how to do this since the output is on three lines, and I'm interested in the first 12 characters of line three:
bash-3.2# pwpolicy -u jdoe -getpolicy
Getting policy for jdoe /LDAPv3/127.0.0.1
isDisabled=0 isAdminUser=1 newPasswordRequired=0 usingHistory=0 canModifyPasswordforSelf=1 usingExpirationDate=0 usingHardExpirationDate=0 requiresAlpha=0 requiresNumeric=0 expirationDateGMT=12/31/69 hardExpireDateGMT=12/31/69 maxMinutesUntilChangePassword=0 maxMinutesUntilDisabled=0 maxMinutesOfNonUse=0 maxFailedLoginAttempts=0 minChars=0 maxChars=0 passwordCannotBeName=0 validAfter=01/01/70 requiresMixedCase=0 requiresSymbol=0 notGuessablePattern=0 isSessionKeyAgent=0 isComputerAccount=0 adminClass=0 adminNoChangePasswords=0 adminNoSetPolicies=0 adminNoCreate=0 adminNoDelete=0 adminNoClearState=0 adminNoPromoteAdmins=0
Your ideas/suggestions are most appreciated! Ultimately this will be part of a Bash script. Thanks.
This is how you would use grep to match "isDisabled=X":
grep -o "isDisabled=."
Explanation:
grep: invoke the grep command
-o: Use the --only-matching option for grep (from the grep manual: "Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line.").
"isDisabled=.": This is the search pattern you give to grep. The . is part of the regular expression, it means "match any character except for newline".
Usage:
This is how you would use it as part of your script:
pwpolicy -u jdoe -getpolicy | grep -oE "isDisabled=."
This is how you can save the result to a variable:
status=$(pwpolicy -u jdoe -getpolicy | grep -oE "isDisabled=.")
If your command was run some time prior, and the results from the command was saved to a file called "results.txt", you use it as input to grep as follows:
grep -o "isDisabled=." results.txt
You can use sed:
sed -n 's/.*isDisabled=\(.\).*/\1/p' results.txt
This will print the value of isDisabled.
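Putting it together in a report-style check, with the pwpolicy output stubbed by a line shaped like the sample above (a real script would substitute pwpolicy -u "$user" -getpolicy for the stub):

```shell
# Stubbed command output; only the isDisabled field matters here.
out='isDisabled=1 isAdminUser=0 newPasswordRequired=0'
status=$(printf '%s\n' "$out" | grep -o "isDisabled=.")

if [ "$status" = "isDisabled=1" ]; then
    echo "account is disabled"
fi
```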
