bash ldap search - variable as filter

I am wrestling with something I expected to be simple...
I want to look up a user's manager in LDAP, then get the manager's email and SAM account name.
I expected to be able to get the manager's CN from LDAP like this:
manager=$(/usr/bin/ldapsearch -LLL -H ldap://company.ads -x -D admin@company.ads -w password -b ou=employees,dc=company,dc=ads sAMAccountName=employee1 | grep "manager:" | awk '{gsub("manager: ", "");print}' | awk 'BEGIN {FS=","}; {print $1, $2 }' )
that gives me the cn like this:
CN=manager,\ Surname
Now when I run another query like this:
/usr/bin/ldapsearch -LLL -H ldap://company.ads -x -D admin@company.ads -w password -b ou=employees,dc=company,dc=ads $manager
I get bad search filter (-7). If I echo the command, then copy, paste, and run it by hand, I get the record back...
I've tried a number of variations on this; can anyone see what I'm missing?
Thanks.

Since there's a space in $manager, you need to quote it to prevent it from being split into multiple arguments.
/usr/bin/ldapsearch -LLL -H ldap://company.ads -x -D admin@company.ads -w password -b ou=employees,dc=company,dc=ads "$manager"
In general, it's best to always quote your variables, unless you specifically want them to be split into words.
You also need to remove the backslash \ from the LDAP entry. Backslashes escape literal spaces in scripts; they shouldn't appear in the data itself, because they're not processed when a variable is expanded.
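Putting both fixes together, a minimal sketch reusing the placeholder host, credentials, and pipeline from the question (the second awk re-joins the two fields with the comma, and tr -d '\\' strips the escaping backslash from the data):
manager=$(/usr/bin/ldapsearch -LLL -H ldap://company.ads -x -D admin@company.ads -w password \
-b ou=employees,dc=company,dc=ads sAMAccountName=employee1 |
grep "manager:" | awk '{gsub("manager: ", ""); print}' |
awk 'BEGIN {FS=","}; {print $1 "," $2}' |
tr -d '\\') # drop the escaping backslash from the data
/usr/bin/ldapsearch -LLL -H ldap://company.ads -x -D admin@company.ads -w password \
-b ou=employees,dc=company,dc=ads "$manager" # quoted, so the space survives as one argument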

Converting csv format column to comma separated list using bash

I need to convert a column which is in CSV format to a comma-separated list so that I can use a for loop on the list and use each parameter.
Here is what I have tried:
$ gcloud dns managed-zones list --format='csv(name)' --project 'sandbox-001'
Output:
name
dns1
dns2
dns3
dns4
I need a result like "dns1,dns2,dns3,dns4" so that I can use a for loop:
x="dns1, dns2, dns3, dns4"
for i in $x; do
echo "$i"
done
The paste command returns only the last line:
$ gcloud dns managed-zones list --format='csv(name)' --project 'sandbox-001' |paste -sd,
,dns4
I would appreciate if someone can help me with this.
The real problem is apparently that the output has DOS line endings (carriage returns). See Are shell scripts sensitive to encoding and line endings? for a broader discussion, but for the immediate solution, try
tr -s '\015\012' , <file | sed 's/^[^,]*,//;s/,$/\n/'
The arguments to for should just be a list of tokens anyway, with no commas between them. However, a better solution altogether is to use while read instead; see also Don't read lines with for:
gcloud dns managed-zones list --format='csv(name)' \
--project 'sandbox-001' |
tr -d '\015' |
tail -n +2 | # skip header line
while read -r i; do
echo "$i" # notice also quoting
done
I don't have access to gcloud, but its manual page mentions several other formats which might be more suitable for your needs. See if the json or list format might be easier to manipulate. (CSV with a single column is not really CSV anyway, just a text file.)
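For example (untested, since I don't have gcloud here either), the value() format prints bare values with no header line, which would shrink the whole pipeline to something like:
gcloud dns managed-zones list --format='value(name)' --project 'sandbox-001' |
tr -d '\015' | # still strip any DOS carriage returns
paste -sd,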
Create an array:
arr=( $(gcloud dns managed-zones list --format='csv(name)' --project 'sandbox-001') )
Then printf it like so:
printf -v var '%s,' "${arr[@]:1}"
This will create the variable $var with the value 'dns1,dns2,dns3,dns4,'; echo it like this to drop the trailing comma:
echo "${var%,}"
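If the DOS carriage returns mentioned in the other answer show up here too, they will end up inside the array elements, and the unquoted command substitution also relies on word splitting. A variant using mapfile (a sketch, assuming bash 4+) avoids both:
mapfile -t arr < <(gcloud dns managed-zones list --format='csv(name)' --project 'sandbox-001' | tr -d '\015')
printf -v var '%s,' "${arr[@]:1}" # skip the "name" header element
echo "${var%,}"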

Loop to read values from two files and use them as variables in a curl in shell

I need to create a loop so I can use the values listed in two text files as variables in a curl command.
For example, let's say there is a list named destinations.txt that looks like this:
facebook.com
pinterest.com
instagram.com
And there is another file named keys.txt which includes API keys to make calls to each destination, this file looks like:
abcdefghij-123
mnopqrstuv-456
qwertyuiop-789
The idea of this loop is to pull this data so I can run 3 curls, each using the data from its own line. This is an example, where $destination and $key are the values pulled from the txt files:
curl -k 'https://'"$destination"'//api/?type=op&cmd=asdasdf='"$key"
These would be the expected results:
1st round:
curl -k https://facebook.com//api/?type=op&cmd=asdasdf=abcdefghij-123
2nd round:
curl -k https://pinterest.com//api/?type=op&cmd=asdasdf=mnopqrstuv-456
3rd round:
curl -k https://instagram.com//api/?type=op&cmd=asdasdf=qwertyuiop-789
I've tried multiple times with nested while/for loops and paste; however, the results are not as expected, since the data gets duplicated.
You can combine the two files using the paste command, build the URLs with awk, and then call curl iteratively in a for loop:
paste "destinations.txt" "keys.txt" > combined.txt
awk '{ printf "https://%s//api/?type=op&cmd=asdasdf=%s\n", $1, $2 }' combined.txt > allCurls.txt
for crl in $(cat allCurls.txt)
do
curl -k "$crl"
done
If you want to redirect the output of the curl to some file:
count=1
for crl in $(cat allCurls.txt)
do
# -v count=$count passes the shell variable count into awk
filename=$(awk -v count=$count 'NR==count{print $1}' destinations.txt)
filename="${filename}_curl_result.txt"
# redirect the curl output to filename
curl -k "$crl" > "$filename"
count=$((count+1))
done
Try something like this:
while read -r -u 4 destination && read -r -u 5 key; do
curl -k "https://${destination}/blablabal/${key}"
done 4<destinations.txt 5<keys.txt
As long as those two files stay in sync, it should work.
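Applied to the exact URL from the question, it would look like this; note the & in the query string, which is why the URL has to be quoted (otherwise the shell would treat everything after the & as a separate background command):
while read -r -u 4 destination && read -r -u 5 key; do
curl -k "https://${destination}//api/?type=op&cmd=asdasdf=${key}"
done 4<destinations.txt 5<keys.txt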

using curl to call data, and grep to scrub output

I am attempting to call an API for a series of IDs, then use those IDs in a bash script with curl to query a machine for some information, and finally scrub the output for a select few fields.
#!/bin/bash
url="http://<myserver:myport>/ws/v1/history/mapreduce/jobs"
for a in $(cat jobs.txt); do
content="$(curl "$url/$a/counters" "| grep -oP '(FILE_BYTES_READ[^:]+:\d+)|FILE_BYTES_WRITTEN[^:]+:\d+|GC_TIME_MILLIS[^:]+:\d+|CPU_MILLISECONDS[^:]+:\d+|PHYSICAL_MEMORY_BYTES[^:]+:\d+|COMMITTED_HEAP_BYTES[^:]+:\d+'" )"
echo "$content" >> output.txt
done
This is for a MapR project I am currently working on to peel some fields out of the API.
In the example above, I only care about 6 fields, though the output that comes from the curl command gives me about 30 fields and their values, many of which are irrelevant.
If I run the curl command at an interactive prompt, I get the fields I am looking for, but when I add it to the script I get nothing.
The closing quote is in the wrong place: the pipe and the grep have ended up inside the argument passed to curl, so curl treats the whole pipeline as part of the URL. Close the quotes right after the URL and keep the pipeline outside:
content="$(curl "$url/$a/counters" | grep -oP '(FILE_BYTES_READ[^:]+:\d+)|FILE_BYTES_WRITTEN[^:]+:\d+|GC_TIME_MILLIS[^:]+:\d+|CPU_MILLISECONDS[^:]+:\d+|PHYSICAL_MEMORY_BYTES[^:]+:\d+|COMMITTED_HEAP_BYTES[^:]+:\d+')"
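With the quoting fixed, the whole script from the question might look like this (same placeholder URL; a while read loop replaces the for $(cat ...) pattern, and -s silences curl's progress meter):
#!/bin/bash
url="http://<myserver:myport>/ws/v1/history/mapreduce/jobs"
while read -r a; do
curl -s "$url/$a/counters" |
grep -oP 'FILE_BYTES_READ[^:]+:\d+|FILE_BYTES_WRITTEN[^:]+:\d+|GC_TIME_MILLIS[^:]+:\d+|CPU_MILLISECONDS[^:]+:\d+|PHYSICAL_MEMORY_BYTES[^:]+:\d+|COMMITTED_HEAP_BYTES[^:]+:\d+'
done < jobs.txt >> output.txt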

Command-line access to OS X's keychain - How to get e-mail address associated with account

I'm writing a Bash command-line tool that accesses the keychain while emulating a web browser to communicate with web pages.
It's quite straightforward to get the password stored in the keychain from there:
PASSWORD=`security find-internet-password -gs accounts.google.com -w`
But it's a bit more tricky to extract the email address, as the most specific command you get for this returns a lot of information:
$ security find-internet-password -gs accounts.google.com
keychain: "/Users/me/Library/Keychains/login.keychain"
class: "inet"
attributes:
0x00000007 <blob>="accounts.google.com"
0x00000008 <blob>=<NULL>
"acct"<blob>="my-email-address#gmail.com"
"atyp"<blob>="form"
"cdat"<timedate>=0x32303135303333303134333533315A00 "20150330143531Z\000"
"crtr"<uint32>="rimZ"
"cusi"<sint32>=<NULL>
"desc"<blob>=<NULL>
"icmt"<blob>=<NULL>
"invi"<sint32>=<NULL>
"mdat"<timedate>=0x32303135303333303134333533315A00 "20150330143531Z\000"
"nega"<sint32>=<NULL>
"path"<blob>="/ServiceLogin"
"port"<uint32>=0x00000000
"prot"<blob>=<NULL>
"ptcl"<uint32>="htps"
"scrp"<sint32>=<NULL>
"sdmn"<blob>=<NULL>
"srvr"<blob>="accounts.google.com"
"type"<uint32>=<NULL>
password: "my-password"
How would you extract the account e-mail address from the line starting with "acct"<blob>= and store it, say, to a variable called EMAIL?
If you're using multiple grep, cut, sed, and awk statements, you can usually replace them with a single awk.
PASSWORD=$(security find-internet-password -gs accounts.google.com -w)
# -w prints only the password, so capture the attribute listing separately
ENTRY=$(security find-internet-password -s accounts.google.com)
EMAIL=$(awk -F\" '/acct"<blob>/ {print $4}' <<< "$ENTRY")
This may be easier on a single line, but I couldn't get the security command to print out an output like yours in order to test it. Plus, it's a bit long to show on StackOverflow:
EMAIL=$(security find-internet-password -s accounts.google.com | awk -F\" '/acct"<blob>/ {print $4}')
The /acct"<blob>/ is a regular expression. This particular awk command line will filter out lines that match this regular expression. The -F\" divides the output by the field given. In your line, the fields become:
The spaces in the front of the line.
acct
<blob>
my-email-address#gmail.com
A null field
The {print $4} says to print out the fourth field.
By the way, it's usually better to use $(....) instead of backticks in your shell scripts. The $( ... ) form is easier to see, and you can nest subcommands to execute before your main command:
foo=$(ls $(find . -name "*.txt"))
EMAIL=`security find-internet-password -s accounts.google.com | grep acct | cut -d "=" -f 2`
EMAIL="${EMAIL:1:${#EMAIL}-2}" # remove the surrounding quotes
Explanation:
grep acct keeps only the line containing the string "acct"
cut -d "=" -f 2 parses that line based on the separator "=" and keeps the 2nd part, i.e. the part after the "=" sign, which is the e-mail address enclosed within double quotes
EMAIL="${EMAIL:1:${#EMAIL}-2}" removes the first and last characters of that string (the quotes), leaving us with the clean e-mail address we were looking for
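Alternatively, you can split on the double quote itself and let cut do the trimming in one step (field 4 is the address between the quotes):
EMAIL=$(security find-internet-password -s accounts.google.com | grep acct | cut -d '"' -f 4)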

reverse geocoding in bash

I have a GPS unit which extracts longitude and latitude and outputs them as a Google Maps geocoding link:
http://maps.googleapis.com/maps/api/geocode/xml?latlng=51.601154,-0.404765&sensor=false
From this I'd like to call it via curl and display the "short_name" on line 20:
"short_name" : "Northwood",
so I'd just like to be left with
Northwood
so something like:
curl -s 'http://maps.googleapis.com/maps/api/geocode/xml?latlng=51.601154,-0.404765&sensor=false' | sed short_name
Mmmm, this is kind of quick and dirty:
curl -s "http://maps.googleapis.com/maps/api/geocode/json?latlng=40.714224,-73.961452&sensor=false" | grep -B 1 "route" | awk -F'"' '/short_name/ {print $4}'
Bedford Avenue
It looks for the line before the line with "route" in it, then the word "short_name" and then prints the 4th field as detected by using " as the field separator. Really you should use a JSON parser though!
Notes:
This doesn't require you to install anything.
I look for the word "route" in the JSON because you seem to want the road name - you could equally look for anything else you choose.
This isn't a very robust solution as Google may not always give you a route, but I guess other programs/solutions won't work then either!
You can play with my solution by successively removing parts from the right hand end of the pipeline to see what each phase produces.
EDITED
Mmm, you have changed from JSON to XML, I see... well, this parses out what you want, but I note you are now looking for a locality whereas before you were looking for a route or road name? Which do you want?
curl -s "http://maps.googleapis.com/maps/api/geocode/xml?latlng=51.601154,-0.404765&sensor=false" | grep -B1 locality | grep short_name| head -1|sed -e 's/<\/.*//' -e 's/.*>//'
The "grep -B1" looks for the line before the line containing "locality". The "grep short_name" then gets the locality's short name. The "head -1" discards all but the first locality if there are more than one. The "sed" stuff removes the <> XML delimiters.
This isn't plain text; it's structured JSON. You don't want the value after the colon on line 12, you want the value of short_name in the address_component with type 'route' from the result.
You could do this with jsawk or python, but it's easier to get it from XML output with xmlstarlet, which is lighter than python and more available than jsawk. Install xmlstarlet and try:
curl -s 'http://maps.googleapis.com/maps/api/geocode/xml?latlng=40.714224,-73.961452&sensor=false' \
| xmlstarlet sel -t -v '/GeocodeResponse/result/address_component[type="route"]/short_name'
This is much more robust than trying to parse JSON as plaintext.
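If you would rather stay with the JSON endpoint and happen to have jq installed (an assumption; it isn't mentioned in the question), an equivalent query would be something like:
curl -s 'http://maps.googleapis.com/maps/api/geocode/json?latlng=40.714224,-73.961452&sensor=false' | jq -r '.results[0].address_components[] | select(.types | index("route")) | .short_name'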
The following seems to work, assuming you always want the short_name at line 12:
curl -s 'http://maps.googleapis.com/maps/api/geocode/json?latlng=40.714224,-73.961452&sensor=false' | sed -n -e '12s/^.*: "\([a-zA-Z ]*\)",/\1/p'
or, if you are using the XML API and want to grab the short_name on line 20:
curl -s 'http://maps.googleapis.com/maps/api/geocode/xml?latlng=51.601154,-0.404765&sensor=false' | sed -n -e '20s/.*<short_name>\([a-zA-Z ]*\)<\/short_name>.*/\1/p'
