I have a text file with one LDAP attribute per line. The attribute is 'uid', and it looks like a first and last name (for example, john.carter). I want to write a bash script that compares the uids from the text file against the uids of a specific group of people in LDAP and, where there is a match, prints another attribute for that person. My problem is the comparison itself, i.e. how to match the uids from the text file against those from LDAP. Any suggestions?
This is my ldapsearch:
ldapsearch -H ldap://server -D "uid=name,ou=group,dc=some,dc=domain,dc=com" -w "example" -b "o=organizationName,ou=group,dc=some,dc=domain,dc=com"
I suppose it should be done with a while/do/done loop, so the question is how to check whether the uids from 'o=organizationName,ou=group,dc=some,dc=domain,dc=com' match the uids in the text file, where they are listed one uid per line:
test.test
test1.test1 ...
Depending on how many lines there are in the file, you could collect them into an OR filter like (|(uid=john.carter)(uid=jane.carter)(uid=... and then add the rest of your criteria. The result would look something like this:
(&(|(uid=john.carter)(uid=jane.carter)(uid=john.doe))(memberOf=o=organizationName,ou=group,dc=some,dc=domain,dc=com))
If there are too many, you could create a query for each line, like this:
(&(uid=john.carter)(memberOf=o=organizationName,ou=group,dc=some,dc=domain,dc=com))
(&(uid=jane.carter)(memberOf=o=organizationName,ou=group,dc=some,dc=domain,dc=com))
(&(uid=john.doe)(memberOf=o=organizationName,ou=group,dc=some,dc=domain,dc=com))
Then you would need to run each query, specifying the fields you want returned.
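For the single-filter variant, here is a minimal sketch. The file name uids.txt and the returned attribute mail are assumptions, adjust them to your setup; the bind and base parameters are taken from your ldapsearch above:

#!/bin/bash
# Build (|(uid=a)(uid=b)...) from the text file, one uid per line.
# 'uids.txt' and the requested attribute 'mail' are placeholders.
filter=""
while IFS= read -r uid; do
    filter="${filter}(uid=${uid})"
done < uids.txt

# Search only within the group's subtree and print the wanted attribute.
ldapsearch -H ldap://server \
    -D "uid=name,ou=group,dc=some,dc=domain,dc=com" -w "example" \
    -b "o=organizationName,ou=group,dc=some,dc=domain,dc=com" \
    "(|${filter})" mail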
Working on a bash script, I'm reading a line from a properties file using grep and cut, and have fetched a value into a variable role_portions, something like this:
role_portions=role_1:10,role_2:25,role_3:75,role_4:50,role_5:75,role_6:25,role_7:50
Now, I get a few roles as a CSV input parameter to my bash script, and I want to change those roles' values to 0.
For example, when I run modify_script.sh role_2,role_4,role_7, after reading the above value from the file, the script should output role_1:10,role_2:0,role_3:75,role_4:0,role_5:75,role_6:25,role_7:0. Can someone help with this?
When the role names contain no special characters (like & and /), you can use sed:
for role in role_2 role_4 role_7; do
role_portions=$(sed -r "s/(^|,)(${role}):[^,]*/\1\2:0/" <<< "${role_portions}")
done
Since you are already using grep and cut, you might be able to combine commands (perhaps with awk).
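Putting it all together, a minimal sketch of modify_script.sh, assuming the properties file is called config.properties and contains a line starting with role_portions= (both are placeholders):

#!/bin/bash
# Usage: ./modify_script.sh role_2,role_4,role_7
role_portions=$(grep '^role_portions=' config.properties | cut -d'=' -f2-)

# Split the comma-separated argument into individual role names.
IFS=',' read -r -a roles <<< "$1"
for role in "${roles[@]}"; do
    # Zero out the value of each requested role, as in the sed above.
    role_portions=$(sed -r "s/(^|,)(${role}):[^,]*/\1\2:0/" <<< "${role_portions}")
done
echo "${role_portions}"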
I have a folder with 50 Excel sheets in CSV format. I have to populate a particular value, say "XYZ", in column I of all the sheets in that folder.
I am new to Unix and have looked at a couple of pages Here and Here. Can anyone please provide a sample script to begin with?
For example :
Let's say column C in this case:
A     B      C
ASFD  2535
BDFG  64486
DFGC  336846
I want to update column C to value "XYZ".
Thanks.
I would export those files into CSV format
- with semicolon as field separator
- possibly leaving out the column descriptions (otherwise see the comment below)
Then the following combination of a shell script and sed could more or less do the trick already:
#!/bin/sh
for i in *.csv
do
    # append the value as a new last field on every line
    sed -i -e "s/$/;XYZ/" "$i"
done
-i means to edit the file in place; here the substitution appends the value to the end of every line
-e specifies the regular expression for the substitution
If the CSV files also contain column descriptions, you might want a similar script that writes the column name "C" instead of "XYZ" on the first line only.
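For instance, a hedged variant along those lines, assuming the header for the new column should read C as in the example:

#!/bin/sh
for i in *.csv
do
    # line 1 gets the column name, all other lines get the value
    sed -i -e '1 s/$/;C/' -e '2,$ s/$/;XYZ/' "$i"
done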
Basically, I need to execute a curl command multiple times and redirect the output to a .csv file; each time the command is executed, a term that is used in two separate places in the command changes. I do have a list of these terms (arguments?) in a separate text file. Each time the command runs for a different term, the output needs to be appended to the file.
The command is basically:
curl "http://someURL/standardconditions+AND+(TERM_exact+OR+TERM_related)" > testfile.csv
So each time the command is run, TERM changes in both places (TERM_exact and TERM_related). As I mentioned, I have a text file with a list of all 60 or so terms. What I want is for the script to execute the command using the first term on the list, write the output to the specified .csv file, then repeat with the second term on the list, append that to the file, and so on until it has run for every single term.
I imagine there is a simple way to do this, I'm just not sure how.
Here's one way to do it.
This assumes that listFile.csv is your list of 60 items, and that each line is a comma-separated pair of values (no commas allowed in the values!):
while IFS=, read exact related ; do
curl "http://someURL/standardconditions+AND+(TERM_${exact}+OR+TERM_${related})" >> testfile.csv
done < listFile.csv
It's not clear whether you wanted one output file or several. You could replace the >> testfile.csv with >> testfile.${exact}_${related}.csv to have separate files.
IHTH
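If, as the question describes, the list file actually holds one term per line, used in both places, the loop becomes (terms.txt is a placeholder name):

while IFS= read -r term ; do
    # the same term fills both slots of the URL
    curl "http://someURL/standardconditions+AND+(${term}_exact+OR+${term}_related)" >> testfile.csv
done < terms.txt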
You could set a variable to hold TERM, use string concatenation to build a string like "http://someURL/standardconditions+AND+(TERM_exact+OR+TERM_related)", and run a Python (or other language) script with a loop to handle the 60 terms.
I have a file, on an AIX server, with multiple record entries in the format below:
Name(ABC XYZ) Gender(Male)
AGE(26) BDay(1990-12-09)
My problem is that I want to extract the name and the birthday from the file for all the records. I am trying to list them like this:
ABC XYZ 1990-12-09
Can someone please help me with the scripting?
Something like this maybe:
awk -F"[()]" '/Name/ && /Gender/{name=$2} /BDay/{print name,$4}' file.txt
That says: "Treat opening and closing parentheses as field separators. If you see a line go by that contains Name and Gender, save the second field in the variable name. If you see a line go by that contains BDay, print the most recently saved name, followed by the fourth field on the current line."
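For the two sample lines above, the command should print:

ABC XYZ 1990-12-09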
I have a list of IDs like so:
11002
10995
48981
And a tab delimited file like so:
11002 Bacteria;
10995 Metazoa
I am trying to delete from the tab-delimited file all lines containing one of the IDs from the ID list file. For some reason the following doesn't work and just returns the complete tab-delimited file with no lines removed:
grep -v -f ID_file.txt tabdelimited_file.txt > New_tabdelimited_file.txt
I also tried numerous other combinations with grep, but so far I'm drawing a blank here.
Any idea why this is failing?
Any help would be greatly appreciated.
Since you tagged this with awk, here is one way of doing it:
awk 'BEGIN{FS=OFS="\t"}NR==FNR{ids[$1]++;next}!($1 in ids)' idFile tabFile > new_tabFile
BTW, your grep command is correct. Just double-check that your files aren't formatted for Windows (CRLF line endings).
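If the ID file does have Windows line endings, every pattern ends in an invisible carriage return and matches nothing. One way to check and fix, sketched under that assumption:

# Strip carriage returns (dos2unix would also work), then grep again.
tr -d '\r' < ID_file.txt > ID_file.unix.txt
grep -v -f ID_file.unix.txt tabdelimited_file.txt > New_tabdelimited_file.txt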