I am attempting to use LDIFDE to export users, groups, and OUs for importing into a different domain. I'm at the stage where I'm importing users, and I've noticed a problem.
I utilized this command to export users from my domain controller:
ldifde -f Exportuser.ldf -p subtree -r "(&(objectCategory=person)(objectClass=User)(givenname=*))" -l "cn,givenName,objectclass,samAccountName"
This is the bare minimum required, because if you run the command without the -l attribute list, you end up with data that won't import and that halts the import.
Anyway, the command exports users just fine (that is, it runs), and they even import with ldifde -i -f Exportuser.ldf with no problem. However, while importing the group membership associations I got some errors that gave me cause to look closer at what I'd created. To wit: the command above skipped quite a few users during the export. These range from system accounts to actual active users, though the hardest hit was an OU where we keep deactivated accounts. Not a single warning or error is raised, either.
I can't seem to find anything special that these various users have in common, but I'm still looking. In the meantime, I thought I'd check whether anyone here has seen LDIFDE simply skip users like this.
Thanks,
M.
Solution:
The filter clause (givenname=*) matches only entries that have a value in the givenName attribute, which holds the First Name field of the user account (not the display name). Therefore, any user whose First Name field is empty is silently filtered out of the export.
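If the goal is to export every user regardless of whether the first name is set, dropping that clause from the filter should do it. A minimal sketch, with the same attribute list as above:

ldifde -f Exportuser.ldf -p subtree -r "(&(objectCategory=person)(objectClass=User))" -l "cn,givenName,objectclass,samAccountName"

Users without a givenName are then simply exported without that attribute.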
M.
Goal
I want to use RRDTool to count logical "user activity" from our web application's Apache/Tomcat access logs.
Specifically, we want to count, for a given period, occurrences of several URL patterns.
Example
We have two applications (call them 'foo' and 'bar').
These URLs interest us; they indicate when users 'did interesting stuff':
/foo/hop
/foo/skip
/foo/jump
/bar/crawl
/bar/walk
/bar/run
Basically, we want to know, for a given interval (10 minutes, an hour, a day, etc.), how many users hopped, skipped, jumped, crawled, walked, and so on.
Reference/Starting point
This article on importing access logs into RRDTool seemed like a helpful starting point.
http://neidetcher.com/programming/2014/05/13/just-enough-rrdtool.html
To clarify, however: that example feeds the access log in directly, whereas we want to sort a handful of URLs into 'buckets' and count the number in each bucket.
Some Scripting Required...
I could do this with bash, grep, and wc, iterating through the patterns and sending output to an 'intermediate results' text file. That said, I believe RRDTool could do this with minimal 'outside coding', but I am unclear on the details.
Some points
I mention 'two applications' because we actually serve them from separate servers with different log file formats. I'd like to get them into the same RRD file.
Eventually I'd like to report this in Cacti; initially, however, I want to understand the RRDTool details.
I'm open to doing any coding, but would like to keep it as efficient as possible, both administratively and in terms of computer resources. (By 'administratively' I mean: easy to monitor new instances.)
I am very new to RRDTool and am RTFM'ing (and walking through the tutorial). I'm used to relational databases, spreadsheets, and so on, and don't yet have my head around all the nuances of the RRA format.
Thanks in advance!
You could set up a separate RRD file with ABSOLUTE-type data sources for each URL you want to track.
Then you tail the log file, and whenever you see one of the interesting URLs go by, you call:
rrdtool update url-xyz.rrd N:1
The ABSOLUTE data source type is like a counter, but it gets reset every time it is read. Your counter will just count to one, but that should not be a problem.
In the example above I am using N: rather than the timestamp from the access log. You could also use the log's timestamp if you are not doing this in real time, but beware that you cannot update the same RRD file twice with the same timestamp. N: uses millisecond timestamps internally and thus probably avoids this problem.
On the other hand it may make more sense to accumulate matching log entries with the same timestamp and only update rrdtool with that number once the timestamp on the logfile changes.
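To make the moving parts concrete, here is a minimal sketch, assuming a combined access log at /var/log/apache2/access.log and one RRD per URL pattern (the file name, the 10-minute step, and the one-week retention are illustrative, not prescribed):

# one data source, 10-minute step, one week of 10-minute averages
rrdtool create foo-hop.rrd --step 600 DS:hits:ABSOLUTE:1200:0:U RRA:AVERAGE:0.5:1:1008

# follow the log and fire an update for each matching request
tail -F /var/log/apache2/access.log | while read -r line; do
  case "$line" in
    *"GET /foo/hop"*) rrdtool update foo-hop.rrd N:1 ;;
  esac
done

Note that ABSOLUTE stores a rate (the accumulated count divided by the seconds since the last update), so to graph 'hops per interval' you would multiply back by the interval length at graph time.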
I am working on an Ubuntu server with 10 users at any point in time. We usually keep our code there and use the server to make builds. A build usually takes 30 to 50 minutes, depending on the concurrency. The build command is make -jX, where X can be anything from 1 to 24.
My problem starts when many users issue make commands with high X values. Is there any way to block these commands or to impose a limit?
For example, if someone runs make -jX with X>4, I should be able to override the command to make -j4.
I know one way is to use an alias, but I have no idea how to handle argument values through an alias (something like alias ll='ls -la' in a .bashrc file is fine, but how would ll -lha be handled through .bashrc?).
Also, is there any way to make the alias work for all users without editing every user's .bashrc?
Thanks in advance.
Although limiting the parameters of a particular command (in this case make) in a way that cannot be circumvented is generally hard, you can configure system-level per-user process limits using /etc/security/limits.conf.
If you open this file on your system, you will see in the comments that you can limit various user resources, such as nproc and memory. If you play with these limits, you should be able to get reasonably fair resource sharing among your developers.
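For a concrete sketch, assuming your developers are all in a group called developers (the group name and the numbers are illustrative, not recommendations):

# /etc/security/limits.conf
# cap the number of processes each developer can run
@developers  hard  nproc  64
# cap address space per process, in KB (roughly 4 GB here)
@developers  hard  as     4194304

As for the alias part of the question: a shell function dropped into /etc/profile.d/ is sourced by every user's login shell, so you would not have to edit individual .bashrc files. A minimal bash sketch that clamps -jX down to -j4 (it only handles the attached -jX spelling, and a user can still bypass it with command make or the binary's full path):

# /etc/profile.d/make-limit.sh (hypothetical file name)
make() {
    local args=() arg j
    for arg in "$@"; do
        if [[ "$arg" == -j* ]]; then
            j="${arg#-j}"
            # clamp any numeric value above 4 down to 4
            if [[ "$j" =~ ^[0-9]+$ ]] && (( j > 4 )); then
                arg="-j4"
            fi
        fi
        args+=("$arg")
    done
    command make "${args[@]}"
}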
I have made a little function that deletes files based on date. Before doing the deletions, it lets the user choose how many days/months back to delete files, telling them how many files it would remove and how much disk space it would free up.
It worked great in my test environment, but when I attempted to test it on a larger directory (approximately 100K files), it hangs.
I’ve stripped everything else from my code to ensure that it is the get_dir_file_info() function that is causing the issue.
$this->load->helper('file');
$folder = "iPad/images/";
set_time_limit(0); // no PHP execution time limit
echo "working<br />";
$dirListArray = get_dir_file_info($folder); // never returns on ~100K files
echo "still working";
When I run this, the page loads for approximately 60 seconds, then displays only the first message “working” and not the following message “still working”.
It doesn’t seem to be a system/PHP memory problem, as the page comes back after 60 seconds, and the server respects my set_time_limit(), as I’ve had to use that for other processes.
Is there some other memory/time limit I might be hitting that I need to adjust?
From the CI user guide, get_dir_file_info() does the following:
Reads the specified directory and builds an array containing the filenames, filesize, dates, and permissions. Sub-folders contained within the specified path are only read if forced by sending the second parameter, $top_level_only, to FALSE, as this can be an intensive operation.
So if you have 100K files, the best approach is to cut the work into two steps:
First: use get_filenames('path/to/directory/') to retrieve all your file names without their information.
Second: use get_file_info('path/to/file', $file_information) to retrieve a specific file's info only when you actually need it, as you might not need all the file information immediately; it can be done on a file-name click or something similar.
The idea here is not to force your server to deal with a large amount of processing while in production; that would kill two things: responsiveness and performance.
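As a rough sketch of that two-step idea applied to the original date-based cleanup (a CodeIgniter controller context is assumed; the folder and the 30-day cutoff are illustrative, and only the modification time and size are fetched per file):

$this->load->helper('file');
$folder = 'iPad/images/';

// Step 1: names only, far cheaper than get_dir_file_info()
$files = get_filenames($folder, TRUE); // TRUE = return full paths

$cutoff = strtotime('-30 days');
$count  = 0;
$bytes  = 0;

foreach ($files as $path) {
    // Step 2: per-file checks, limited to the two fields we need
    if (filemtime($path) < $cutoff) {
        $count++;
        $bytes += filesize($path);
    }
}

echo $count . ' files, ' . $bytes . ' bytes would be freed';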
I want to write a program that allows or blocks a process's attempt to open a file, depending on a policy.
I could implement a check based on the name of the program. However, that would not be enough, because the user can change the name to get around the policy. For example, say the policy doesn't allow a.exe to access .txt files, whereas b.exe is allowed; if the user renames a.exe to b.exe, I cannot block it.
On the other hand, verifying the portable executable's signature is not enough for me, because I don't care whether the executable is signed or not. I just want to identify the executable being run, even if its name has been changed.
For this type of case, what would you propose? Any solutions are welcome.
Thanks in advance
There are many ways to identify an executable file. Here is a simple list:
Name:
The most simple and straightforward approach is to identify a file by its name. But it is one of the easiest things to change, and you already ruled that out.
Date:
Files have access, creation, and modification dates, managed by the operating system. They are not foolproof, and maybe not even accurate. They are also very simple to change.
Version Information:
Since we are talking about executable files, most of them have version information attached: original file name, file version, product version, company, description, etc. You can check these fields if you are sure the user cannot modify them by editing your executable. This doesn't require you to keep a database of allowed files, but it does require something to compare against, like a company name or a product name. Also, if someone made an executable with the same values, it could run instead of the allowed one and bypass your protection.
Location:
If the file is located in a specific place, protected by file system access rights so that it cannot be changed, then you can use that. You can, for example, put the allowed files in a folder where the user (without admin rights) can only read/execute them, but not rename/move them. Then identify the file by its location: if it is run from that location, allow it; otherwise, block it. This is good because it doesn't need a database of allowed/blocked files; it just compares the location, and if it is a valid one, the file is allowed. You can keep adding and removing files in the allowed locations without affecting your program.
Size:
If the file has a specific size, you can quickly check and compare it. But this is unreliable, as a file can be changed/patched without any change in size. You can mitigate that by also applying a CRC check to detect whether the content of the file has changed. But both the size and the CRC can be forged, and this approach requires you to keep a list of file names and their sizes/CRCs up to date.
Signature:
Deanna mentioned that you can self-sign your executable files, then check whether the signature matches yours and allow/deny based on that. This seems to be a good way to go if it is okay for you to sign all the executable files you want to allow, and it doesn't require you to keep an updated list of allowed files.
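As a minimal sketch with Windows' signtool (assuming a code-signing certificate in mycert.pfx; note that for a self-signed certificate, verification only succeeds once its root has been installed in the machine's trusted root store):

signtool sign /f mycert.pfx /p <password> /fd SHA256 b.exe
signtool verify /pa b.exe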
Hash:
arx also pointed out that you can hash the files. It is one of the slowest methods, as the file must be hashed every time it is executed and then compared against a list of hashes, but it is very reliable, since a hash uniquely identifies each file and is hard to defeat. You will, however, need to keep an up-to-date database of every file's hash.
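For a quick feel of the hash approach from the command line (certutil ships with Windows; the path and algorithm here are illustrative):

certutil -hashfile C:\apps\b.exe SHA256

Your program would compute the same SHA-256 when the file is opened and allow it only if the digest appears in the stored allowlist.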
Finally, depending on your needs and options, you can mix two or more of these together to get the result you want, like checking file name + location, etc.
I hope I covered most of the options, but I'm sure there are more ways. Anyone can freely edit my post to include anything I have missed.
I would recommend using the signature, if it has one, or the hash otherwise. Apps such as Office that update frequently are more likely to be signed, whereas smaller apps downloaded off the Internet are unlikely to ever be updated and so should have a consistent hash.
OK, I have tried to Google this and keep running into things that are close, but not quite there. I mess with them for a few hours and can't bridge the gap to what I need.
Requirements: read a list of computer names and add them to specific OUs.
The list can be formatted however I like; right now I have it as a CSV.
/////////
Comp1,Computers,cold,Alaska,mydomain,com,
Comp2,servers,New Jersey,test,temp,training,Room3,trainers,mydomain,com,
Comp3,computers,New Jersey,test,temp,training,Room3,students,restricted,mydomain,com
Comp4,computers,New Jersey,test,temp,training,Room3,students,power users,mydomain,com
////////
As you can see, the OU portion is not the same for all the machines.
I tried using a VBScript, but all I would get is "unable to connect to LDAP", so I was thinking about storing the lines in an array and using dsadd, building the command line from the variables in the array.
I already have the portion written to browse for the file, and dsquery, dsadd, etc are all on the server that this will be run from.
This is probably a lot easier than I am trying to make it, I tend to over complicate things if I don't finish it right away.
Look at this:
Automating the creation of computer accounts
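In the meantime, a rough sketch of the dsadd idea from the question, assuming each CSV row is the computer name followed by its OU chain (leaf first), with the last two fields being the domain components; under that reading, the first sample row maps to:

dsadd computer "CN=Comp1,OU=Computers,OU=cold,OU=Alaska,DC=mydomain,DC=com"

A script can build the distinguished name by prefixing OU= to each middle field and DC= to the last two, which also copes with rows of different OU depths.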