Ansible group module

In the docs there is an option for a system group, described only as:
If yes, indicates that the group created is a system group.
What exactly is a system group? I couldn't find this detail anywhere.

“System groups” are usually lower-numbered than non-system groups, commonly 0-99. There’s a little relevant info in the groupadd(8) man page:
-r, --system
Create a system group.
The numeric identifiers of new system groups are chosen in the
SYS_GID_MIN-SYS_GID_MAX range, defined in login.defs, instead of
GID_MIN-GID_MAX.
Example groups are http, dbus, wheel, mail.
More details in this Q/A.
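As an illustration, here is a minimal Python sketch that classifies local groups by GID, using the bounds groupadd reads from login.defs. Treating everything below GID_MIN as a system group is a coarse heuristic (it covers both the statically allocated 0-99 range and the SYS_GID_MIN-SYS_GID_MAX range), and the fallback of 1000 is just a common default, not a guarantee for every distro.

# Sketch: list "system" groups, i.e. groups whose GID falls below GID_MIN.
# Assumes a shadow-utils style /etc/login.defs; 1000 is a common default.
import grp

def read_login_defs(path="/etc/login.defs"):
    values = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and not parts[0].startswith("#"):
                values[parts[0]] = parts[1]
    return values

gid_min = int(read_login_defs().get("GID_MIN", 1000))
print(sorted(g.gr_name for g in grp.getgrall() if g.gr_gid < gid_min))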

Related

What is `kern.num_files` in `sysctl`?

I'm trying to find out the total number of currently open file descriptors (or any file-related kernel objects) on the current OS.
My current bet is sysctl(3), and I think kern.num_files does the job. But I'm not actually sure what it means, and I can't find any manual page entry or standard spec for kern.num_files. This makes me nervous.
kern.num_files is listed in man 3 sysctl, but only the names are listed; it says nothing about what the value actually means.
The command-line sysctl -a lists and reports some value for kern.num_files.
sysctl.h does not define a name that looks like kern.num_files, although it does contain names like KERN_FILE that are considered private/deprecated.
Is this actually the count of system-wide open FDs? Where can I find any spec for this?
If kern.num_files is not the number, what is the recommended way to get the total open FD count?
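Not an answer to the semantics, but for reading the value programmatically, here is a minimal ctypes sketch. It assumes a BSD-like libc that exposes sysctlbyname(3) (e.g. on macOS) and that this OID is a plain int; neither is guaranteed everywhere.

# Sketch: read kern.num_files via sysctlbyname(3) through ctypes.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.sysctlbyname.argtypes = [ctypes.c_char_p, ctypes.c_void_p,
                              ctypes.POINTER(ctypes.c_size_t),
                              ctypes.c_void_p, ctypes.c_size_t]

val = ctypes.c_int(0)
size = ctypes.c_size_t(ctypes.sizeof(val))
if libc.sysctlbyname(b"kern.num_files", ctypes.byref(val),
                     ctypes.byref(size), None, 0) != 0:
    raise OSError(ctypes.get_errno(), "sysctlbyname(kern.num_files) failed")
print("kern.num_files =", val.value)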

Ansible -- approach to hierarchical management of variables

This is a common case, but it doesn't seem straightforward in Ansible.
Let's assume a hierarchy of groups:
linux-hosts:
  application-hosts:
    foobar-application-hosts:
      foobar01
Now for each of these groups we want to define a set of cron jobs.
For linux-hosts, jobs that run on all linux hosts.
For application-hosts, jobs that run on only application hosts.
For foobar-application-hosts, jobs that run on only foobar-application-hosts.
The variable name is cronjobs, say, and it's a list of cron module settings.
By default, the foobar-application-hosts would clobber the setting for anything above it. Not good.
I don't see an easy way to merge (on a specific level). So I thought, all right, perhaps Ansible exposes the individual group variables for the groups a host belongs to during a run. There is groups, and there is group_names, but I don't see a groupvars corresponding to hostvars.
This seems to imply I must resort to some mix-and-match of cycling over groups, dynamically importing vars (if possible), and doing the merge myself. Perhaps putting some of this in a role. But this feels like such a hack. Is there another approach?
A group in the Ansible sense is a "tag" on hosts, and hosts can belong to more than one group. So the cronjobs var should be a list, with the same length as the number of groups the host is in.
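If you do end up doing the merge yourself, one natural home for it is a small filter plugin. The sketch below is only illustrative: merge_group_lists is a made-up name, and the groupvars mapping (group name -> that group's vars) is something you would have to assemble yourself, e.g. by loading the group_vars files with include_vars, since Ansible exposes no such structure directly.

# Sketch of a custom filter plugin (e.g. filter_plugins/group_merge.py).
# Concatenates a list variable such as "cronjobs" across all of a host's
# groups. "groupvars" is a hypothetical dict you must build yourself.
def merge_group_lists(group_names, groupvars, key):
    merged = []
    for group in group_names:
        merged.extend(groupvars.get(group, {}).get(key, []))
    return merged

class FilterModule(object):
    def filters(self):
        return {"merge_group_lists": merge_group_lists}

A host in linux-hosts, application-hosts, and foobar-application-hosts would then get the concatenation of all three cronjobs lists instead of the most specific group clobbering the rest.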

Mask value not shown in GETFACL using webhdfs

In Hadoop, I have enabled authorization and have set a few ACLs for a directory.
When I execute the getfacl command via the hadoop binary, I can see the mask value in the output:
hadoop fs -getfacl /Kumar
# file: /Kumar
# owner: Kumar
# group: Hadoop
user::rwx
user:Babu:rwx
group::r-x
mask::rwx
other::r-x
If I run the same query using WebHDFS, the mask value is not shown:
http://localhost:50070/webhdfs/v1/Kumar?op=GETACLSTATUS
{
  "AclStatus": {
    "entries": [
      "user:Babu:rwx",
      "group::r-x"
    ],
    "group": "Hadoop",
    "owner": "Kumar",
    "permission": "775",
    "stickyBit": false
  }
}
What is the reason the mask value is not shown by WebHDFS for the GETFACL command?
Help me to figure it out.
HDFS implements the POSIX ACL model. The linked documentation explains that the mask entry is persisted into the group permission bits of the classic POSIX permission model. This is done to support the requirements of POSIX ACLs and also support backwards-compatibility with existing tools like chmod, which are unaware of the extended ACL entries. Quoting that document:
In minimal ACLs, the group class permissions are identical to the
owning group permissions. In extended ACLs, the group class may
contain entries for additional users or groups. This results in a
problem: some of these additional entries may contain permissions that
are not contained in the owning group entry, so the owning group entry
permissions may differ from the group class permissions.
This problem is solved by the virtue of the mask entry. With minimal
ACLs, the group class permissions map to the owning group entry
permissions. With extended ACLs, the group class permissions map to
the mask entry permissions, whereas the owning group entry still
defines the owning group permissions.
...
When an application changes any of the owner, group, or other class
permissions (e.g., via the chmod command), the corresponding ACL entry
changes as well. Likewise, when an application changes the permissions
of an ACL entry that maps to one of the user classes, the permissions
of the class change.
This is relevant to your question, because it means the mask is not in fact persisted as an extended ACL entry. Instead, it's in the permission bits. When querying WebHDFS, you've made a "raw" API call to retrieve information about the ACL. When running getfacl, you've run an application that layers additional display logic on top of that API call. getfacl is aware that for a file with an ACL, the group permission bits are interpreted as the mask, and so it displays accordingly.
This is not specific to WebHDFS. If an application were to call getAclStatus through the NameNode's RPC protocol, then it would see the equivalent of the WebHDFS response. Also, if you were to use the getfacl command on a webhdfs:// URI, then the command would still display the mask, because the application knows to apply that logic regardless of the FileSystem implementation.
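To make the display logic concrete, here is a minimal sketch (reusing the URL and path from the question) of what getfacl effectively does: when extended entries are present, it interprets the group digit of the permission field as the mask.

# Sketch: reconstruct the mask entry from a GETACLSTATUS response the way
# getfacl does, by decoding the group digit of the "permission" field.
import json
from urllib.request import urlopen

url = "http://localhost:50070/webhdfs/v1/Kumar?op=GETACLSTATUS"
status = json.load(urlopen(url))["AclStatus"]

if status["entries"]:  # extended ACL present: the group bits carry the mask
    group_digit = int(status["permission"][-2])
    mask = "".join(ch if group_digit & bit else "-"
                   for ch, bit in (("r", 4), ("w", 2), ("x", 1)))
    print("mask::" + mask)  # "775" -> mask::rwx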

Modify the default WorkManager in WebSphere 7 using a wsadmin script

I want to raise the maximum number of threads in the default work manager's thread pool using a wsadmin (Jython) script. What is the best approach?
I can't seem to find documentation of a fine-grained control that would let me modify just this property. The closest I can find to what I want is AdminTask.applyConfigProperties, which requires passing a file. The documentation explains that if you want to modify an existing property, you must extract the existing properties file, edit it in an editor, and then pass the edited file to applyConfigProperties.
I want to avoid the manual step of extracting the existing properties file and editing it. The script needs to run completely unattended. In fact, I'd prefer not to use a file at all, but just set the property to a value directly in the script.
Something like the following pseudo-code:
defaultwmId = AdminConfig.getid("wm/default")
AdminTask.setProperty(defaultwmId, ['-propertyName', 'maxThreads', '-propertyValue', 20])
The following represents a fairly simplistic wsadmin approach to updating the max threads on the default work managers:
# Retrieve the default work manager config objects at every scope
workManagers = AdminConfig.getid("/WorkManagerInfo:DefaultWorkManager/").splitlines()
for workManager in workManagers:
    AdminConfig.modify(workManager, '[[maxThreads "20"]]')  # raise the pool maximum
AdminConfig.save()  # persist the configuration change
Note that the first line will retrieve all of the default work managers across all scopes, so if you want to choose only one (for example, if you only want to modify a particular application server's or cluster's work manager properties), you will need to refine the containment path further. Also, you may need to synchronize the nodes and restart the modified servers in order for the property to be applied at runtime.
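For example, to narrow the change to a single server's work manager (the cell, node, and server names below are placeholders for your own topology):

# Hypothetical scoped variant: modify only one server's default work manager.
wm = AdminConfig.getid("/Cell:myCell/Node:myNode/Server:server1/WorkManagerInfo:DefaultWorkManager/")
AdminConfig.modify(wm, '[[maxThreads "20"]]')
AdminConfig.save()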
More information on the use of the AdminConfig scripting object can be found in the WAS InfoCenter:
http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.nd.doc/info/ae/ae/rxml_adminconfig1.html

Unix UIDs vs Windows SIDs - why?

From what I've read, UIDs in Unix are assigned by the administrator, while SIDs in Windows are random. Is there a security reason behind this, or are they just different ways of solving the identification problem?
Thanks
While you may edit /etc/passwd (and /etc/shadow) by hand on a Unix machine, the standard way to add users is through a useradd utility (or similar) which should automatically assign the next available UID. So they should be assigned automatically rather than by the administrator. SIDs are more complicated (i.e. hierarchical) so assigning them by hand would be even more cumbersome (and besides, you cannot update the SAM database by hand anyway).
As to assigning them randomly: the random part of a SID is the machine SID, which gives SIDs the advantage of being unambiguous (as opposed to Unix UIDs). For example, if MACHINE1 has a local user ALICE and an NTFS volume with some files owned by MACHINE1\ALICE, then when you plug this volume into MACHINE2, it won't make the mistake of thinking those files are owned by some local MACHINE2 user which just happens to have the same SID (whether named ALICE or otherwise).
On Unix, if alice had UID 501 on MACHINE1, and you plug the same volume into MACHINE2 where UID 501 belongs to bob, ls will show the files as belonging to bob (rather than to alice, or even to an "unknown UID").
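To make the "hierarchical" point concrete: a SID string decomposes into a revision, an identifier authority, and a chain of sub-authorities ending in the per-account RID. A minimal sketch (the example SID is made up):

# Sketch: split a SID string into its hierarchical parts.
# Format: S-<revision>-<authority>-<subauthority>...-<RID>; SID below is made up.
sid = "S-1-5-21-3623811015-3361044348-30300820-1013"
parts = sid.split("-")
revision, authority = parts[1], parts[2]
subauthorities = parts[3:-1]  # 21 plus the machine/domain-specific part
rid = parts[-1]               # relative ID of the individual account
print(revision, authority, subauthorities, rid)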
UUIDs and SIDs are essentially the same thing.
They're a combination of a system-specific part and a timestamp, generated according to a specific algorithm (which might differ between implementations, but that's irrelevant).
Essentially, they're both semi-random. Maybe some Unix admins are convinced there's some "security" reason for not handing them out or whatever, but that's nonsense.
The Windows SID is a globally unique identifier, whereas the Unix UID is not globally unique.
