How can I use fsecurity apply on a NetApp filer to reset NTFS permissions? (ONTAP 7-MODE)

I have a NetApp filer, with a CIFS export. The permissions have been locked down on it, to a point where it's no longer accessible.
I need to reset the permissions on this - I've figured out I can probably do this by changing the qtree to Unix security mode and back again (provided I'm prepared to unexport the share temporarily).
However, I think I should be able to use the fsecurity command to do this. There's just one problem - the manpage example refers to 'applying ACLs from a config file':
https://library.netapp.com/ecmdocs/ECMP1196890/html/man1/na_fsecurity_apply.1.html
But what it doesn't do is give me an example of what a 'security definition file' actually looks like.
Is anyone able to give me an example? Resetting a directory structure to Everyone/Full Control is sufficient for my needs, as re-applying permissions isn't a problem.

Create a conf file containing the following:
cb56f6f4
1,0,"/vol/vol_name/qtree_name/subdir",0,"D:P(A;CIOI;0x1f01ff;;;Everyone)"
Save it somewhere on your filer (the example in the manpage uses /etc/security.conf).
Run:
fsecurity show /vol/vol_name/qtree_name/subdir
fsecurity apply /etc/security.conf
fsecurity show /vol/vol_name/qtree_name/subdir
This will set Everyone / Full Control as an inheritable ACE, which is a massive security hole, so you should now IMMEDIATELY go and fix the permissions on that directory structure to something a little more sensible.
You can create more detailed ACLs using the 'secedit' utility, available from NetApp's support site, but this one did what I needed it to.
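For reference, here is a small sketch of my own (not from the man page) that assembles that conf line and spells out what the SDDL pieces mean; the mask and flag values are standard Windows SDDL, so verify them against your environment before relying on the comments:

# Sketch only: build the fsecurity conf file shown above and annotate the SDDL.
path = "/vol/vol_name/qtree_name/subdir"   # target path from the example

# D:   the DACL portion of the security descriptor
# P    protected: block ACEs inherited from the parent
# (A;CIOI;0x1f01ff;;;Everyone)
#   A        access-allowed ACE
#   CI OI    container-inherit + object-inherit (applies to subdirs and files)
#   0x1f01ff FILE_ALL_ACCESS, i.e. Full Control
#   Everyone the trustee
sddl = 'D:P(A;CIOI;0x1f01ff;;;Everyone)'

# A tighter descriptor to re-apply afterwards might look like this
# (hypothetical; BA = BUILTIN\Administrators, AU = Authenticated Users,
# 0x1301bf = Modify -- double-check before use):
# sddl = 'D:P(A;CIOI;0x1f01ff;;;BA)(A;CIOI;0x1301bf;;;AU)'

with open("security.conf", "w") as conf:   # copy the result to /etc on the filer
    conf.write("cb56f6f4\n")               # header line from the example above
    conf.write('1,0,"{0}",0,"{1}"\n'.format(path, sddl))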

Related

SELinux init_daemon_domain(avahi_t,avahi_exec_t) vs. files_type(avahi_t)

I am running into a problem with labeling. In order to lock down access to the file /etc/avahi/avahi-daemon.conf, I decided to label it as part of the avahi_t domain.
I am working on an embedded system. When I boot the system after a version update, the file system is relabeled with the .autorelabel flag set.
Unfortunately the file /etc/avahi/avahi-daemon.conf remains in the unlabeled_t type. Because the label is wrong, the daemon is unable to read the file and avahi fails to initialize properly, with an AVC read denial on an unlabeled_t file. I want the label to be set correctly rather than modify the policy to allow reading an unlabeled file. I also want the file to be protected so the configuration cannot be modified.
I have properly labeled it in the .fc file with the following:
/etc/avahi/avahi-daemon.conf -- gen_context(system_u:object_r:avahi_t,s0)
When I try a restorecon on the file system, it attempts to relabel the file but is blocked by SELinux with a relabelto AVC violation. Similarly, changing it with chcon -t fails. I do not wish to open up relabelto on an embedded system, as the file could then be relabeled and take down the avahi initialization. If I take out the SD card, relabel the file on a different system, and place it back into the target system, it is properly labeled and avahi operates correctly. So I am certain that the labeling is what is causing the problem.
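For reference, this is roughly how I compare the label on disk with what the loaded policy expects (a sketch using the libselinux Python bindings; the path is from my setup and the bindings may not exist on every embedded image):

import os
import selinux  # libselinux Python bindings

path = "/etc/avahi/avahi-daemon.conf"

rc, on_disk = selinux.lgetfilecon(path)                             # label currently on the file
rc, expected = selinux.matchpathcon(path, os.lstat(path).st_mode)   # label the policy wants

print("on disk: ", on_disk)    # shows ...unlabeled_t... after the version update
print("expected:", expected)   # resolves from the .fc entry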
In looking in the reference policy an init_daemon_domain(avahi_t,avahi_exec_t) is being performed.
In looking at the documentation for init_daemon_domain() it states the following:
"The types will be made usable as a domain and file, making calls to domain_type() and files_type() redundant."
This is unusual in that if I add files_type(avahi_t) to the .te file, the file is labeled properly after a version update.
I would really like to know more about this, but unfortunately my searches on the internet have been less than fruitful.
Is the documentation for SELinux wrong? Am I missing something about init_daemon_domain() in that it only works with processes and not files?
Or is the files_type(avahi_t) truly needed?
I know this comes off as a trivial issue since there is a path where it works. However, I am hoping to get an explanation as to why files_type(avahi_t) is necessary.
Thanks

Preserving File Ownership on a Win7 Share

I am trying to set up a "dropbox" on a Win7 workstation that we will use to process simulation jobs. My plan was to pull the ownership from the file (with a simple dir /q "filename") so I can use the owner information during the simulation (to send them an email when it's done, for example).
The problem I have is that when a user drops a simulation file on the share I set up, the ownership is set to BUILTIN\Administrators. I have tried tweaking the share settings, but so far nothing seems to work.
I do have a workaround where users can embed their email address in the simulation file and I could pull that, but I'm trying to make it easier, as I know some users will forget to do that... Any ideas how to preserve the ownership information?
Quite possibly, you could embed the owner's email as an alternate data stream in the file. Read this link here.
And with a few PowerShell scripts, you could write the job owner into the file at submit time and extract it on the remote machine at run time.
I believe the alternate data stream survives a command-line copy as long as NTFS is used as the filesystem everywhere.
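As a sketch of that idea in Python rather than PowerShell (the stream name and paths are made up, and this only works on an NTFS volume):

# NTFS alternate data streams are addressed as "filename:streamname",
# which the normal file APIs accept on Windows.
job_file = r"D:\dropbox\job42.sim"      # hypothetical dropped file
stream = job_file + ":owner.email"      # hypothetical stream name

# submit side: stamp the submitter's address into the stream
with open(stream, "w", encoding="utf-8") as s:
    s.write("user@example.com")

# processing side: read it back when the job is picked up
with open(stream, "r", encoding="utf-8") as s:
    owner = s.read().strip()

print("Notify when done:", owner)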

WindowsAzure: Is it possible to set directory permissions within the web.config?

A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I manually set the write permissions for the WebRole user RD001..., it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get it done?
Please note that I'm very new to IIS and the stuff around, I would appreciate precise answers, thx.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, though not likely) in that script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure the data survives a reboot.
Recommendation: Don't write local, read below ...
EDIT: Got to thinking about this, and while I still recommend against this, there is a 3rd option: You can allocate local storage in the service config, then access it from PHP using a dll reference, then you will have access to that folder. Please remember local storage is not persisted, so it's gone during a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing config from php:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean there is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud "idea". Depending on your usage, you could (and should) convert this script to output this data to some cloud-based or external storage, vs just placing it on the disk.
I would suggest modifying this script to leverage the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
The main reason for that (besides pushing the cloud idea) is that in Azure, you cannot assume the host machine ("role instance") will maintain an OS state, so while you can set some things such as folder permissions, you can't rely on them sticking that way. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role and react to some lower level problem. For example, a hard-drive cage on the rack where your current instance lives could fail. If the failure were bad enough, the Fabric controller would need to rebuild your instance. When that happens, your code is moved to an entirely different server, so the need would arise to re-set those permissions. Also, depending on the changes, the E:\ could all of a sudden need to be the F:\ or X:\ drive and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so you make no assumptions about the hosting environment. Anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you can control (Azure Storage, an external call-out, etc.).
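To make the storage idea concrete, here is a rough sketch of writing log lines to blob storage instead of the local disk. It is not the PHP Azure SDK example offered above (it uses Python and the azure-storage-blob package purely as an illustration), and the connection string, container, and blob names are placeholders:

# Illustration only: append log lines to an Azure append blob instead of a local file.
from datetime import datetime, timezone
from azure.storage.blob import BlobServiceClient

CONN_STR = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."

service = BlobServiceClient.from_connection_string(CONN_STR)
container = service.get_container_client("logs")

def log(message: str) -> None:
    # one append blob per UTC day keeps the log easy to list and prune
    blob = container.get_blob_client("dev-{}.log".format(datetime.now(timezone.utc).date()))
    if not blob.exists():
        blob.create_append_blob()
    line = "{} {}\n".format(datetime.now(timezone.utc).isoformat(), message)
    blob.append_block(line.encode("utf-8"))

log("could not open the local log file; writing to blob storage instead")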

Getting Extra Information From WinAPI File Change Notifications

The MSDN has a pretty good example of getting notified when a file or directory is changed.
However, I can't find any way to get extra information such as the user/machine name associated with the change notification.
For example, I've setup a share X:\Foo from my machine. I would like to log the user/machine names that make changes to my shared directory.
Is this possible to accomplish?
You can't using FindFirstChangeNotification (FFCN). You can, however, set the security descriptor on the files in the shared directory to cause file accesses to be logged to the event log.
This KB article has some information about how to enable auditing on files.
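If it helps, here is a rough sketch of adding an audit ACE to the shared folder with pywin32 (the path and the audited rights are made up, and the 'Audit object access' policy must be enabled for anything to land in the Security event log):

# Sketch: put an audit ACE on the folder's SACL so writes/deletes show up
# in the Security event log along with the account that performed them.
import win32api
import win32security
import ntsecuritycon as con

path = r"X:\Foo"  # the shared directory from the question

# Reading or writing a SACL requires SeSecurityPrivilege.
token = win32security.OpenProcessToken(
    win32api.GetCurrentProcess(),
    win32security.TOKEN_ADJUST_PRIVILEGES | win32security.TOKEN_QUERY)
priv = win32security.LookupPrivilegeValue(None, win32security.SE_SECURITY_NAME)
win32security.AdjustTokenPrivileges(
    token, False, [(priv, win32security.SE_PRIVILEGE_ENABLED)])

everyone, _, _ = win32security.LookupAccountName(None, "Everyone")

sd = win32security.GetNamedSecurityInfo(
    path, win32security.SE_FILE_OBJECT, win32security.SACL_SECURITY_INFORMATION)
sacl = sd.GetSecurityDescriptorSacl() or win32security.ACL()

# Audit successful writes and deletes by anyone, inherited by files and subdirs.
sacl.AddAuditAccessAceEx(
    win32security.ACL_REVISION,
    con.CONTAINER_INHERIT_ACE | con.OBJECT_INHERIT_ACE,
    con.FILE_GENERIC_WRITE | con.DELETE,
    everyone,
    True,    # audit success
    False)   # skip failures

win32security.SetNamedSecurityInfo(
    path, win32security.SE_FILE_OBJECT, win32security.SACL_SECURITY_INFORMATION,
    None, None, None, sacl)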

Modifying/detecting Local Security Policy programmatically

Is it possible to do at least one of the following:
1) Detect a setting of a Local Security Policy (Accounts: Limit local account use of blank passwords to console logon only)
2) Modify that setting
Using Win32/MFC?
Well, I think I figured out how to do the first part (detecting this setting). It's actually located in the registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
the value is "LimitBlankPasswordUse"; if it's 1, the policy is Enabled, otherwise it's Disabled.
So, reading that will at least show me if I need to tell the user to modify it or not. I doubt I can change it though...
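A quick sketch of the read (Python's winreg here just to show the lookup; the equivalent RegOpenKeyEx/RegQueryValueEx calls work the same way from Win32/MFC):

import winreg

# Accounts: Limit local account use of blank passwords to console logon only
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                    r"SYSTEM\CurrentControlSet\Control\Lsa") as lsa:
    value, _type = winreg.QueryValueEx(lsa, "LimitBlankPasswordUse")

print("Enabled" if value == 1 else "Disabled")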
I've been down this road before and ended up with:
http://groups.google.com/group/microsoft.public.platformsdk.security/browse_thread/thread/63d884134958cce7?pli=1
I was able to configure User Rights Assignments using the Lsa* functions in advapi32.dll but could never work out how to configure Security Options.
This may be of help though:
http://www.windowsdevcenter.com/pub/a/windows/2005/03/15/local_security_policies.html
http://support.microsoft.com/default.aspx?scid=214752
You could customise a template then run regsvr32 %windir%\system32\scecli.dll from inside your code.
Not elegant but might be a way.
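For what it's worth, the command-line route I've seen for applying a customised template is secedit /configure; a rough sketch follows (run elevated; the template contents and paths are only illustrative, and secedit expects the .inf in Unicode):

# Sketch: write a minimal security template and apply it with secedit.
# Template syntax follows the stock ones under %windir%\security\templates.
import subprocess
import textwrap

template = textwrap.dedent(r"""
    [Unicode]
    Unicode=yes
    [Version]
    signature="$CHICAGO$"
    Revision=1
    [Registry Values]
    MACHINE\System\CurrentControlSet\Control\Lsa\LimitBlankPasswordUse=4,1
    """).lstrip()

with open(r"C:\temp\blankpw.inf", "w", encoding="utf-16") as inf:
    inf.write(template)

subprocess.run(
    ["secedit", "/configure",
     "/db", r"C:\temp\blankpw.sdb",
     "/cfg", r"C:\temp\blankpw.inf",
     "/areas", "SECURITYPOLICY"],
    check=True)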
