I am able to get/set the security attributes (group, owner, DACL, SACL) of files on an NTFS volume by using the GetSecurityInfo/SetSecurityInfo APIs. The handles I pass to these APIs must be opened with specific access rights (READ_CONTROL, ACCESS_SYSTEM_SECURITY, WRITE_DAC, WRITE_OWNER), which in turn require certain privileges (SE_SECURITY, SE_BACKUP, SE_RESTORE) to be enabled when creating the handles with CreateFile. That is no problem at all for files on a local NTFS volume, provided the calling process has sufficient rights.

There is a problem, however, if the files are actually located on a network share: creating the file handles fails with ACCESS_DENIED (5) or PRIVILEGE_NOT_HELD (1314). I guess this is because the handle is actually created on the remote machine, in the context of a network logon session that represents my user there, and the required privileges are not enabled for that remote process. Is there a way to get past this limitation, i.e. to get/set security attributes of files on network shares?
A similar problem is getting a handle to a directory on a network share. I can do this locally (by using FILE_FLAG_BACKUP_SEMANTICS), but I understand that this particular flag is not redirected to the remote machine, which I believe is why I can't open a handle to a directory on a network share. Is there a way to do this?
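For reference, a minimal sketch of the kind of code involved locally (C# with P/Invoke, most error handling trimmed): enable the required privileges on the process token, then open the handle with FILE_FLAG_BACKUP_SEMANTICS and the access rights listed above.

    using System;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    static class SecuredHandles
    {
        [StructLayout(LayoutKind.Sequential)]
        struct LUID { public uint LowPart; public int HighPart; }

        // TOKEN_PRIVILEGES flattened for a single privilege entry.
        [StructLayout(LayoutKind.Sequential)]
        struct TOKEN_PRIVILEGES { public uint PrivilegeCount; public LUID Luid; public uint Attributes; }

        const uint SE_PRIVILEGE_ENABLED = 0x0002;
        const uint TOKEN_ADJUST_PRIVILEGES = 0x0020, TOKEN_QUERY = 0x0008;
        const uint READ_CONTROL = 0x00020000, ACCESS_SYSTEM_SECURITY = 0x01000000;
        const uint FILE_SHARE_READ = 0x1, OPEN_EXISTING = 3, FILE_FLAG_BACKUP_SEMANTICS = 0x02000000;

        [DllImport("kernel32.dll")]
        static extern IntPtr GetCurrentProcess();
        [DllImport("advapi32.dll", SetLastError = true)]
        static extern bool OpenProcessToken(IntPtr process, uint access, out IntPtr token);
        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern bool LookupPrivilegeValue(string system, string name, out LUID luid);
        [DllImport("advapi32.dll", SetLastError = true)]
        static extern bool AdjustTokenPrivileges(IntPtr token, bool disableAll, ref TOKEN_PRIVILEGES newState,
            uint bufferLength, IntPtr previousState, IntPtr returnLength);
        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern SafeFileHandle CreateFile(string name, uint access, uint share, IntPtr security,
            uint disposition, uint flags, IntPtr template);

        // Enable e.g. "SeBackupPrivilege", "SeRestorePrivilege" or "SeSecurityPrivilege" on the process token.
        // The privilege must already be assigned to the account; this only flips it on in the token.
        static void EnablePrivilege(string privilege)
        {
            OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, out IntPtr token);
            LookupPrivilegeValue(null, privilege, out LUID luid);
            var tp = new TOKEN_PRIVILEGES { PrivilegeCount = 1, Luid = luid, Attributes = SE_PRIVILEGE_ENABLED };
            AdjustTokenPrivileges(token, false, ref tp, 0, IntPtr.Zero, IntPtr.Zero);
            // Marshal.GetLastWin32Error() == 1300 (ERROR_NOT_ALL_ASSIGNED) means the privilege is not held.
        }

        // Open a directory handle suitable for reading its security descriptor, including the SACL.
        public static SafeFileHandle OpenDirectoryForSecurity(string path)
        {
            EnablePrivilege("SeSecurityPrivilege");   // required for ACCESS_SYSTEM_SECURITY
            EnablePrivilege("SeBackupPrivilege");
            return CreateFile(path, READ_CONTROL | ACCESS_SYSTEM_SECURITY, FILE_SHARE_READ,
                IntPtr.Zero, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, IntPtr.Zero);
        }
    }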
Well, it seems I was the one at fault here - I had been testing this with a user that, although an administrator on my local machine, is a regular restricted user on the file server, which caused all the trouble. You can copy security attributes and open handles to directories on a network share if you connect to it as a user with sufficient rights on the file server that is sharing the resources.
Our application uses a shared directory to store files that are 'checked out', modified via our application, and then 'checked in' back to the shared directory, which is accessed via SMB. (The environment is hosted on a set of AWS servers, and our end users access it via Citrix.)
All users have read, write, etc. permissions for the shared directory.
We've recently changed the architecture of the application a bit. Previously, each user had his/her own subdirectory in the shared directory where the checked in/checked out file was stored.
In the new architecture, the individual subdirectories aren't used, so that all files checked in by users are stored directly into the shared directory. Users are then allowed to checkout/checkin any file in the shared directory.
The checkout process involves doing a File.Copy of the library version of the file to the user's local, non-shared directory. The user then uses our application to make changes to the file, which is then saved locally and File.Copy'd back into the shared directory.
Except that in the new architecture, the 'checkout' operation fails when User 2 attempts to check out a file that was originally checked in by User 1. As near as we can tell, this is because when, for example, User 1 checks a file in to the shared directory, the file implicitly receives a security entry for his specific AD login. A subsequent File.Copy operation by User 2 then gets a permission error. If a full admin for the host system removes the security entry for User 1, the File.Copy from the shared directory to User 2's local directory works fine.
Note that both users are assigned to a group with read, write, modify, etc to the shared directory (but not 'full control').
This doesn't seem like that unusual of a situation. We haven't (yet) tried to see if the application can programmatically remove the security entry created on the checkin - even assuming that's possible, it would be nice not to have to resort to that. But we've not found any arrangement of security settings that works.
Any information or suggestions will be appreciated.
Thanks...
As requested:
\\citrixfile01\Shares\clients\002\library
ALIGHTENT\002.EightTwoConversion:(I)(OI)(CI)(M)
NT AUTHORITY\LOCAL SERVICE:(I)(OI)(CI)(RX)
S-1-5-21-3973462947-2300097736-545649627-500:(I)(OI)(CI)(F)
ALIGHTENT\citrix:(I)(OI)(CI)(F)
ALIGHTENT\alightcalc:(I)(OI)(CI)(M)
ALIGHTENT\Domain Admins:(I)(OI)(CI)(F)
BUILTIN\Administrators:(I)(OI)(CI)(F)
\\citrixfile01\Shares\clients\002\library\AML_AmPac_8.2.amox
ALIGHTENT\002.Admin1:(I)(M)
ALIGHTENT\citrix:(I)(F)
ALIGHTENT\AEAdmin2:(I)(M)
ALIGHTENT\Domain Admins:(I)(F)
BUILTIN\Administrators:(I)(F)
When a file is moved within the same volume, Windows does not update the ACL to add or remove inherited permissions. This is presumably for backwards-compatibility reasons; the permissions model looked somewhat different in the earliest versions of Windows NT.
Your options are to copy the file instead of moving it, or to explicitly reset the permissions after the file has been moved.
If you want to explicitly reset the permissions, you can do this using File.SetAccessControl. To apply the inherited permissions for the new location, the FileSecurity object should contain an empty ACL and the AreAccessRulesProtected property should be false.
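A minimal sketch of that reset (C#, using .NET Framework's File.GetAccessControl/File.SetAccessControl; the path is a placeholder): strip any explicit rules, make sure inheritance is not protected, and write the descriptor back.

    using System.IO;
    using System.Security.AccessControl;
    using System.Security.Principal;

    static void ResetToInheritedPermissions(string path)
    {
        FileSecurity security = File.GetAccessControl(path);

        // Remove every explicit (non-inherited) access rule - for example the entry
        // stamped for the user who originally checked the file in.
        foreach (FileSystemAccessRule rule in
                 security.GetAccessRules(true, false, typeof(SecurityIdentifier)))
        {
            security.RemoveAccessRule(rule);
        }

        // Re-enable inheritance from the parent directory so the location's permissions
        // apply (AreAccessRulesProtected becomes false).
        security.SetAccessRuleProtection(false, false);

        File.SetAccessControl(path, security);
    }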
Admittedly this question I'm asking is just to assist me in an argument I've been scheduled to have with a client.
Our devs, who reside in another country, have an FTP server with mostly full public access available to all anonymous users; this is to simplify the acquisition of new documents and updates for users of the application.
One directory in particular, let's call it updates, actually houses all the new updates but does not grant a directory listing to anonymous users due to access restrictions, so if you try to list the files in the directory using an FTP client, you're met with the generic response:
550 Access is denied.
Failed to retrieve directory listing
However, if you have the exact URL for a file available to the anonymous users in that directory e.g. ftp://ftp.company.com/updates/latest_update_1.zip you can very easily download that file without issue.
My question comes in because I have a client who is somehow monitoring that directory as an anonymous user: he knows when a new file (which anonymous has access to) becomes available in that directory and then immediately downloads it. This directly affects their application, as files are often dropped there by devs during QA and are not officially available yet because we've not sent out notice of the change log and URL.

So my question is, how exactly is this client doing this? How is he able to list files that anonymous has access to, in a directory which does not list its files to anonymous users?
I want to read exclusive-opened files on Windows on a user-specified volume.
The established way to do this is to take a VSS snapshot.
Taking a VSS snapshot generally requires administrative permissions, so my application is split into an unelevated component and a SYSTEM service. Right now, the SYSTEM service initiates the snapshot and reads its files.
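(For context, the snapshot-creation step in the SYSTEM service looks roughly like the sketch below. It uses WMI's Win32_ShadowCopy class purely as an illustration - the actual service may well use the VSS COM API or a wrapper library instead - and it only works for local volumes, running elevated or as SYSTEM.)

    using System;
    using System.Management;   // reference System.Management.dll

    static class Snapshots
    {
        // Create a "client accessible" shadow copy of a local volume and return its device path,
        // e.g. \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy12, which the service can then read from.
        public static string Create(string volume = @"C:\")
        {
            var shadowClass = new ManagementClass(@"\\.\root\cimv2:Win32_ShadowCopy");
            var args = shadowClass.GetMethodParameters("Create");
            args["Volume"] = volume;
            args["Context"] = "ClientAccessible";

            var result = shadowClass.InvokeMethod("Create", args, null);
            if ((uint)result["ReturnValue"] != 0)
                throw new InvalidOperationException("Win32_ShadowCopy.Create failed: " + result["ReturnValue"]);

            string shadowId = (string)result["ShadowID"];
            var searcher = new ManagementObjectSearcher(
                "SELECT DeviceObject FROM Win32_ShadowCopy WHERE ID = '" + shadowId + "'");
            foreach (ManagementObject shadow in searcher.Get())
                return (string)shadow["DeviceObject"];

            throw new InvalidOperationException("Snapshot not found after creation.");
        }
    }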
So far so good - as long as the files are accessible to both the SYSTEM user and the unelevated regular user. But of course, a different user can have different mount paths, different network shares, different file authentication, and possibly even different BitLocker access. My approach stops working as soon as a mounted network path is selected.

How can I take a VSS snapshot while still having access to all of the unelevated user's file paths?
Suppose my Azure role creates a lot of temporary files in Windows temporary folder and forgets to delete them. At some point it will receive "can't create temporary file" error. Suppose that once that happens my role code throws an exception out of RoleEntryPoint.Run() and the role is restarted.
I'm not talking about perfect Azure-aware code here. My role might use third-party black-box code that knows nothing about Azure and "local storage" and would just call System.IO.Path.GetTempPath(), thus creating files right in some not-Azure-friendly location.
The problem is that if the role is started on the very same host and the temporary folder is not cleaned up by some third party, the folder is still full of files and the role will be unable to function. According to this answer, it might happen that local changes are preserved for my role, which is a huge problem in the above scenario.
Are local changes like created temporary files guaranteed to be reset when a role is restarted? How do I ensure that the started role is in reasonably clean state?
The role gets reset to the golden image (the base guest OS VHD) on new deployments, upgrades, and newly scaled instances. Generally, for reboots and crashes, you get the same VHD and machine.

The code you write will not have permission to write to the OS drive (D:) - without elevation, that is (or logging in via RDP to do this). Further, there is a quota on the user's role root drive (E:) that will prevent you from accidentally filling the drive with files; it used to be that 10% of the package size was all you were allowed to write. There is also a quota on the resource drive (C:), but that one is much more generous and depends on VM size.
Nothing will be cleaned up on non-local resource drives but you will eventually get errors if you try to exceed quotas. You can turn off sticky storage on local resources and they will be cleaned up on reboot. Of course, like other changes to the disk, these non-local resource temp files will occasionally be lost when the guest OS is upgraded (or underlying root OS). If you are running elevated and really screw up your installation (which you can do), you will need to hit the "Reimage" button on the portal and it will all go back to the golden image.
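If the concern is third-party code calling Path.GetTempPath(), one common workaround is to point the TEMP/TMP environment variables at a declared local resource in OnStart, so temp files land on the quota'd, cleanable resource drive. A sketch (C#; "TempStorage" is a hypothetical <LocalStorage> resource declared in ServiceDefinition.csdef, ideally with cleanOnRoleRecycle="true"):

    using System;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Resolve the local resource declared in the service definition.
            LocalResource temp = RoleEnvironment.GetLocalResource("TempStorage");

            // Path.GetTempPath() honors TEMP/TMP, so even Azure-unaware code now writes here.
            Environment.SetEnvironmentVariable("TEMP", temp.RootPath);
            Environment.SetEnvironmentVariable("TMP", temp.RootPath);

            return base.OnStart();
        }
    }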
I have a Windows service developed in VB.NET. Every night at 8 PM, this Windows service picks up a file and copies it from my C:\ftpDocs folder to the Y:\FtpDocs folder.
Y: is a mapped drive pointing to \\sourceServer\Output files. When I run the same code from a VB.NET Windows application instead of a Windows service, it works absolutely fine. But from the service it throws an access denied error when accessing \\sourceServer\Output.
It seems the Windows service runs from C:\windows\system32. For this reason I tried changing the current directory to C:\ftpService (the folder where my application is).

To access the mapped drive I provide a user ID and password, which are not my Windows user ID and password. Do you think this is the reason it is not able to access the drive from the Windows service?

If so, why does it work from the Windows application? This issue has not gone away for the past month now.
What drives are currently mapped is maintained per user -- it'd be a big no-no for me to be able to access files on a share on which you have credentials just because we're both logged on at the same time.
Your service will need to map the share itself using saved credentials of some kind (you could hard-code them, if you like, though that's not terribly secure and represents a maintainability burden besides). A good example of how to do this is here -- though I haven't used this code myself, I've just read the article.
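In case that article isn't handy, here's a rough sketch of the same idea (C#, P/Invoke to WNetAddConnection2; the share path and credentials are placeholders, and error handling is minimal):

    using System;
    using System.ComponentModel;
    using System.Runtime.InteropServices;

    static class NetworkShare
    {
        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        struct NETRESOURCE
        {
            public int dwScope;
            public int dwType;          // RESOURCETYPE_DISK = 1
            public int dwDisplayType;
            public int dwUsage;
            public string lpLocalName;  // e.g. "Y:" -- or null to connect without a drive letter
            public string lpRemoteName; // e.g. @"\\sourceServer\Output"
            public string lpComment;
            public string lpProvider;
        }

        [DllImport("mpr.dll", CharSet = CharSet.Unicode)]
        static extern int WNetAddConnection2(ref NETRESOURCE netResource, string password, string userName, int flags);

        // Connect to the UNC path with explicit credentials; afterwards the service can
        // use the UNC path (or the drive letter, if one was supplied) directly.
        public static void Connect(string uncPath, string userName, string password, string driveLetter = null)
        {
            var resource = new NETRESOURCE { dwType = 1, lpLocalName = driveLetter, lpRemoteName = uncPath };
            int error = WNetAddConnection2(ref resource, password, userName, 0);
            if (error != 0)
                throw new Win32Exception(error);
        }
    }

    // Example call with hypothetical credentials:
    // NetworkShare.Connect(@"\\sourceServer\Output", @"DOMAIN\svc_ftp", "password");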
Typically a Windows service runs under an id whose credentials are not authorized to access files on the network. Try running your windows service under the domain account which can access the network files. Make sure that this account has access to both the network and local folders/files that it will be reading and writing.
Also, you'll want to use the UNC path, not a mapped drive. The mapped drive won't be mounted for the service.