I botched a DC's AD/DNS pretty badly over the course of several years (of learning experiences), to the point where I could no longer join or leave the domain with clients. I have a NAS that used to plug into AD via SMB, and that is how all the users (my family) used to access their files.
I have recreated my infrastructure from scratch on Windows Server 2016, following best practices this time around. Is there any way to easily migrate those permissions to users in a new domain/forest that are equivalent to the old ones?
Could I possibly recreate the SIDs/GUIDs of the new users to match the old ones? I'm assuming not, because they contain a string generated uniquely per Windows installation.
Could I possibly do this from the NAS side without having to go through each individual's files to change ownership?
Thank you.
One tool you can use to translate permissions from original SIDs to new SIDs is Microsoft's SubInACL.
SubInACL needs you to tell it which old SID corresponds to which new SID or username; it then runs the translation across all data on the NAS server. For example:
subinacl /subdirectories "Z:\*.*" /replace=S-1-5-1-2-3-4-5=NEWDOMAIN\newuser
How long the translation takes depends on the number of files and folders; if it's tens of thousands, expect hours.
There are also other tools, such as SetACL, or the PowerShell cmdlets Get-Acl/Set-Acl.
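If you'd rather script the translation yourself, here is a minimal sketch of the same idea in Go, using the golang.org/x/sys/windows package (the old SID and new account name are placeholders). It only rewrites the owner on matching files and must run elevated; translating DACL entries follows the same GetNamedSecurityInfo/SetNamedSecurityInfo pattern.

package main

import (
    "fmt"
    "io/fs"
    "path/filepath"

    "golang.org/x/sys/windows"
)

func main() {
    // Placeholder values: substitute the real old SID and the new account.
    oldSID, err := windows.StringToSid("S-1-5-21-111-222-333-1001")
    if err != nil {
        panic(err)
    }
    newSID, _, _, err := windows.LookupSID("", `NEWDOMAIN\newuser`)
    if err != nil {
        panic(err)
    }
    filepath.WalkDir(`Z:\`, func(path string, d fs.DirEntry, walkErr error) error {
        if walkErr != nil {
            return nil // skip entries we cannot read
        }
        sd, err := windows.GetNamedSecurityInfo(path, windows.SE_FILE_OBJECT, windows.OWNER_SECURITY_INFORMATION)
        if err != nil {
            return nil
        }
        owner, _, err := sd.Owner()
        if err != nil || !windows.EqualSid(owner, oldSID) {
            return nil
        }
        // Rewrite the owner. Translating DACL entries works the same way,
        // with DACL_SECURITY_INFORMATION and a rebuilt ACL.
        if err := windows.SetNamedSecurityInfo(path, windows.SE_FILE_OBJECT,
            windows.OWNER_SECURITY_INFORMATION, newSID, nil, nil, nil); err != nil {
            fmt.Println("failed:", path, err)
        }
        return nil
    })
}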
You cannot recreate objects with the original SIDs and GUIDs unless you restore the AD infrastructure, or clone/migrate the original identities into new ones that carry the original SID in the sidHistory attribute (for example with the Active Directory Migration Tool).
So if you're already running a domain controller and the NAS in the newly created forest, and the old one suffered from the issues you wanted fixed, that option would probably be much more painful, and it's easier to go for SID translation.
I am running a Windows Server 2016 machine with IIS 10 on it, and I need to be able to check whether an AppPool exists before I deploy a website. If it doesn't exist, I need to set up the AppPool with a specific user and password.
All of this is done using a release agent through Azure DevOps.
The agent is running as NON-ADMIN, and all accounts involved are NON-ADMIN. I have no intention at all of running any admin accounts; for security reasons I want to give least privileges to all accounts involved.
When I try to set up an AppPool using appcmd.exe, I get the error message:
KeySet does not exist.
When running everything as admin it works (and I have absolutely no intention of running any of this as admin).
What I have tried:
I have added the non-admin account to the IIS_IUSRS group.
I made sure that the user has read permissions on the file 76944fb33636aeddb9590521c2e8815a_GUID in the %ALLUSERSPROFILE%\Microsoft\Crypto\RSA\MachineKeys folder.
I have tried everything here: Error when you change the identity of an application pool by using IIS Manager from a remote computer
Does anyone actually know the cause of this problem?
UPDATE:
Microsoft clearly recommends that agents be run using service accounts, which I am doing, and I have no interest in giving build agents administrative rights to thousands of servers when they clearly don't need that kind of power. I want to restrict them to doing only what they need to do. I can't believe that giving everything admin is apparently the norm.
After a lot of googling, and I mean A LOT, I managed to solve this. And let me say that it baffles me that least-privileged accounts are not common practice in the Microsoft and Windows world.
I found this excellent post by InfoSecMike on locking down Azure DevOps pipelines.
And we both have the exact same requirements and opinions on this topic.
You CLEARLY don't need admin rights to update IIS configuration (because that would be insane, right!?). The IIS configuration API does not care what rights you have; what you do need is access to certain files. But this is not documented. Microsoft themselves, just for simplicity, tell you that you need to be admin, and bury all the details really deep in the documentation when this should be best practice. What also amazes me is that no one questions it.
What you need is the following:
full access to C:\Windows\System32\inetsrv\Config
full access to C:\inetpub
read access to three keys in C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys\
6de9cb26d2b98c01ec4e9e8b34824aa2_GUID (iisConfigurationKey)
d6d986f09a1ee04e24c949879fdb506c_GUID (NetFrameworkConfigurationKey)
76944fb33636aeddb9590521c2e8815a_GUID (iisWasKey)
The first two bullet points are covered if you make sure your service account is a member of the IIS_IUSRS group.
That group will not give you access to the keys, though. You need to manually grant the agent user read permissions on these 3 keys.
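For reference, a rough sketch of scripting that one-time grant in Go with golang.org/x/sys/windows; an administrator has to run it once, the service-account name is a placeholder, and _GUID stands for the machine-specific suffix of the real files:

package main

import (
    "golang.org/x/sys/windows"
)

// grantRead adds a read-allow ACE for the given account to a file's DACL.
func grantRead(path, account string) error {
    sid, _, _, err := windows.LookupSID("", account)
    if err != nil {
        return err
    }
    sd, err := windows.GetNamedSecurityInfo(path, windows.SE_FILE_OBJECT, windows.DACL_SECURITY_INFORMATION)
    if err != nil {
        return err
    }
    oldDACL, _, err := sd.DACL()
    if err != nil {
        return err
    }
    ea := windows.EXPLICIT_ACCESS{
        AccessPermissions: windows.GENERIC_READ,
        AccessMode:        windows.GRANT_ACCESS,
        Inheritance:       windows.NO_INHERITANCE,
        Trustee: windows.TRUSTEE{
            TrusteeForm:  windows.TRUSTEE_IS_SID,
            TrusteeType:  windows.TRUSTEE_IS_USER,
            TrusteeValue: windows.TrusteeValueFromSID(sid),
        },
    }
    newDACL, err := windows.ACLFromEntries([]windows.EXPLICIT_ACCESS{ea}, oldDACL)
    if err != nil {
        return err
    }
    return windows.SetNamedSecurityInfo(path, windows.SE_FILE_OBJECT,
        windows.DACL_SECURITY_INFORMATION, nil, nil, newDACL, nil)
}

func main() {
    // _GUID is the placeholder used above; the real files carry a GUID suffix.
    base := `C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys\`
    for _, key := range []string{
        "6de9cb26d2b98c01ec4e9e8b34824aa2_GUID", // iisConfigurationKey
        "d6d986f09a1ee04e24c949879fdb506c_GUID", // NetFrameworkConfigurationKey
        "76944fb33636aeddb9590521c2e8815a_GUID", // iisWasKey
    } {
        if err := grantRead(base+key, `DOMAIN\svc-buildagent`); err != nil { // placeholder account
            panic(err)
        }
    }
}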
If you don't give access to these keys, you will get the obscure error message
Keyset does not exist (exception from HRESULT: 0x8009000D)
Which is an incorrect error if you ask me, as it should be an IllegalAccessException with a proper reason telling you that you don't have access to read the key; the keys are there, they do exist (nice code, Microsoft; maybe you should open source this so we can fix it).
I'll leave you with this quote from InfoSecMike:
The goal was to lock down the permissions of the Azure Pipeline Agent {...}. I started Googling, pretty sure I would find a way to achieve this goal. I didn’t. It’s surprising to not find an answer about this. It seems like the principle of least privilege does not apply anymore in a devops world.
This is why I prefer Linux over Windows. This is a simple task there.
I'm working on a system service project with SYSTEM privilege (a cleaning utility)... It does not interact with any user interface.
My goal is to check files in the "Desktop" and "AppData" folders for every user that exists on the PC.
I'm using NetUserEnum() to get the list of users on the PC. Then I want to get the path of each user's Desktop and AppData with SHGetKnownFolderPath(), but I can't find a way to get each user's access token to pass to it. Without a token, SHGetKnownFolderPath() returns the paths for SYSTEM and not for specific users.
Q1. How can I get the token of each user for SHGetKnownFolderPath()?
Q2. If there is no answer for Q1, is there any documented way to get the Desktop and AppData path of each user on the PC?
I understand this can be achieved in a dirty way: a registry key plus some string replacement. However, the registry-key method is undocumented and may easily break in future updates to Windows.
Update:
@Raymond Chen: thanks for pointing out that some user profiles may not exist. Also:
About Q1: @Remy Lebeau provides a solution with LogonUser/Ex(); logging on each user with their credentials might be the only answer that fits the needs of Q1.
About Q2: there might be no documented way to achieve this. The only method might be to stick with the Windows registry (Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders), as @Remy Lebeau and @Olaf Hess said. I tried to dig up more information on the Microsoft Community forum, and what I got is that Microsoft would never allow access to other users' profiles with its native APIs, for security reasons. They do not provide APIs that could violate the security rules; each user profile can only be accessed with its own credentials.
By the way, I totally understand that a "cleaning utility" is also known as a "Windows-breaking tool", especially when the tool is not well coded (e.g. compatibility problems). To avoid making it a total Windows destroyer, I tried to use documented APIs as much as possible.
For Windows Vista with SP1 / Server 2008 and later you can query the existing user profiles using the WMI class Win32_UserProfile. This allows you to retrieve the profile path, check whether it is a local or roaming profile, and get status information. The rest (retrieving the paths to APPDATA, DESKTOP, etc.) is likely going to involve reading values straight from the registry (HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders or HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders).
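To illustrate the registry half, here is a minimal sketch in Go (golang.org/x/sys/windows/registry) that walks HKEY_USERS and reads each user's Desktop path from the Shell Folders key. Note that it only sees profiles whose hives are currently loaded; a service would have to load NTUSER.DAT (RegLoadKey) for the others.

package main

import (
    "fmt"

    "golang.org/x/sys/windows/registry"
)

func main() {
    users, err := registry.OpenKey(registry.USERS, "", registry.ENUMERATE_SUB_KEYS)
    if err != nil {
        panic(err)
    }
    defer users.Close()

    sids, err := users.ReadSubKeyNames(-1)
    if err != nil {
        panic(err)
    }
    for _, sid := range sids {
        k, err := registry.OpenKey(registry.USERS,
            sid+`\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders`,
            registry.QUERY_VALUE)
        if err != nil {
            continue // a _Classes key, or the profile hive isn't loaded
        }
        desktop, _, err := k.GetStringValue("Desktop")
        k.Close()
        if err == nil {
            fmt.Printf("%s -> %s\n", sid, desktop)
        }
    }
}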
Background
I've recently been shunted into the world of Windows programming and I'm still trying to find my way around the best practices and ways of doing things, so I was just hoping for some pointers on use of the registry.
Not particularly relevant, but the background is that I am creating an installer in Go. A couple of points to get out of the way on that:
I am aware MSIs would usually be best practice for an installer (I have my reasons for going with a custom exe)
I know there are more obvious language choices than Go; just go with it
Current registry use
As part of the install process, I store several pieces of data in the registry:
run-once commands:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce
I create a few entries here: one to restart the process after a system reboot, and one to delete some temp files on reboot after uninstall (see the sketch after this list)
an uninstall entry:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Vendor Product
The content here is the same as an MSI would create; I was careful not to create any additional custom fields here (all static data until uninstall)
an application entry:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Vendor\Product
I store some additional data about the installation here, some of which is needed for uninstall, such as state info from before the installation (again, all static content)
a temporary entry:
Computer\HKEY_CURRENT_USER\SOFTWARE\Vendor\Product
I store some temporary data here, which can include some sensitive user-entered data (usernames/passwords). I run some symmetric encryption to obscure the data, though my understanding is that this area of the registry is encrypted so that only the user could access it anyway (I would like confirmation on that)
This data is used to resume after a restart and is then deleted
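As an aside, here is a minimal sketch of writing such a RunOnce entry from Go with golang.org/x/sys/windows/registry (the value name and command line are made up):

package main

import (
    "golang.org/x/sys/windows/registry"
)

func main() {
    k, _, err := registry.CreateKey(registry.LOCAL_MACHINE,
        `SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce`,
        registry.SET_VALUE)
    if err != nil {
        panic(err)
    }
    defer k.Close()
    // A leading "!" tells Windows to delete the entry only after the
    // command has run successfully.
    err = k.SetStringValue("!ResumeProductInstall",
        `"C:\Program Files\Vendor\Product\installer.exe" /resume`)
    if err != nil {
        panic(err)
    }
}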
Questions
I'm looking for confirmation of, or corrections to, my current use of the registry.
I now need to pass some data between an application and a running service; this data would be updated every 1-2 minutes and would be a few bytes of JSON. Does the registry seem like a reasonable place to store variable data like this? If so, is there a particular place that is better for variable data? I was going to add it to:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Vendor\Product
HKCU isn't encrypted, to my knowledge. It's stored in a file called NTUSER.DAT, which can be loaded as a hive under HKEY_USERS and is visible to other processes with sufficient rights to do so.
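If you want those secrets readable only by that user, DPAPI is the usual mechanism rather than hand-rolled symmetric encryption, since the key is derived from the user's logon. A minimal sketch in Go via golang.org/x/sys/windows:

package main

import (
    "fmt"
    "unsafe"

    "golang.org/x/sys/windows"
)

// protect encrypts data under the current user's DPAPI key, so only the
// same user on the same machine can decrypt it (via CryptUnprotectData).
func protect(data []byte) ([]byte, error) {
    in := windows.DataBlob{Size: uint32(len(data)), Data: &data[0]} // assumes non-empty input
    var out windows.DataBlob
    if err := windows.CryptProtectData(&in, nil, nil, 0, nil, 0, &out); err != nil {
        return nil, err
    }
    defer windows.LocalFree(windows.Handle(uintptr(unsafe.Pointer(out.Data))))
    return append([]byte(nil), unsafe.Slice(out.Data, out.Size)...), nil
}

func main() {
    blob, err := protect([]byte("secret password"))
    if err != nil {
        panic(err)
    }
    fmt.Printf("%d encrypted bytes, safe to store under HKCU\n", len(blob))
}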
You would need to open up the rights on HKLM\SOFTWARE\Vendor\Product if you expect a user-privilege process to be able to write to it. If you want to pass data to a service, you might want to use some sort of IPC pipe instead. I'm not sure what's available in Go for this.
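A named pipe is the natural fit on Windows. In Go, the github.com/Microsoft/go-winio package wraps pipes in net.Listener/net.Conn; a rough sketch, with the pipe name and payload made up:

package main

import (
    "bufio"
    "fmt"
    "time"

    winio "github.com/Microsoft/go-winio"
)

const pipeName = `\\.\pipe\VendorProduct` // made-up pipe name

// runService is the service side: accept connections and read JSON lines.
func runService() {
    l, err := winio.ListenPipe(pipeName, nil)
    if err != nil {
        panic(err)
    }
    defer l.Close()
    for {
        conn, err := l.Accept()
        if err != nil {
            return
        }
        line, _ := bufio.NewReader(conn).ReadString('\n')
        fmt.Printf("service got: %s", line)
        conn.Close()
    }
}

// sendUpdate is the application side, called every 1-2 minutes.
func sendUpdate(json string) error {
    timeout := 5 * time.Second
    conn, err := winio.DialPipe(pipeName, &timeout)
    if err != nil {
        return err
    }
    defer conn.Close()
    _, err = fmt.Fprintln(conn, json)
    return err
}

func main() {
    go runService()
    time.Sleep(time.Second) // crude: give the listener time to start
    if err := sendUpdate(`{"status":"ok"}`); err != nil {
        panic(err)
    }
}

A pipe also lets you attach a proper security descriptor to the endpoint, instead of loosening the ACL on an HKLM key.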
A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I set the write permissions for the WebRole user RD001... manually, it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get it done?
Please note that I'm very new to IIS and the stuff around it, so I would appreciate precise answers. Thanks.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, if not likely) in that script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure this data survived a reboot.
Recommendation: Don't write local, read below ...
EDIT: I got to thinking about this, and while I still recommend against it, there is a third option: you can allocate local storage in the service config, then access it from PHP using a DLL reference, and you will then have access to that folder. Please remember local storage is not persisted, so it's gone after a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing config from php:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud "idea". Depending on your usage, you could (and should) convert this script to output this data to some cloud-based or external storage instead of just placing it on the disk.
I would suggest modifying this script to leverage the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
The main reason for that (besides pushing the cloud idea) is that in Azure, you cannot assume the host machine ("role instance") will maintain its OS state, so while you can set some things such as folder permissions, you can't rely on them sticking. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role or react to some lower-level problem. For example, a hard-drive cage on the rack where your current instance lives could fail. If the failure were bad enough, the fabric controller would need to rebuild your instance. When that happens, your code is moved to an entirely different server, so you would need to re-set those permissions. Also, depending on the changes, the E:\ drive could suddenly need to be the F:\ or X:\ drive, and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so you make no assumptions about the hosting environment. Anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you control (Azure Storage, an external call-out, etc.).
I guess this is kind of a programming question, because I'm going to write a program if this doesn't exist.
So I found a very cheap web host (I don't really care about the actual web hosting). They will give me a domain name and an FTP server with a ton of storage space. Anyway, I want to back up a few hundred gigs of data (mostly family photos and scans of important documents). I also want to back up any future family photos/documents. I don't care if everything on my local NAS dies in a fire; I just want to have the photos and important documents backed up off-site.
So I want some program that lets me select folders locally and schedules them to be backed up to the FTP server. I'm a bit of a security nut, so I'd like the files to be encrypted locally before being transferred up onto the server.
I know I can do this with TrueCrypt volumes, but I don't want to transfer an entire encrypted volume blob up to the server every time I change a file in it. I could use multiple TrueCrypt volumes, but that would be a pain to manage.
Also, this must be Mac/Linux compatible, although I'll primarily be on Linux.
I basically need rsync + TrueCrypt + cron + sftp all rolled into one cryptographically secure program.
I've been searching for days with no luck. Any ideas?
mozyBackup does this - it doesn't use FTP; it has a custom uploader.
P.S. Remember a typical home ADSL connection only does about 1Gb/day upstream.
Linux option.
The out-of-the-box option is probably duplicity (for example, see http://www.howtoforge.com/creating-encrypted-ftp-backups-with-duplicity-and-ftplicity-on-debian-lenny ).
Otherwise, if these are basically rarely changed archive copies of files, I would roll my own: gnupg (or dpad) for individual file encryption, a file-changed script, and ftp or rsync.
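For what it's worth, here is a minimal sketch of that roll-your-own approach in Go, driving gpg and rsync through os/exec (the source folder, staging folder, key ID, and remote are all placeholders; cron would run the binary):

package main

import (
    "log"
    "os"
    "os/exec"
    "path/filepath"
)

const (
    srcDir = "/data/photos"          // source folder (placeholder)
    stage  = "/var/tmp/backup-stage" // encrypted copies accumulate here
    keyID  = "backup@example.org"    // gpg recipient (placeholder)
    remote = "user@host:backup/"     // rsync-over-ssh destination (placeholder)
)

func main() {
    err := filepath.Walk(srcDir, func(path string, info os.FileInfo, err error) error {
        if err != nil || info.IsDir() {
            return err
        }
        rel, _ := filepath.Rel(srcDir, path)
        out := filepath.Join(stage, rel+".gpg")
        // Re-encrypt only when the source is newer than the staged copy.
        if st, err := os.Stat(out); err == nil && !info.ModTime().After(st.ModTime()) {
            return nil
        }
        os.MkdirAll(filepath.Dir(out), 0o700)
        return exec.Command("gpg", "--batch", "--yes",
            "-r", keyID, "-o", out, "--encrypt", path).Run()
    })
    if err != nil {
        log.Fatal(err)
    }
    // Mirror the staged, already-encrypted tree to the server.
    if out, err := exec.Command("rsync", "-a", "--delete", stage+"/", remote).CombinedOutput(); err != nil {
        log.Fatalf("rsync: %v\n%s", err, out)
    }
}

Only encrypted copies ever leave the machine, and because each file is encrypted individually, a changed file means re-uploading just that file, not a whole volume.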