Scenario
Customers are provided with a client-server solution to accomplish some business-related task. A central server component is installed on a dedicated machine, and clients are installed on the individual machines of the software's users.
The server uses PostgreSQL and stores serialized data as well as media files on that designated server machine.
A related company has experienced a ransomware attack in the past 6 months and we are worried this scenario might also hit our customers. These customers supposedly implemented some security measures, such as a RAID setup, but we remain unconvinced based on prior communication. Even though this is a problem outside our scope of responsibility, adverse effects resulting from a possible attack are likely to affect us as well. This is why I am looking to at least increase security for their database wherever possible.
Question
Given that scenario, one small tweak to their server system is to enable Windows ransomware protection (Controlled Folder Access) for the folders related to their database.
This guide describes how to activate this function using Windows UI:
https://www.isumsoft.com/windows-10/how-to-protect-files-folders-against-ransomware-attacks.html
I would like to accomplish this without relying on the customer's sysadmins, using our NSIS-based installers only. Therefore my resulting question is - can additional protected folders be declared via registry manipulation? If not, is there a different way to achieve this?
There is a PowerShell API, see "Customize controlled folder access":
Set-MpPreference -EnableControlledFolderAccess Enabled
Add-MpPreference -ControlledFolderAccessProtectedFolders "<the folder to be protected>"
Add-MpPreference -ControlledFolderAccessAllowedApplications "<the app that should be allowed, including the path>"
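Since the goal is to apply this from our NSIS installer rather than rely on the customer's sysadmins, here is a hedged sketch of a PowerShell script the installer could ship and run elevated. The folder and binary paths are placeholders and would need to be replaced with the customer's actual PostgreSQL data and media directories; Controlled Folder Access also requires Microsoft Defender to be the active antivirus with real-time protection enabled (Windows 10 1709+ / Server 2019+).

# protect-db-folders.ps1 - hedged sketch, run elevated; all paths are placeholders.
# Consider 'AuditMode' first to check for false positives before enforcing.
Set-MpPreference -EnableControlledFolderAccess Enabled

# Folders the database and application write to.
Add-MpPreference -ControlledFolderAccessProtectedFolders 'D:\PostgreSQL\data'
Add-MpPreference -ControlledFolderAccessProtectedFolders 'D:\AppServer\media'

# Allow the database engine itself to keep writing to the protected folders.
Add-MpPreference -ControlledFolderAccessAllowedApplications 'C:\Program Files\PostgreSQL\15\bin\postgres.exe'

From NSIS, something along the lines of nsExec::ExecToLog '"powershell.exe" -NoProfile -ExecutionPolicy Bypass -File "$INSTDIR\protect-db-folders.ps1"' should be able to invoke this during installation (untested sketch).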
Related
We are using Turbo.net for publishing applications. One of these applications (developed by our own company) uses a broadcast to find devices on the network and then receives a reply on a dynamic UDP port (30000 - 50000). Opening all of these ports in the Windows Firewall is not an option.
I have therefore tried to specify the exe file in the Windows Firewall. That works, but the problem is that I need to do this for 200 users, so I want to do it via GPO. Unfortunately, the path to the exe is something like this:
%userprofile%\AppData\Local\Spoon\Servers\apps.elpro.com\Users\Firstname.Lastname.Domain\Sandboxes\ECOLOGPROModuleConfigurator__1-4-8-420__en-us__Default__AnyCpu\local\stubexe\0x4D80DB43F65B57C8\ PROModuleConfigurator.exe
The problem is "\Firstname.Lastname.Domain\". I was not able to find a way to use a wildcard for this in the Windows Firewall.
It seems that the Windows Firewall does not allow wildcards.
Is there an easy fix for this, or do I need to script something, and if so, how?
Thank you!
The fact that it can handle %userprofile% tells you that it's okay with Windows variables, so the thing to do would be to set up more such variables, to pass this path as %userprofile%\AppData\Local\Spoon\Servers\apps.elpro.com\Users\%Firstname%.%Lastname%.Domain\...
Sorry there's not a copy-paste solution for you. It would take some scripting on your end to pull this name data out of Active Directory (or some Linux/Unix LDAP server – whatever your organization is using) and fill these variables on a per-user basis. On the up-side, the variables could have other uses once you get them set up, like naming backup directories on a NAS in %Lastname%, %Firstname% format, and so on.
Exactly how to do this will vary by coding language, by OS version, and by directory service type. The information about this is scattered far and wide, so you'll have to search around a bit. E.g., for how to get an AD user's real names with C# under .NET 3.0+, see this Stack Overflow thread. And there are lots of SO threads with info on using Get-ADUser in PowerShell to find and filter by users' IDs and names. This thread on SpiceWorks might also be of interest.
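As a rough illustration of the idea, here is a hedged PowerShell sketch of a per-user logon script (e.g. deployed via GPO) that pulls the logged-on user's given name and surname from AD and stores them as the %Firstname%/%Lastname% environment variables suggested above. It assumes the RSAT ActiveDirectory module is available on the machine running it.

# Hedged sketch: run as a per-user logon script so the variables exist for each of the 200 users.
Import-Module ActiveDirectory

# Look up the logged-on user by sAMAccountName.
$adUser = Get-ADUser -Identity $env:USERNAME -Properties GivenName, Surname

# Persist the names as user-level environment variables for later expansion.
[Environment]::SetEnvironmentVariable('Firstname', $adUser.GivenName, 'User')
[Environment]::SetEnvironmentVariable('Lastname', $adUser.Surname, 'User')

If the firewall rule still refuses to expand custom variables, a similar script could instead resolve the full per-user path itself and add a program rule directly with New-NetFirewallRule -Program <resolved path>; that cmdlet needs to run elevated, so it would have to be a startup script or scheduled task rather than a plain logon script.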
You'll almost certainly need Remote Server Administration Tools (RSAT) for Windows (see that page for installation details, which are totally different depending on OS version, even within Windows 10!). Tools that deal with Active Directory need the AD components in RSAT to do their work, including both PowerShell and C#/.NET. RSAT requires Windows Pro or Enterprise (on the machine you're going to use to do the AD work; user workstations can be any version). But AD itself requires Windows Server.
This is only going to be doable with an Active Directory or other LDAP server, in which this user firstname/lastname information, as such, is even stored. Local accounts do not have this information at all except when they inherit it in munged "full name" form, e.g. from Microsoft.com account credentials. In Powershell, you can run 'Get-LocalUser | Select *', or follow the more "deep dive" local-ADSI method demonstrated here, and you'll find no first and last name data. It's just not part of an account, absent some systemic means (AD, or Microsoft online account connection, or Microsoft Family Group management, etc.) of injecting it. There are multiple ways of manually adding "full name", but even doing this across a bunch of users probably would not help you, since human names are not easily software-parseable into first name and last name (Many people have two last names, and many have two or more given names; so what is "Pat Morgan Otero"? And of course given-name versus family-name order varies culturally.) There appears to be no way to add separate first and last name fields to local accounts; tools like Set-LocalUser cannot do it.
[aside]There's no connection between Windows user data and Windows Subsystem for Linux user data (even the usernames can be different), so that's no help. If you have a network-wide unified user ID system via LDAP or whatever, and it has an end result of everyone's user IDs and their real names being in account information under any Linux/Unix system on your network (print server, NAS, anything you can get privileged shell access to), then you might have an easier go of it, given the text-processing tools available to bash in Linux/Unix (including macOS), like grep and sed and awk. All you'd need is a command-line tool for accessing LDAP (or whatever) to run directory queries, then parse the results for name information. Or that name info might even already exist in that Linux box's passwd file. This was how I did something similar for one client, but it was a Linux-heavy shop. If you have any (or most) users isolated from Linux in a Windows-only sphere of users, then this approach would not work.[/aside]
It looks like accessing AD data (or LDAP, whatever) in Windows with Windows-based scripting/programming is the only certain way to do what you want to do. Even then, it will only work if the data is present and correct. You'd need group policy that doesn't permit people to change their names (e.g. by removing their surname) once their account is configured, and human procedural rules that admins must enter this data when setting up accounts, and that it be correct and complete (not missing surname, and not be placeholder or role data that might be substituted out later or might even occur on multiple machines).
PS: Ultimately, I think you should write to the creators of that software and ask them to stop using first and last names in paths, as it breaks the administrability of their product.
I'm working on a Win32 based document management system that employs an automatic check in/check out model. The model it currently uses for tracking documents in use (monitoring the processes of the applications that open the documents) is not particularly robust so I'm researching alternatives.
Check-outs are easy, as the DocMgt application is responsible for launching the other application (Word, Adobe, Notepad, etc.) and passing it the document.
It's the automatic check-in requirement that is more difficult. When the user closes the document in Word/Adobe/Notepad, ideally the DocMgt system would be notified automatically so it can perform an automatic check-in of the updated document.
To complicate things further the document is likely to be stored on a network drive not a local drive.
Anyone got any tips on API calls, techniques or architectures to support this sort of functionality?
I'm not expecting a magic 3 line solution, the research I've done so far leads me to believe that this is far from a trivial problem and will require some significant work to implement. I'm interested in all suggestions whether they're for a full or part solution.
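For reference, the launch-and-wait model that the current check-out/check-in flow boils down to looks roughly like the following PowerShell sketch (paths and names are placeholders). Its weakness is also visible: applications such as Word may hand the document to an already-running instance and return immediately, which is exactly why plain process monitoring is fragile.

# Hedged sketch of the naive launch-and-wait approach; not a robust solution.
$doc = '\\fileserver\docs\contract.docx'   # checked-out document (placeholder path)
$before = (Get-Item $doc).LastWriteTimeUtc

# Launch the editor and block until that process exits.
Start-Process notepad.exe -ArgumentList $doc -Wait

# If the file changed while it was open, trigger the automatic check-in.
if ((Get-Item $doc).LastWriteTimeUtc -gt $before) {
    Write-Host 'Document modified - perform automatic check-in here (DocMgt call).'
}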
What you describe is a common task. It is perfectly doable, though not without its share of hassle. Here I assume that the files are closed on the computer where your code can run (even if the files are stored on the mounted network share).
There are two approaches to controlling files while they are in use: a filter and a virtual filesystem.
The filter sits in the middle, between the process and the filesystem (any filesystem: local, network, or fully virtual) and intercepts file requests going to that filesystem. This requires that the filter code runs on the computer through which the requests pass (a requirement that seems to be met in your scenario).
The virtual filesystem is an endpoint for the requests that come from the applications. When you implement the virtual filesystem, you handle all requests, so you always fully control the lifetime of the files. As the filesystem is virtual, you are free to keep the files anywhere including the real disk (local or network) or even in the cloud.
The benefit of the filter approach is that you can control individual files that reside on real disks, while a virtual filesystem can only be mounted to a new drive letter or into an empty directory on an NTFS drive, which is not always feasible. At the same time, sitting in the middle, the filter is to some extent more restricted in what it can do, and the files can be altered while the filter is not running. Finally, filters are more complicated and potentially error-prone, as they sit in the middle and must play nicely with other filters and with endpoints.
I don't have specific recommendations, but if a separate drive letter is an option, I would recommend the virtual filesystem.
Our company developed (and continues to maintain for the new owner) two products, CBFS Filter and CBFS Connect, which let you create a filter and a virtual filesystem respectively, all in the user mode. Those products are used in many software titles, including some Document Management Systems (which is close to what you do). You will find both products on their website.
I need to deploy an application onto some Windows machines for purposes of data collection from a group of people (i.e. the application will be used to gather responses to a series of survey questions). The process is interactive, alternating between displays of text and images with specific timing requirements. I have put together a prototype application using HTML and JavaScript that implements the survey. However, there are some unique constraints on the deployment environment that have me stuck:
While the machine is Internet-connected, the client requires that the survey application run fully locally on the PC it runs on. Therefore, sending the survey results to a remote server is not permissible. Obviously, saving to a local file from a Web browser is typically not permitted for security reasons.
Installation of applications onto the machines that will run the survey is not permitted.
The configuration of the machines is not known specifically a priori, but I can assume some recent version of Windows with IE8+.
The "no remote access" requirement was a late comer, and has thrown a wrench into the plan of just writing a simple Web application that could post results to an HTTP server. I'm now looking for the easiest way forward. Two main approaches come to mind:
Use a GUI framework that provides a control that can display HTML/JavaScript; running a full-blown application on the PC would allow me to save the results to the filesystem. I've never done this, but it seems like in this day and age it shouldn't be too difficult. This would allow me to reuse much of my existing prototype implementation, but I would need some way of transferring the results (which would be stored in a JavaScript data structure) outside of the Web control to where the rest of the application could access it.
Reimplement the entire application using some GUI framework (I've used PyQt successfully before, although not on Windows). This approach is obviously less desirable than #1 due to the lack of reuse. However, it may be necessary if #1 isn't feasible.
Any recommendations for the best way to go? Ideally, I'm looking for a solution that can be run in a "portable" manner from a USB thumbdrive or similar.
Have you looked at HTML Applications (HTA)? They work in IE5+ and can use Windows Scripting Host to write to local drives and UNC shares...
Maybe you can use a portable web server with a scripting language on the server side, for example Mongoose (http://code.google.com/p/mongoose/), which can run PHP, CGI, etc. scripts. Then simply create a script that saves a file to the hard drive, and handle the rest of the application in the same manner.
Use a script to start the web server, and perhaps a portable web browser like K-Meleon (http://kmeleon.sourceforge.net/), which is highly configurable, to launch the application. Or point the system's default browser at your localhost URL.
The only problem may be that the user has to allow the server through the firewall the first time it runs.
We're building an application designed to run on Windows-based servers. One of the considerations we're looking into at the moment is how to control access to the application's GUI, which allows configuration and controls the "back end" services.
In order to secure the application properly, there are several objects which will need ACLs to be applied - files, directories, Registry keys, named pipes, services etc. We need to provide administrators with some way to configure those ACLs in order to limit access to authorized users only.
One approach we have considered is to create a tool which can modify the ACLs on all those objects simultaneously, but that would be a fair chunk of work and could be fragile.
The other possible approach we're looking at is to create a custom group (e.g. "My App Users") so we can give that group the appropriate rights to each object. This means that administrators will be able to add/remove authorized users by using familiar Windows group membership tools.
So: is creating groups at install time an acceptable thing to do, or is it likely to upset administrators? I'm more familiar with the UNIX world, to be honest, where server-based apps are more or less expected to create groups, but I'm uncertain of the etiquette in the Windows ecosystem.
Also: is there a better solution to this that I've missed?
Thanks in advance!
The question is twofold: one part technical, one part political. Technically a local group is fine; you can add AD or domain users to a local group and everyone's happy. As for whether an app should be messing with a server's security 'stance', the only reasonable answer is to pop up some kind of request telling the user what you are going to do and asking permission (and make sure you also record the decision in some kind of log or audit entry). This also covers everybody legally, e.g. if they click "no, leave my app unsecured" and later get hacked.
Taking a UNIX approach, you could tell the user what you need and suggest a local group (giving the user the chance to pick another local or domain/AD group instead). Take a look at how, e.g., Oracle installs on UNIX do it.
Since this is a server app and you might have to support silent/unattended installs, make sure that the behavior can be specified in the install script, and be very, very sure that the script's behavior is documented so that no one installs the program without realizing the change in security policy that the installer implements.
I think it's perfectly fine to create a local group for this purpose.
Furthermore I have not been able to come up with a better solution after giving it some thought.
Depending on the size of the implementation, groups could be the way to go.
But please keep in mind that the relevant ACLs on directories and the registry still have to be set. I agree that setting them once for the group, and then letting access control be maintained through group membership, is the way to go.
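A hedged sketch of what that could look like at install time (group name, paths, and registry key are placeholders; requires elevation and the PowerShell 5.1 LocalAccounts module):

# Create the group once at install time.
New-LocalGroup -Name 'My App Users' -Description 'Users allowed to configure My App'

# Files/directories: grant the group modify rights, inherited by child objects.
icacls 'C:\ProgramData\MyApp' /grant 'My App Users:(OI)(CI)M'

# Registry: grant the group full control of the application's key.
$acl  = Get-Acl 'HKLM:\SOFTWARE\MyApp'
$rule = [System.Security.AccessControl.RegistryAccessRule]::new('My App Users', 'FullControl', 'ContainerInherit', 'None', 'Allow')
$acl.AddAccessRule($rule)
Set-Acl -Path 'HKLM:\SOFTWARE\MyApp' -AclObject $acl

# From then on, administrators only manage membership of 'My App Users'.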
Regarding klausbyskov's answer, I think a local group could be fine, but consider using LDAP instead. From a security perspective you would detach the authentication process and let the directory handle it, using Kerberos.
In what way is the Windows registry meant to be used? I know it's alright to store a small amount of user preferences, but is it considered bad practice to store all your users data there? I would think it would depend on the data set, so how about for small amounts of data, say, less than 2KB, in 100 or so different key/value pairs. Is this bad practice? Would a flat file or SQLite db be a better practice?
I'm going to take a contrarian view.
The registry is a fine place to put configuration data of all types. In general it is faster than most configuration files and more reliable (individual operations on the registry are transacted so if your app crashes during a write the registry isn't corrupted - in general that isn't the case with ini files).
Marcelo MD is totally right: storing things like an operation's percentage complete in the registry (or any other non-volatile storage) is a horrible idea. On the other hand, storing data like the most recently used files is just fine - the registry was built for exactly that kind of problem.
A number of the other commenters on this post talking about the MRU list have discussed the problem of what happens when the MRU list gets out of sync due to application crashes. I'm wondering why storing the MRU list in a flat file in per-user storage is any better?
I'm also not sure what the "security implications" of storing your data in the registry are. The registry is just as secure as the filesystem - the registry and the filesystem use the same ACL mechanism to protect their data.
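A quick way to see this is that the same cmdlet returns ACLs for both providers (small sketch):

# Both stores expose the same ACL model; only the rights enum differs.
(Get-Acl 'HKCU:\Software').Access | Select-Object IdentityReference, RegistryRights, AccessControlType
(Get-Acl $env:APPDATA).Access | Select-Object IdentityReference, FileSystemRights, AccessControlType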
If you ARE going to store your user data in a file, you should absolutely put your data in %APPDATA%\CompanyName\ApplicationName at least - that way if two different developers create an application with the same name (how many "Media Manager" applications are there out there?) you won't have collisions.
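For illustration, a hedged PowerShell sketch of that layout (company name, application name, and file name are placeholders; an application would do the equivalent through Environment.GetFolderPath):

# Build %APPDATA%\CompanyName\ApplicationName and keep per-user data there.
$settingsDir  = Join-Path $env:APPDATA 'ContosoLtd\MediaManager'   # placeholder names
$settingsFile = Join-Path $settingsDir 'settings.xml'

New-Item -ItemType Directory -Path $settingsDir -Force | Out-Null
@{ Theme = 'Dark'; RecentFiles = @() } | Export-Clixml -Path $settingsFile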
For me, simple user configuration items and user data are better stored in either a simple XML configuration file, a SQLite db, or an MS SQL Server Compact db. The exact storage medium depends on the specifics of the implementation.
I only use the registry for things that I need to set infrequently and that users don't need to be able to change/see. For example, I have stored encrypted license information in the registry before to avoid accidental user removal of the data.
Using the registry to store data has mainly one problem: it's not very user-friendly. Users have virtually no chance of backing up their settings, copying them to another computer, troubleshooting them (or resetting them) if they get corrupted, or generally just seeing what their software is doing.
My rule of thumb is to use the registry only to communicate with the OS. File type associations, uninstaller entries, processes to run at startup - those things obviously have to be in the registry.
But data that is for use in your application only belongs in a file in your App Data folder (whichever of the 3+ App Data folders Microsoft currently wants you to use, anyway).
Since each user already has directory space in Windows dedicated to storing application user data, I store user-level data (preferences, for instance) there.
In C#, I would get it by doing something like this:
Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
Typically, I'll store SQLite files there or whatever is appropriate for the application.
If your app is going to be deployed "in the enterprise", keep in mind that administrators can tweak the registry using group policy tools. For example, if Firefox used the registry for things like the proxy server, it would make deployment a snap because an admin could use the standard tools in Active Directory to set it up. If you use anything else, I don't think such things can be done very easily.
So don't dismiss the registry altogether. If there is a chance an admin might want to standardize parts of your configuration across a network, put the setting in the registry.
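For example, an application could read a GPO-managed value and fall back to its own default when no policy is set (hedged sketch; key, value name, and default are placeholders):

# Policy values pushed via GPO typically land under HKLM\Software\Policies\...
$policyKey = 'HKLM:\SOFTWARE\Policies\ContosoLtd\MediaManager'      # placeholder key
$proxy = (Get-ItemProperty -Path $policyKey -Name 'ProxyServer' -ErrorAction SilentlyContinue).ProxyServer
if (-not $proxy) { $proxy = 'proxy.example.local:8080' }            # application default (placeholder)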
I think Microsoft is encouraging use of isolated storage instead of the Windows registry.
Here's an article that explains how to use it in .Net.
You can find those files on Windows XP under Documents and Settings\<user>\Local Settings\Application Data\Isolated Storage. The data is in .dat files.
I would differentiate:
On the one hand, there is application-specific configuration data that is needed for the app to run, e.g. IP addresses to connect to, which folders to use for what sort of files, etc., plus non-trivial per-user settings.
Those I put in a config file: INI format for simple stuff, XML if it gets more complex.
On the other hand, there are trivial per-user settings (best example: window positions and layout). To avoid cluttering the config files (which some users will want to edit themselves, so few and clearly arranged entries are a must), I like to put those in the registry (with conservative defaults being set in the app if no settings can be found in the registry).
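A hedged sketch of that pattern (key and value names are placeholders): restore window geometry from HKCU with conservative defaults, and write it back on exit.

$key = 'HKCU:\Software\ContosoLtd\MediaManager\Window'              # placeholder key

# Restore, falling back to defaults when nothing has been stored yet.
$saved  = Get-ItemProperty -Path $key -ErrorAction SilentlyContinue
$width  = if ($saved.Width)  { $saved.Width }  else { 1024 }
$height = if ($saved.Height) { $saved.Height } else { 768 }

# Save on exit.
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
Set-ItemProperty -Path $key -Name Width  -Value $width
Set-ItemProperty -Path $key -Name Height -Value $height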
I mainly do it like istmatt says: I store config files inside the %APPDATA% folder, usually in %APPDATA%\ApplicationName. I don't like the .NET default of %APPDATA%\CompanyName\ApplicationName\Version; that level of detail and complexity is counterproductive for most small to medium-sized applications.
I disagree with Marcelo MD's example of not storing recently used files in the registry. IMO this is exactly the volatile sort of user-specific information that can be stored there.
(His example of what not to do is very good, though!)
To me it seems easier to think of what you should NOT put there.
E.g.: dynamic data, such as an editor's "last file opened" and per-project options. It is really annoying when your app loses sync with the registry (file deletion, system crash, etc.) and retrieves information that is no longer valid, possibly deadlocking the user.
At an earlier job I saw a guy who stored a data transfer's completeness percentage there, writing new values every 10k or so and having the GUI retrieve this value every second so it could be shown in the title bar.