Performance counter permissions: admin not required? (Windows)

Can someone clarify exactly what permissions are required to read from and write to performance counters? I don't seem to need any special permissions to read or write performance counters, which is contrary to most resources I've found.
A training course I took, as well as most resources on the web, indicates that managing performance counters (creating, deleting) requires admin permissions, and that reading/writing requires admin rights or membership in the local Performance Monitor Users group. I've verified the first, but as for reading/writing, I don't seem to have any problems doing this on my Win8.1 machine as a non-admin user. I can read perf counters using perfmon, PowerShell, and the .NET API, and write to custom perf counters using the .NET API, all as a non-admin.
Has this changed across OS versions? Or perhaps something in my corporate domain policy allows for it?

Only non-interactive logon sessions require the user to have membership in the Performance Monitor Users or Administrators groups in order to read performance counters.
Note that I am not a Microsoft employee, and I have not found any documentation that provides an authoritative statement of this behavior. I only determined this behavior through my own testing.
Specifically, when logging on with LogonUserEx: if the logon type is LOGON32_LOGON_NETWORK, LOGON32_LOGON_NETWORK_CLEARTEXT, LOGON32_LOGON_BATCH, or LOGON32_LOGON_SERVICE, then membership in one of the previously mentioned groups is required to read performance counters. However, when logging on with LOGON32_LOGON_INTERACTIVE or any of the other miscellaneous logon types listed in the LogonUserEx documentation, membership in one of those groups is not required.
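A minimal sketch of this sort of test in C (the account name and password are placeholders; it uses the plain LogonUser rather than LogonUserEx, impersonates the resulting token, and attempts a single counter read via PDH; link with advapi32.lib and pdh.lib):

#include <windows.h>
#include <pdh.h>
#include <stdio.h>

int main(void)
{
    HANDLE token = NULL;
    /* Compare logon types here (e.g. LOGON32_LOGON_NETWORK vs.
       LOGON32_LOGON_INTERACTIVE) to observe the behavior described above
       for a test user who belongs to neither group. */
    if (!LogonUserW(L"testuser", L".", L"password",
                    LOGON32_LOGON_NETWORK, LOGON32_PROVIDER_DEFAULT, &token)) {
        fprintf(stderr, "LogonUser failed: %lu\n", GetLastError());
        return 1;
    }
    if (!ImpersonateLoggedOnUser(token)) {
        fprintf(stderr, "ImpersonateLoggedOnUser failed: %lu\n", GetLastError());
        return 1;
    }

    /* Try to read one counter while impersonating the test user. */
    PDH_HQUERY query = NULL;
    PDH_HCOUNTER counter = NULL;
    PDH_STATUS st = PdhOpenQueryW(NULL, 0, &query);
    if (st == ERROR_SUCCESS)
        st = PdhAddCounterW(query, L"\\Processor(_Total)\\% Processor Time", 0, &counter);
    if (st == ERROR_SUCCESS)
        st = PdhCollectQueryData(query);
    printf("PDH status under impersonation: 0x%08lx\n", (unsigned long)st);

    RevertToSelf();
    CloseHandle(token);
    return 0;
}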
Additionally, I determined that Vista RTM did enforce this restriction for interactive logon sessions; the current relaxed behavior for interactive sessions was introduced in Vista SP1. While there are practically no users running Vista RTM today, this is good context to keep in mind if you read documentation or other advice that may have been written back then (or more recent advice blindly copied from it).

Setting protected folders e.g. via registry manipulation

Scenario
Customers are provided with a client-server solution to accomplish some business-related task. A central server is installed on a dedicated machine, and clients are installed on the individual machines of the software's users.
The server uses PostgreSQL and stores serialized data as well as media on the designated server machine.
A related company experienced a ransomware attack within the past six months, and we are worried this scenario might also hit our customers. These customers have supposedly implemented some security measures, such as a RAID setup, but based on prior communication we remain unconvinced. Even though this problem lies outside our scope of responsibility, adverse effects from a possible attack are likely to affect us as well. This is why I am looking to at least increase the security of their database wherever possible.
Question
Given that scenario, one small tweak to their server system is to enable Windows ransomware protection (controlled folder access) for the folders related to their database.
This guide describes how to activate this function using Windows UI:
https://www.isumsoft.com/windows-10/how-to-protect-files-folders-against-ransomware-attacks.html
I would like to accomplish this without relying on the customer's sysadmins, using our NSIS-based installers only. My resulting question is therefore: can additional protected folders be declared via registry manipulation? If not, is there a different way to achieve this?
There is a PowerShell API; see "Customize controlled folder access":
# Turn the controlled folder access feature on
Set-MpPreference -EnableControlledFolderAccess Enabled
# Add a folder to the protected list (Add-MpPreference appends rather than overwrites)
Add-MpPreference -ControlledFolderAccessProtectedFolders "<the folder to be protected>"
# Allow a trusted application through the protection
Add-MpPreference -ControlledFolderAccessAllowedApplications "<the app that should be allowed, including the path>"
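Note that these cmdlets need an elevated PowerShell session, and controlled folder access only takes effect while Microsoft Defender real-time protection is enabled, which is worth verifying before wiring the calls into an NSIS installer.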

Why is Hadoop's default authentication insecure?

I saw an article saying that Hadoop's default authentication is insecure: "In the default authentication, Hadoop and all machines in the cluster believe every user credential presented." (for example, https://blog.eduonix.com/bigdata-and-hadoop/learn-secure-hadoop-cluster-using-kerberos-part-1/). I still can't understand why this happens. Isn't the Linux OS capable of validating the credentials? Can anyone provide a detailed example to explain it?
Assuming it did use the "Linux OS" way of validating credentials, there's no guarantee that every node in the system has the same credentials without tools external to the Hadoop project. Out of the box, with access controls enabled, the HADOOP_USER_NAME environment variable is checked on anything you access, but it's just a plain string and easily overridden.
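To make that concrete, whoever controls the client environment controls the identity. A trivial illustration in C, assuming an unsecured (simple-auth) cluster and the hdfs client on the PATH:

#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Claim to be the HDFS superuser; with simple authentication nothing
       on the cluster verifies this string. */
    setenv("HADOOP_USER_NAME", "hdfs", 1);
    /* Any client command now runs "as" that user. */
    execlp("hdfs", "hdfs", "dfs", "-ls", "/", (char *)NULL);
    return 1; /* reached only if exec fails */
}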
ACLs do work similarly to Unix users and groups, but even then, a "bob" user on my machine shouldn't be treated the same as a "bob" on any other machine, even if I can log in to that account on them.
That's where external systems like LDAP/AD/Kerberos come into play; they were already in use before Hadoop security became a major concern, and they allow centralized user and access management beyond just Hadoop:
the theory is that it is much easier to effectively secure a small set of limited-use machines, rather than a large set of heterogeneous, multipurpose servers and workstations over which the administrator may have little control.
https://docstore.mik.ua/orelly/networking_2ndEd/ssh/ch11_04.htm

Running an untrusted application on Linux in a sandbox

We have a device running Linux, and we need to run untrusted applications on it. We are trying to address the following security concerns:
The untrusted application should not be able to adversely affect the core OS data and binaries
The untrusted application should not be able to adversely affect another application's data and binaries
The untrusted application should not be able to consume excessive CPU, memory, or disk and cause a DoS/resource-starvation situation for the core OS or the other applications
From the untrusted application's standpoint, it only needs to be able to read and write its own directory and maybe a mounted USB drive
We are thinking of using one of the following approaches -
Approach 1 - Use SELinux as a sandbox
Is this possible? I have read a bit about SELinux, and it looks somewhat complicated in terms of setting up a policy file, enforcing it at runtime, etc. Can SELinux do this, restricting the untrusted application to reading/writing only its own directory, and can it also set quota limits?
Approach 2 - Create a new sandbox on our own
During install time
Create a new app user for each untrusted application
Stamp the entire application directory and files with permissions so that only the application user can read and write
Set quotas for the application user using ulimit/quota
During run time, launch the untrusted application as follows (a rough launcher sketch appears after this list):
Close all open file descriptors/handles
Use chroot to set the root to the application directory
Launch the application under the context of the application user
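A rough sketch of that launcher in C, assuming hypothetical per-app IDs and an app directory, with error handling abbreviated; it must be started as root for chroot and setuid to succeed:

#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
    const uid_t app_uid = 1500;                /* hypothetical per-app user */
    const gid_t app_gid = 1500;
    const char *app_dir = "/apps/untrusted1";  /* hypothetical app directory */

    /* 1. Close inherited descriptors beyond stdin/stdout/stderr. */
    for (int fd = 3; fd < 1024; fd++)
        close(fd);

    /* 2. Cap resources before dropping privileges (address space, open files). */
    struct rlimit rl = { 256UL * 1024 * 1024, 256UL * 1024 * 1024 };
    setrlimit(RLIMIT_AS, &rl);
    rl.rlim_cur = rl.rlim_max = 64;
    setrlimit(RLIMIT_NOFILE, &rl);

    /* 3. Confine the filesystem view, then drop root. The order matters:
          setuid first would make chroot fail, and chroot without a later
          setuid leaves a root process that can trivially escape the jail. */
    if (chroot(app_dir) != 0 || chdir("/") != 0) { perror("chroot"); return 1; }
    if (setgid(app_gid) != 0 || setuid(app_uid) != 0) { perror("drop privs"); return 1; }

    /* 4. Run the untrusted binary, now visible at the jail root. */
    execl("/app", "app", (char *)NULL);
    perror("exec");
    return 1;
}

Note that chroot by itself is not a security boundary; dropping privileges immediately afterwards, as above, is what makes it stick.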
Thoughts on the above? Which approach is more secure? Is there another approach that might work out better? Moving to Android is not an option for us, so we cannot use the sandboxing features that Android provides natively...
Let me know
Thanks,
SELinux is a set of rules that are applied somewhat like user rights, only more complex. For each process you can set a domain and then allow or deny nearly any access: to files, to the network, or to other processes and threads. In that way it can be used as a kind of sandbox. However, you have to prepare a rule set for each process, or write a script that runs before the sandboxed application to set up the rules itself.
If you want to control CPU consumption, SELinux has no CPU scheduler: every rule has just one of two outcomes, 'allow' or 'deny'. To control CPU consumption I recommend cgroups.
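For example, with the cgroup v2 interface, capping an application's CPU share comes down to writing two files. A sketch in C, assuming a v2 hierarchy mounted at /sys/fs/cgroup with the cpu controller enabled, run as root:

#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>

static int write_str(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    fputs(val, f);
    return fclose(f);
}

int main(void)
{
    /* Create a group and allow it 20 ms of CPU per 100 ms period (20% of one core). */
    mkdir("/sys/fs/cgroup/untrusted1", 0755);
    write_str("/sys/fs/cgroup/untrusted1/cpu.max", "20000 100000");

    /* Move this process (and so its children) into the group, then exec the app. */
    char pid[32];
    snprintf(pid, sizeof pid, "%d", (int)getpid());
    write_str("/sys/fs/cgroup/untrusted1/cgroup.procs", pid);
    execl("/apps/untrusted1/app", "app", (char *)NULL);
    perror("exec");
    return 1;
}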
The Legato project uses higher-level sandboxing. It contains applications using chroot and bind mounts. A key feature is a formal declarative API, so application components can talk to system service components under a managed security configuration. Services and applications can be added and removed as needed, as well as updated over the air. Application memory usage, processor share, storage, and so on are also closely managed. It claims to make application development easier.

What is the difference between multi-tenancy and multi-user solutions?

I believe I understand this in terms of hardware, where multiple individuals 'share' the same processing and memory for their solutions. But I've been looking at Gmail and Facebook: are those multi-tenanted solutions? Is it the case that as long as my solution can support multiple users, it's multi-tenanted?
You can read this post concerning your question.
Multi-tenant vs multi-user
Any system may have multiple users. In a multi-user system multiple users can use the application (e.g. Exact Synergy). The term multi-user does not imply anything for the architecture of the system. On the other hand, while a multi-tenant system is a multi-user system, multi-tenancy tells us something about the architecture of the system: namely that multiple users share the same application and database instance. Note that it is possible to have a multi-user system which is not multi-tenant.

Is it acceptable for a server-based application installer to create a new group?

We're building an application designed to run on Windows-based servers. One of the considerations we're looking into at the moment is how to control access to the application's GUI, which allows configuration and controls the "back end" services.
In order to secure the application properly, there are several objects which will need ACLs to be applied - files, directories, Registry keys, named pipes, services etc. We need to provide administrators with some way to configure those ACLs in order to limit access to authorized users only.
One approach we have considered is to create a tool which can modify the ACLs on all those objects simultaneously, but that would be a fair chunk of work and could be fragile.
The other possible approach we're looking at is to create a custom group (e.g. "My App Users") so we can give that group the appropriate rights to each object. This means that administrators will be able to add/remove authorized users by using familiar Windows group membership tools.
So: is creating groups at install time an acceptable thing to do, or is it likely to upset administrators? I'm more familiar with the UNIX world, to be honest, where server-based apps are more or less expected to create groups, but I'm uncertain of the etiquette in the Windows ecosystem.
Also: is there a better solution to this that I've missed?
Thanks in advance!
The question is twofold: one part technical, one political. Technically a local group is fine; you can add AD or domain users into a local group and everyone's happy. As for whether an app should be messing with a server's security 'stance', the only reasonable answer is to pop up some kind of request telling the user what you are going to do and asking permission (and make sure you also record the decision in some kind of log entry). This also covers everybody's legal a$$, e.g. if they click "no, leave my app unsecured" and then get hacked.
Taking a UNIX approach, you could tell the user what you need and suggest a local group (giving the user the chance to pick another local or domain/AD group instead). Take a look at how Oracle installs on UNIX do it, for example.
Since this is a server app and you might have to support silent/unattended installs, make sure the behavior can be specified in the install script, and be very, very sure that the script's behavior is documented so that no one installs the program without realizing the change in security policy that the installer implements.
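If you do create the group from an installer, the Win32 call involved is small enough to embed in a helper. A hedged sketch in C using NetLocalGroupAdd (the group name is taken from the question; link with netapi32.lib and run elevated):

#include <windows.h>
#include <lm.h>
#include <stdio.h>

int main(void)
{
    LOCALGROUP_INFO_1 info;
    info.lgrpi1_name = L"My App Users";
    info.lgrpi1_comment = L"Members may administer My App (created by its installer)";

    /* NULL server name means the local machine; level 1 supplies name + comment. */
    NET_API_STATUS st = NetLocalGroupAdd(NULL, 1, (LPBYTE)&info, NULL);
    if (st == NERR_Success)
        wprintf(L"Group created.\n");
    else if (st == NERR_GroupExists || st == ERROR_ALIAS_EXISTS)
        wprintf(L"Group already exists; leaving it as-is.\n");
    else
        wprintf(L"NetLocalGroupAdd failed: %lu\n", (unsigned long)st);
    return 0;
}

Treating "group already exists" as success, as above, keeps the helper safe to run on repairs and upgrades.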
I think it's perfectly fine to create a local group for this purpose.
Furthermore I have not been able to come up with a better solution after giving it some thought.
Depending on the size of the implementation, groups could be the way to go.
But please keep in mind that the relevant ACLs on directories and the registry still ought to be set. I agree with setting them once for the group and then letting access control be maintained through group membership.
Regarding klausbyskov's answer, I think a local group could be fine, but consider using LDAP instead. From a security perspective you would detach the authentication process and let the directory handle it, using Kerberos.
