I'm installing the latest MMASetup-AMD64.exe and want to hook up to Log Analytics AND SCOM. But, I'm having trouble finding the command line parameters for SCOM. Does anybody know them? The Log Analytics ones are well documented and are here:
ADD_OPINSIGHTS_WORKSPACE=1
OPINSIGHTS_WORKSPACE_ID="1234"
OPINSIGHTS_WORKSPACE_KEY="5678"
I need the equivalent parameters for the management group name and management server. Effectively I want to fill in the same boxes the agent setup wizard shows, but via the command line.
Thanks in advance.
I believe parameters for management group, secure port, etc. are not available with MMASetup-AMD64.exe; here are the supported command line options for it. So, if it's feasible in your environment and setup, try using MOMAgent.msi instead to install the agent manually, or to deploy System Center Operations Manager agents from the command line or by using the Setup Wizard. Parameters like MANAGEMENT_GROUP, SECURE_PORT, etc. are all explained, along with examples, in this document, so please refer to it for more information.
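As a rough sketch (the management group, server name, and MSI path are placeholders, and the exact parameter names can vary by SCOM version, so verify them against the document referenced above), a manual command line install looks something like this:
msiexec.exe /i "C:\Temp\MOMAgent.msi" /qn USE_SETTINGS_FROM_AD=0 USE_MANUALLY_SPECIFIED_SETTINGS=1 MANAGEMENT_GROUP="MyManagementGroup" MANAGEMENT_SERVER_DNS="scom-ms01.contoso.com" SECURE_PORT=5723 ACTIONS_USE_COMPUTER_ACCOUNT=1 AcceptEndUserLicenseAgreement=1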
Other references related to OM agents and OM groups:
Process manual agent installations
Configuring Windows agents
Operations Manager agents
Creating and managing groups
Connecting management groups in Operations Manager
Planning a Management Group Design
I am running a Windows 2016 server, we are running IIS 10 on it, and I need to be able to check whether an AppPool exists before I deploy a website. If it doesn't exist, I need to set up the AppPool with a specific user and password.
All of this is done using a release agent through Azure DevOps.
The agent is running as a NON-ADMIN, and all accounts involved are running as NON-ADMIN. I have no intention at all of running any admin accounts; for security reasons I want to give least privileges to all accounts involved.
When I try to set up an AppPool using appcmd.exe I get the error message:
KeySet does not exist.
When running everything as admin it works (and I have absolutely no intention of running any of this as admin).
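For reference, the commands are roughly along these lines (the pool name, user, and password here are just placeholders):
%windir%\system32\inetsrv\appcmd.exe add apppool /name:"MyAppPool"
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.identityType:SpecificUser /processModel.userName:"CONTOSO\svc-web" /processModel.password:"<password>"
The failure happens on the second command, presumably because that is where appcmd needs the machine keys to encrypt the password into the IIS configuration.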
What I have tried:
I have added the non-admin account to the IIS_IUSRS group.
I have made sure that the user has read permissions on the file 76944fb33636aeddb9590521c2e8815a_GUID in the %ALLUSERSPROFILE%\Microsoft\Crypto\RSA\MachineKeys folder.
I have tried everything here: Error when you change the identity of an application pool by using IIS Manager from a remote computer
Does anyone actually know the cause of this problem?
UPDATE:
Microsoft clearly recommends that agents should be run using service accounts, which I am doing, and I have no interest in giving build agents administrative rights to 1000s of servers when they clearly don't need that kind of power. I want to restrict their powers to only what they need to do. I can't believe that giving everything admin is apparently the norm.
After a lot of googling, and I mean A LOT, I managed to solve this. And let me say, it baffles me that "least privileged accounts" are not common practice in the Microsoft and Windows world.
I found this excellent post by InfoSecMike on locking down Azure DevOps pipelines.
And we both have the exact same requirements and opinions on this topic.
You CLEARLY don't need admin rights to update IIS configurations (because that would be insane, right!?). The IIS configuration API does not care what rights you have; what you do need is access to certain files. But this is not documented. Microsoft themselves, just for simplicity, tell you that you need to be admin, and bury all the details really deep in the documentation when this should be best practice. What also amazes me is that no one questions it.
What you need is the following:
full access to C:\Windows\System32\inetsrv\Config
full access to C:\inetpub
read access to three keys in C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys\
6de9cb26d2b98c01ec4e9e8b34824aa2_GUID (iisConfigurationKey)
d6d986f09a1ee04e24c949879fdb506c_GUID (NetFrameworkConfigurationKey)
76944fb33636aeddb9590521c2e8815a_GUID (iisWasKey)
The first two bullet points are covered if you make sure your service account is a member of the IIS_IUSRS group.
This group will not, however, give you access to the keys. You need to manually grant the agent user read permission on these 3 keys.
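As a sketch (the account name is a placeholder for whatever your agent runs as), granting that read permission from an elevated prompt looks roughly like this:
icacls "%ALLUSERSPROFILE%\Microsoft\Crypto\RSA\MachineKeys\6de9cb26d2b98c01ec4e9e8b34824aa2_*" /grant "CONTOSO\svc-agent:R"
icacls "%ALLUSERSPROFILE%\Microsoft\Crypto\RSA\MachineKeys\d6d986f09a1ee04e24c949879fdb506c_*" /grant "CONTOSO\svc-agent:R"
icacls "%ALLUSERSPROFILE%\Microsoft\Crypto\RSA\MachineKeys\76944fb33636aeddb9590521c2e8815a_*" /grant "CONTOSO\svc-agent:R"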
If you don't give access to these keys you will get the obscure error message
Keyset does not exist (Exception from HRESULT: 0x8009000D)
Which is a misleading error if you ask me; it should be an access-denied exception with a proper reason telling you that you don't have permission to read the key, because the keys are there, they do exist (nice code, Microsoft, maybe you should open source this so we can fix it).
I'll leave you with this quote from InfoSecMike:
The goal was to lock down the permissions of the Azure Pipeline Agent {...}. I started Googling, pretty sure I would find a way to achieve this goal. I didn’t. It’s surprising to not find an answer about this. It seems like the principle of least privilege does not apply anymore in a devops world.
This is why I prefer Linux over Windows. This is a simple task there.
The following parameters should be set for the OBIEE Presentation Server, and only during load testing.
In OBIPS\instanceconfig.xml, add the settings below, then save the file and restart the OBIEE processes using the OBIEE EM console:
<ServerInstance>
[...]
<Cursors>
<NewCursorWaitSeconds>36000</NewCursorWaitSeconds>
<OldCursorWaitSeconds>36000</OldCursorWaitSeconds>
</Cursors>
[...]
</ServerInstance>
You do know that this represents a value of 10 hours (36,000 seconds ÷ 3,600 seconds per hour), correct? Are you willing to lock resources for that length of time? This is counterintuitive for optimal application performance, as you would want to recover resources as fast as possible to support more sessions rather than locking a resource for an extended period of time.
I refer to the following performance "compass rose" as a guiding item (independent of tool).
If you need to amend the file on a remote server, you can do this either via the OS Process Sampler or via the SSH Command Sampler. The first one is part of the JMeter installation; the second one you can install using the JMeter Plugins Manager.
See How to Run External Commands and Programs Locally and Remotely from JMeter for more information, example configuration and sample commands.
I have installed OEM 13c and deployed a couple of agents and want to test out the Log File Monitoring utility. I have enabled it and added a log file to monitor.
When I go to test it out, it does not show any alerts when matching entries are written to the log file. On the agent server, I have tailed the file and can see the messages coming into the log file.
Does anyone have experience adding log files to OEM? I could have configured it wrong. Or are there any troubleshooting steps I can follow to see whether the server is even contacting the agent to read the log file? The status of the agent is good, with no incidents.
Without access to the system, it would be difficult to tell you the exact cause of this issue. However, I can list a few potential causes that I have experienced personally:
Permissions. The Oracle Enterprise Manager Agent is very convoluted when it comes to system permissions within a remote server. The agent can be owned and run as any number of users but during metric evaluation, may also need sudo or pam-authentication permissions to access certain entities on the server. Depending on the authentication profiles on that server, this could be the cause of your issue. There are ways to grant the agent access through the PAM stack if that is necessary.
Syntax. The wildcard syntax in the OEM GUI can be a little confusing as well. I would play with the wildcard elements a bit on the "String" component to ensure that it isn't as simple as adding wildcards to the beginning and end of the string. Without diving into the binaries of the agent plugins, it is difficult to assess exactly how the agent evaluates this particular metric.
One suggestion I would have is to go through the agent commands. There are specific commands you can run to manually force an agent to evaluate a particular metric for a particular target. This can allow you to manually trigger the metric collection locally on the server and evaluate what exactly is being performed at the agent level.
On the system I was running (12c) the command was as follows:
emctl control agent runCollection <hostname>:host host_storage
The scenario is as follows:
I have TeamCity set up to use AWS EC2 hosts running Windows Server 2012 R2 as build agents. In this configuration, the TeamCity agent service is running as SYSTEM. I am trying to implement FastBuild as our new compilation process. In order to use the distributed compilation functionality of FastBuild, the build agent host needs to have access to a shared network folder. Unfortunately, I cannot seem to give this kind of access from one machine to another.
To help further the explanation, I'll use named examples. The networked folder, C:\Shared-Folder, lives on a host named Central-Host. The build agent lives on Builder-Host. Everything is running Windows Server 2012 R2 on EC2 hosts that are fully network permissive to each other via AWS security groups. What I need is to share a directory from Central-Host so that Builder-Host can fully access it via a directory structure like this:
\\Central-Host\Shared-Folder
By RDPing into both hosts using the default Administrator account, I can very easily set up the network sharing and browse (while on Builder-Host) to the \\Central-Host\Shared-Folder location. I can also open up the command line and run:
type NUL > \\Central-Host\Shared-Folder\Empty.txt
with the result of an empty text file being created at that networked location.
The problem arises from the SYSTEM account. When I grab PSTOOLS and use the command:
PSEXEC -i -s cmd.exe
I can test commands that will be given by TeamCity. Again, it is a service being run as SYSTEM which, I need to emphasize, cannot be changed to a normal User due to other issues we have when using TeamCity agents under the User account type.
After much searching I have discovered how to set up Active Directory services so that I can add Users and Computers from the domain but after doing so, I still face access denied errors. I am probably missing something important and I hope someone here can help. I believe this problem will be considered "solved" when I can successfully run the "type NUL" command shown above.
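For what it's worth, since a service running as SYSTEM authenticates over the network as the computer account, my understanding is that the grants on Central-Host would need to target Builder-Host's machine account, something like this (the domain name is a placeholder):
Grant-SmbShareAccess -Name "Shared-Folder" -AccountName "CONTOSO\BUILDER-HOST$" -AccessRight Change -Force
icacls "C:\Shared-Folder" /grant "CONTOSO\BUILDER-HOST$:(OI)(CI)M"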
This is not an answer for the permissions issue, but rather a way to avoid it. (Wanted to add this as a comment, but StackOverflow won't let me - weird.)
The shared network drive is used only for the remote worker discovery. If you have a fixed list of workers, instead of using the worker discovery, you can specify them explicitly in your config file as follows:
Settings
{
.Workers =
{
'hostname1' // specify hostname
'hostname2'
'192.168.0.10' // or ip
}
... // the other stuff that goes here
}
This functionality is not documented, as to-date all users have wanted the automatic worker discovery. It is fine to use however, and if it is indeed useful, it can be elevated to a supported feature with just a documentation update.
I need to use PerfMon to collect data from several machines, and I need to be able to turn collection on/off at certain times. I've got all the data points configured on each machine; I just need to start/stop PerfMon, and to start/stop collection of a set of data points.
For reasons I won't go into, I can't simply configure all collection from a single PerfMon instance on a single machine - I need to start/stop PerfMon data collection on multiple machines at (about) the same time.
The systems involved are all running Windows 2003 Server, and I'm unable to install any additional software on the systems.
Is it possible to do this using e.g. PowerShell (or something else that's normally installed on Windows 2003 servers)?
Take a look at logman.exe. You can use it to create countersets (if you already have a template definition) as well as to start/stop perfmon data collection. See this Overview of Performance Monitor for some information on security requirements of the account executing logman.exe.
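For example, creating a counter collection remotely might look something like this (the collection name, counters, interval, and output path are just illustrative):
logman create counter NightlyPerf -s Server01 -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 00:00:15 -o "C:\PerfLogs\NightlyPerf"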
From a .bat file, MSBuild, or NAnt you can do something like:
Logman start [logname] -s [computername]
or
Logman stop [logname] -s [computername]
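And if you need to start or stop collection on several machines at (about) the same time, a simple loop in a .bat file will fan the command out (hostnames and log name are placeholders):
for %%h in (Server01 Server02 Server03) do logman start NightlyPerf -s %%h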
Once you've collected all those log files, you can use relog.exe to import them into a SQL instance so that you can more easily query/report against them.
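That import is along these lines (the DSN name is a placeholder for a System DSN you would create pointing at the SQL instance):
relog "C:\PerfLogs\NightlyPerf.blg" -f SQL -o SQL:PerfmonDSN!NightlyPerf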
I know you mentioned you can't install any additional software, but... depending on the setup of your lab or other environment, you might want to consider having perfmon log to a SQL data store. Even if it's a SQL Express instance running on a server in the environment, it might make your life easier. At least it could/would skip the importing of the data into a single store to make it easy to query/analyze.
Good luck!
Z