Can a Windows service install another Windows service?

I am having trouble when one Windows service tries to install another Windows service.
Specifically, I have a TeamCity agent running tests for me on a Windows 2008 AWS instance. The tests are written in Java and shell out to a .bat script to install a service (let's call it Service A), giving it a unique name each time.
The offending line is in the .bat script:
sc create "%serviceName%" binPath= %binPath% DisplayName= "%serviceDisplayName:"=%" start= %serviceStartType%
I believe that as long as the service name is unique, this should work.
And indeed it does work if I run the tests manually on the command line, using an administrator account. Service A is installed, the test completes and Service A is uninstalled at the end.
I have tried running the TeamCity agent as LocalSystem, as Administrator, and as another user that is a member of the Administrators group. I have also tried disabling UAC completely.
Presumably the problem is some access-denied type error, although that is not clear at this point. There are a few avenues still to explore, but it is really a simple question: are processes running as services forbidden from installing other services? Are there special things I have to do to configure the machine or account to allow this?
The point of the test is to install and use Service A, so workarounds are not relevant; Service A must be operated as a black box.
Thanks!

There are no restrictions on creating services with regard to how the creating process executes, as long as the process has the appropriate permissions. That is to say, a process can be running as a service and create another service; the only consideration here is the appropriate permission level.
The problem that often occurs with running batch scripts from within processes (as opposed to directly through user input on the command line) is that the expected environment isn't always the environment that is loaded. In this case, it appears that the environment variables referred to in the batch script weren't properly set when running as a service, which of course caused the service install to fail. Correcting the environment loaded when the batch script is shelled out is the right fix here.
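As an illustration only (the variable names are those from the question; the fail-fast guards and the "demand" default are hypothetical additions), the batch script can detect a missing environment before it ever reaches sc create:

@echo off
rem Fail fast if the required variables were not inherited from the caller.
if "%serviceName%"=="" (
    echo ERROR: serviceName is not set
    exit /b 1
)
if "%binPath%"=="" (
    echo ERROR: binPath is not set
    exit /b 1
)
rem Optional values can fall back to a default instead.
if "%serviceStartType%"=="" set "serviceStartType=demand"
sc create "%serviceName%" binPath= %binPath% DisplayName= "%serviceDisplayName:"=%" start= %serviceStartType%

Run under the service account, this makes the failure mode explicit in the build log instead of a cryptic sc error.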

Related

Jenkins Job (In Windows environment) not able to access Shared locations

I am trying to schedule a batch job in Jenkins (Windows environment) for a Windows EXE program (implemented in .NET).
This program refers to a shared location on the network (viz. \shared network.net\sample path) for reading from and writing to files.
When I run this program independently, outside of Jenkins, it works fine, since it runs under my login, which actually has access to the shared path.
However, when I run it through Jenkins, there is an access issue. Through my program logs I found that it runs as the 'NT AUTHORITY\SYSTEM' user.
I need to make the Jenkins job run under a particular user's authentication, one that has the relevant access to the shared path.
Please advise.
The Authorize Project Plugin allows you to run a job as a specific user.
Or, if you are executing from a bat script, you should be able to change the user in your script before running your program (see the runas sketch below).
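For example (a sketch only; the account name and path are placeholders, and note that runas normally prompts for a password, so /savecred relies on the credentials having been cached once interactively beforehand):

rem Switch to a specific user inside the bat step before launching the program
runas /savecred /user:MYDOMAIN\buildUser "C:\path\to\program.exe"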
Several options:
Use "net use" to map the network location under the job's session using your credentials.
In your Windows slave you can go to services-> Jenkins slave->properties. there under "Log On" section you can specify the user you want the service to run under.
I would definitely go with the first option as it is much more manageable (tomorrow you'll replace your slave and have to do it all over again, instead of just migrating the job and mapping the session again).
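A minimal sketch of the first option (drive letter, share path, and credentials are placeholders):

rem Map the share under the job's session with explicit credentials
net use Z: \\fileserver\shared MyPassword /user:MYDOMAIN\buildUser
rem ... run the program that reads from and writes to Z: ...
rem Remove the mapping when the job is done
net use Z: /delete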
Good Luck!

How do I give a service running as SYSTEM shared directory network access over EC2 hosts running Windows Server 2012?

The scenario is as follows:
I have TeamCity set up to use AWS EC2 hosts running Windows Server 2012 R2 as build agents. In this configuration, the TeamCity agent service is running as SYSTEM. I am trying to implement FastBuild as our new compilation process. In order to use the distributed compilation functionality of FastBuild, the build agent host needs to have access to a shared network folder. Unfortunately, I cannot seem to give this kind of access from one machine to another.
To help further the explanation, I'll use named examples. The networked folder, C:\Shared-Folder, lives on a host named Central-Host. The build agent lives on Builder-Host. Everything is running Windows Server 2012 R2 on EC2 hosts that are fully network permissive to each other via AWS security groups. What I need is to share a directory from Central-Host so that Builder-Host can fully access it via a directory structure like this:
\\Central-Host\Shared-Folder
By RDPing into both hosts using the default Administrator account, I can very easily set up the network sharing and browse (while on Builder-Host) to the \\Central-Host\Shared-Folder location. I can also open up the command line and run:
type NUL > \\Central-Host\Shared-Folder\Empty.txt
with the result of an empty text file being created at that networked location.
The problem arises from the SYSTEM account. When I grab PSTOOLS and use the command:
PSEXEC -i -s cmd.exe
I can test commands that will be given by TeamCity. Again, it is a service being run as SYSTEM which, I need to emphasize, cannot be changed to a normal User due to other issues we have when using TeamCity agents under the User account type.
After much searching, I have discovered how to set up Active Directory services so that I can add Users and Computers from the domain, but after doing so I still face access-denied errors. I am probably missing something important, and I hope someone here can help. I believe this problem will be considered "solved" when I can successfully run the "type NUL" command shown above.
This is not an answer for the permissions issue, but rather a way to avoid it. (Wanted to add this as a comment, but StackOverflow won't let me - weird.)
The shared network drive is used only for the remote worker discovery. If you have a fixed list of workers, instead of using the worker discovery, you can specify them explicitly in your config file as follows:
Settings
{
    .Workers =
    {
        'hostname1'     // specify hostname
        'hostname2'
        '192.168.0.10'  // or IP
    }
    // ... the other stuff that goes here
}
This functionality is not documented, as to date all users have wanted the automatic worker discovery. It is fine to use, however, and if it proves useful it can be elevated to a supported feature with just a documentation update.

Topshelf, NLog and File Permissions

I have a Windows service application that uses the TopShelf library and I'm installing it in AWS during the cfn-init using the handy command line features that you get with topshelf.
C:\handy_service\> HandyService.exe install start
This basically installs the service in the registry and then calls sc start, but it's quite useful because it checks the service name matches what you expect and it allows you to configure the user that service will run as using the nice fluent API.
The installer code also writes some diagnostic logs to NLog if the service is configured to use NLog in general.
The problem is this: the installer runs as the default local administrator account that the AMI starts with and the NLog file gets created by this user. When the service starts up as the Network Service user, it doesn't have permission to write to the NLog log file.
How can I get my service to write to the log file? I've thought about setting the permissions programmatically, but that looks nasty, and I'd have to determine the log file name, which is generated dynamically based on the EC2 instance id. It's also not entirely obvious at what point the log file is first created. The easiest hack, which I might go with, is having two NLog.configs and switching one out at the end of the install, after the logger is flushed. But because there is some overlap in time between the service starting and the installer exiting, I expect I'd lose a few lines of logging there.
Any clean suggestions would be greatly appreciated!
In the end I went with setting the permissions on the logs folder at deploy time. It's actually pretty straightforward with icacls, only a couple of lines in rake for instance, assuming you know where your logs folder is going to be:
sh %{icacls "#{logs_dir}" /grant "#{username}":(OI)(W)}
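Expanded out of the rake string interpolation, the raw command is just (the directory and account shown are illustrative):

icacls "C:\handy_service\logs" /grant "NT AUTHORITY\NETWORK SERVICE":(OI)(W)

(OI) makes the write grant inherit to files created inside the folder, which is what the dynamically named log files need.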
Not calling UseNLog() in the service config would also be a simple option; any install-time errors would go to the Windows event log in that case.

WMIPRVSE needs to run under Network Service by default

I have 2 separate servers (Windows Server 2008 R2) from which I am running VBS scripts through the Microsoft Task Scheduler (My Computer > Manage > Task Scheduler). When I run the scripts locally they work fine, but when they are run through the scheduler, one of the servers gets stuck while the other works fine. I have also noticed in Task Manager that the working server runs WMIPRVSE.exe under the Network Service user, whereas the other one shows SERVICES as the user.
How can I make sure that WMIPRVSE.exe will always run under Network Service? Thanks
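For reference, a quick way to check which account WmiPrvSE.exe is running under, without opening Task Manager (standard tasklist flags, nothing machine-specific assumed):

rem List WmiPrvSE.exe processes along with the user name each runs as
tasklist /FI "IMAGENAME eq WmiPrvSE.exe" /V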
Edit:
I tried changing the log-on user from the Services console, but then the service failed to start.
There are a few things I tried, and I don't know which one helped.
What I did was grant all permissions on the wscript file, which is located somewhere in System32, and after some time it started running as Network Service. Again, I'm not really sure whether it was that change or something else.

service doesn't behave the same as command line

I am running on a Windows Server 2003. This is my problem:
I wrote a Perl script to automate copying some files from my server machine to some network drives. I am using xcopy to copy the files. My problem is with permissions.
If I run the script from the command line, it works; all the copies are successful.
If I try to run the script using a service, all the copies fail. The service is a program that I wrote that takes the script and runs it; under the hood, all it does is call the C function 'system' to run the same program I can run from the command line.
I have tried many variations of this to figure out what is wrong, but I can't see why the service would not run the script the same way I run it from the command line.
I set up the service to run as the same user I use from the command line.
I also tried to map the network drives as the user that has write permission, but the result is the same. Manually, the script works; from the service, it doesn't.
Any suggestion is appreciated.
Thanks
Tony
The service may be running as the system and not have access to the network drives. In the Service settings, change the service to run under your account (or an account with the relevant permissions/mappings).
When the service runs, it uses whatever credentials you specify in the Services manager of Windows. The default, LOCAL SERVICE, probably does not have permission to access the resources to be copied.
Create a new user account with the minimum set of permissions needed to perform the copy and configure your service to run under that account.
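The same change can also be scripted, for instance (service name, account, and password are placeholders; note the space after obj= and password= is required by sc):

rem Configure the service to log on as a specific account
sc config "MyCopyService" obj= "MYDOMAIN\copyUser" password= "secretPassword"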
I did figure out the issue (I think), and that matches what I later found in another post:
https://serverfault.com/questions/4623/windows-can-i-map-a-network-drive-for-a-service-account
"Persistent drive mappings are only restored during an interactive login, which the service does not use. I believe the only way to get a service to use a network drive is for that service to map the drive itself or alternatively for it to use a UNC path instead of a mapped drive."
What I did was map the drive from within the service, and that seems to work. It turns out that if I map the drive and save the credentials, I can later access the drive without having to map it again. I don't know why this approach works, though.
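One hedged reading of that fix, with placeholder server, account, and path names:

rem Store credentials once for the account the service runs under,
rem then address the share by UNC path instead of a drive letter
cmdkey /add:fileserver /user:MYDOMAIN\copyUser /pass:secretPassword
xcopy C:\outgoing \\fileserver\share\dest /E /Y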
-Thanks everybody for your help.
Tony
