Why must Windows Services be installed?

As far as I understand, the main entry point for a service application calls StartServiceCtrlDispatcher with an array of entries containing (among other things) the entry points for services to be run.
Nothing about that setup specifically requires any kind of installation, so why must a Windows service be installed at all?
I'm assuming it's an access management/security thing, but I can't find anything on the net.

Services are not tied to a specific user, and a major selling point of a service is its ability to run when no user is logged in. If there is no user around to start a service, how would Windows know what to start if there were no central list it could consult?
That central list is stored in the registry, under HKLM\SYSTEM\CurrentControlSet\Services, and it is where the Service Control Manager (SCM) finds the installed services and their configuration.
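As a quick illustration of where that central list lives, here is a minimal C# sketch (assuming .NET Framework and ordinary read access to HKLM) that enumerates the per-service keys under HKLM\SYSTEM\CurrentControlSet\Services and prints each service's configured executable:

    using System;
    using Microsoft.Win32;

    class ListInstalledServices
    {
        static void Main()
        {
            // Every installed service has a subkey here; installing a service is
            // what creates the subkey in the first place.
            using (RegistryKey services = Registry.LocalMachine.OpenSubKey(
                @"SYSTEM\CurrentControlSet\Services"))
            {
                foreach (string name in services.GetSubKeyNames())
                {
                    using (RegistryKey svc = services.OpenSubKey(name))
                    {
                        // ImagePath is the command line the SCM launches; drivers and
                        // a few entries have no ImagePath, so treat it as optional.
                        object imagePath = (svc != null) ? svc.GetValue("ImagePath") : null;
                        if (imagePath != null)
                            Console.WriteLine("{0} -> {1}", name, imagePath);
                    }
                }
            }
        }
    }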
Most third-party service executables host only one service, so the array passed to StartServiceCtrlDispatcher contains a single entry. The well-known svchost.exe hosts more than one service in a single process; in that design, each service is implemented in a .dll that svchost.exe loads.
The svchost.exe design is used by Microsoft to reduce the total number of processes on a system. There are still multiple svchost processes on a system, one for each configuration group (network access vs. local only, etc.).
Other configuration details the service manager needs to know about each service include what action to take if the service dies, whether it should be delay-started, and so on. These settings are not hardcoded in the service itself, so administrators can change the configuration.
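To make the "array of entries" point concrete: in native code you pass a SERVICE_TABLE_ENTRY array to StartServiceCtrlDispatcher, and the managed counterpart is ServiceBase.Run with an array of ServiceBase instances. That is how one installed .exe can host more than one service, as described above. A minimal sketch (the service names are made up for illustration, and each one would still need its own registration with the SCM pointing at this executable):

    using System.ServiceProcess;

    // Two trivial services hosted by the same executable.
    class QueueService : ServiceBase
    {
        public QueueService() { ServiceName = "DemoQueueService"; }
        protected override void OnStart(string[] args) { /* start worker threads */ }
        protected override void OnStop() { /* signal workers to stop */ }
    }

    class SchedulerService : ServiceBase
    {
        public SchedulerService() { ServiceName = "DemoSchedulerService"; }
        protected override void OnStart(string[] args) { /* start timers */ }
        protected override void OnStop() { /* dispose timers */ }
    }

    static class Program
    {
        static void Main()
        {
            // ServiceBase.Run wraps StartServiceCtrlDispatcher; one entry per service.
            ServiceBase.Run(new ServiceBase[] { new QueueService(), new SchedulerService() });
        }
    }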

Related

Are Named Pipes subject to security restrictions when confined to the local machine?

I have a Windows Service running on a (large) customer's network. There is a Windows Application that runs on the same host and interacts with the Service to control/configure it. The communication from Application to Service is via named pipes. I've had this working with no problems in my lab and in production on the customer's network for two years.
I installed the Service and the UI application at a different customer for testing and many of the UI functions do not work. Circumstances and tests point to the named pipe mechanism as a likely culprit.
Is it possible that one EXE or the other could be restricted from creating / using this named pipe?
If there are security concerns/restrictions with named pipes in general I will have to consider using some other means for these applications to talk, but that will involve a lot of engineering time, so I'm still trying to resolve the issue as it stands.
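One common cause of this kind of machine-to-machine difference is the pipe's ACL: the default security descriptor a service gets when it creates the pipe may not grant the interactively logged-on user the access the UI needs, and that default can vary between environments. Below is a hedged sketch of creating the server end with an explicit security descriptor; the pipe name and the "Authenticated Users" rule are illustrative assumptions, not a recommendation for your environment:

    using System.IO.Pipes;
    using System.Security.AccessControl;
    using System.Security.Principal;

    static class PipeFactory
    {
        // Hypothetical pipe name used only for this example.
        const string PipeName = "MyServiceControlPipe";

        public static NamedPipeServerStream CreateServerPipe()
        {
            var security = new PipeSecurity();

            // Allow any authenticated local user to read/write; tighten this to a
            // specific group or SID if only the control UI should connect.
            security.AddAccessRule(new PipeAccessRule(
                new SecurityIdentifier(WellKnownSidType.AuthenticatedUserSid, null),
                PipeAccessRights.ReadWrite,
                AccessControlType.Allow));

            // The creating service keeps full control of the pipe.
            security.AddAccessRule(new PipeAccessRule(
                WindowsIdentity.GetCurrent().User,
                PipeAccessRights.FullControl,
                AccessControlType.Allow));

            return new NamedPipeServerStream(
                PipeName, PipeDirection.InOut,
                NamedPipeServerStream.MaxAllowedServerInstances,
                PipeTransmissionMode.Message, PipeOptions.Asynchronous,
                0, 0, security);
        }
    }

(This is the .NET Framework overload that accepts a PipeSecurity; newer .NET applies pipe ACLs differently, so treat this as a sketch of the idea rather than drop-in code.)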

Service dependency on AD client services

I have a Win32 service that runs in an AD environment. Very early in its startup phase, this service now needs to make ADSI calls in order to find out the computer account's group membership. What service dependencies do I have to configure for my service so that all the necessary AD client services have started beforehand and my ADSI calls can succeed? I already have a dependency on rpcss, because the service implements an RPC server, but past experience has shown me that this is not sufficient for making ADSI calls succeed during system startup.
Any help appreciated,
--
Stefan
• Regarding the ‘Win32’ service: the Win32_Service class is an object in WMI that represents a service on a computer system running Windows. Its documented syntax is simplified from Managed Object Format (MOF) code and includes all the inherited properties. For more information on configuring a service through it, refer to the documentation link below, which describes in detail the methods and properties it supports:
https://learn.microsoft.com/en-us/windows/win32/cimwin32prov/win32-service
• Also, since you want your Win32 service to find out the computer account's group membership through ADSI, you can work with ADSI collection objects, which represent an arbitrary set of items in a directory service that share the same data type. Collection objects are defined as a set of VARIANT values representing any of the valid Automation data types; they can represent both persistent information, such as access-control lists, and volatile information, such as print jobs in a print queue. Groups are simply collections of objects supporting the ‘IADsMembers’ interface. Refer to the documentation below on that interface; it will help you retrieve the membership information from an AD group:
https://learn.microsoft.com/en-us/windows/win32/api/iads/nn-iads-iadsmembers
• The services required for connecting to Active Directory are the Kerberos Key Distribution Center (KDC), BITS (Background Intelligent Transfer Service), WMI (Windows Management Instrumentation), RPC (Remote Procedure Call), Background Tasks Infrastructure Service (BTIS), Extensible Authentication Protocol (EAP), Distributed Transaction Coordinator, Netlogon, RPC Endpoint Mapper, and Remote Registry. These services are normally required for a client to connect to AD services.
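If the goal is simply to have the SCM start those services before yours, dependencies can be declared at installation time: natively via the lpDependencies parameter of CreateService (or sc config ... depend=), and in a .NET installer via ServiceInstaller.ServicesDependedOn. A hedged sketch of the latter follows; the dependency names are drawn from the list above, and exactly which subset your ADSI calls need at boot is the open question, so treat the set as illustrative. Note also that a dependency only guarantees the other service has started, not that the machine has already located a domain controller:

    using System.ComponentModel;
    using System.Configuration.Install;
    using System.ServiceProcess;

    [RunInstaller(true)]
    public class MyServiceInstaller : Installer
    {
        public MyServiceInstaller()
        {
            var process = new ServiceProcessInstaller
            {
                Account = ServiceAccount.LocalSystem   // illustrative; use the account your service really needs
            };

            var service = new ServiceInstaller
            {
                ServiceName = "MyAdAwareService",      // hypothetical name
                StartType = ServiceStartMode.Automatic,

                // The SCM will not start this service until these have started.
                ServicesDependedOn = new[] { "RpcSs", "Netlogon" }
            };

            Installers.Add(process);
            Installers.Add(service);
        }
    }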

Which service account is suitable?

I have developed a .NET Windows Service (in VS2010) that needs to:
Access shared folders (read/write) on machines on the local network
Write to HKLM/SOFTWARE part of the registry
Write files and create folders anywhere in the local file system (e.g. in the root of C:)
Download files from the web (using http)
My service must work on all Windows (PC) operating systems, from Windows XP SP3 onwards.
Problem: Which service account should I choose for my service?
Normally, I would use either “LocalService” or “NetworkService”, but neither of those grants all the needed privileges by itself.
Should I use the “LocalSystem” account then? Or should I create a completely separate account for my service's use only (which would then have to be created automatically during installation)?
For now I use the “NetworkService” account and simply add it to the Administrators group during installation, which works fine. But I think this approach defeats the whole idea of limited service accounts and thus poses a security risk - don’t you agree?
You should not use LOCALSYSTEM. It has far too much power, and best practice tells you not to use it.
In my view you should be creating a local user with appropriate rights as part of your installation. This is a fairly common practice for server/database products.
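A hedged sketch of what that can look like with the standard .NET installer classes: the dedicated local account is created by your setup program beforehand (and granted the "Log on as a service" right plus the specific file, registry and share permissions it needs), and the ServiceProcessInstaller runs the service under it. The account and service names are placeholders:

    using System.ComponentModel;
    using System.Configuration.Install;
    using System.ServiceProcess;

    [RunInstaller(true)]
    public class DedicatedAccountInstaller : Installer
    {
        public DedicatedAccountInstaller()
        {
            var process = new ServiceProcessInstaller
            {
                // Run under a dedicated local account created earlier by the setup
                // program. The account must already exist and hold the
                // "Log on as a service" right; granting that is a separate step.
                Account = ServiceAccount.User,
                Username = @".\MyServiceAccount",   // hypothetical local account
                Password = "generated-at-install-time"
            };

            var service = new ServiceInstaller
            {
                ServiceName = "MyProductService",   // hypothetical
                StartType = ServiceStartMode.Automatic
            };

            Installers.Add(process);
            Installers.Add(service);
        }
    }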
Sounds like you need to separate out your requirements.
You mention needing access to shares on other computers, but then you also mention that the machines this service will be installed on won't necessarily be part of the domain.
Have the service execute under a user account that grants you the appropriate LOCAL permissions. Then have some type of alternative user account with access to the appropriate shares that your service knows about and impersonates when needed.
Now, with regard to writing and creating files in the root of C:, that's going to be interesting. Your service will need full administrative permissions to do this on a Windows 7 box with UAC turned on, and it is probably safe to assume UAC is on for machines you don't directly control. Either eliminate this requirement or live with the idea that your service is a security risk.
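For the "impersonate when accessing the share" part of the suggestion above, here is a minimal .NET Framework sketch of the classic LogonUser + WindowsIdentity.Impersonate pattern; the server, share, file paths and account are all placeholders:

    using System;
    using System.ComponentModel;
    using System.IO;
    using System.Runtime.InteropServices;
    using System.Security.Principal;

    static class ShareAccess
    {
        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern bool LogonUser(string user, string domain, string password,
                                     int logonType, int logonProvider, out IntPtr token);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool CloseHandle(IntPtr handle);

        // NEW_CREDENTIALS: local access keeps the service's own identity, while
        // outbound network access (the file share) uses the supplied credentials.
        const int LOGON32_LOGON_NEW_CREDENTIALS = 9;
        const int LOGON32_PROVIDER_WINNT50 = 3;

        public static void CopyFromShare(string user, string domain, string password)
        {
            IntPtr token;
            if (!LogonUser(user, domain, password,
                           LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
                throw new Win32Exception();

            try
            {
                using (WindowsIdentity.Impersonate(token))
                {
                    // While impersonating, this copy uses the share account's
                    // network credentials. Paths are placeholders.
                    File.Copy(@"\\fileserver\share\config.xml",
                              @"C:\ProgramData\MyProduct\config.xml", true);
                }
            }
            finally
            {
                CloseHandle(token);
            }
        }
    }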

Windows Azure Visual Studio Solution

My application contains 25 C# projects, these projects are divided into 5 solutions.
Now I want to migrate these projects to run under Windows Azure, I realized that I should create one solution that contains all my web roles and worker roles.
Is this the correct way to do it, or can I still divide my projects into several solutions?
The Projects are as shown below:
One Web application.
5 Windows Services.
The others are all class libraries.
Great answers by others. Let me add a bit more about one vs. many hosted services: If the Web and Worker roles need to interact directly (e.g. via TCP connection from Web role instance to a specific worker role instance), then those roles really should be placed in the same hosted service. External to the deployment, your hosted service listeners (web, wcf, etc.) are accessed by IP+Port; you cannot access a specific instance unless you also enable Azure Connect (VPN).
If your Web and Worker roles interact via Azure Queues or Service Bus, then you have the option of deploying them to separate hosted services while still being able to communicate between them.
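For the queue path, here is a hedged sketch using the 1.x-era storage client library (Microsoft.WindowsAzure.StorageClient); the configuration setting name "StorageConnectionString" and the queue name "work-items" are placeholders:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.ServiceRuntime;
    using Microsoft.WindowsAzure.StorageClient;

    static class RoleMessaging
    {
        static CloudQueue GetQueue()
        {
            var account = CloudStorageAccount.Parse(
                RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
            CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("work-items");
            queue.CreateIfNotExist();
            return queue;
        }

        // Called from the Web role: hand work to the worker role(s).
        public static void Enqueue(string payload)
        {
            GetQueue().AddMessage(new CloudQueueMessage(payload));
        }

        // Called from the Worker role's Run() loop: pull and process one item.
        public static void ProcessOne()
        {
            CloudQueue queue = GetQueue();
            CloudQueueMessage msg = queue.GetMessage();
            if (msg == null) return;

            // ... do the work described by msg.AsString ...
            queue.DeleteMessage(msg);
        }
    }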
The most important question is: How many of these 25 projects are actual WebSites/Web Applications or Windows Services, and how many of them are just Class Libraries.
For the Class Libraries, you do not have to convert anything.
Now for the Cloud project(s). You have to decide how many hosted services you will create. You can read my blog post to get familiar with terms like "Hosted Service", "Role", "Role Instance", if you need to.
Once you have decided on your cloud structure - the number of hosted services and the roles in each - you can create a new solution for each hosted service.
You can also decide to host multiple web sites in a single WebRole, which is fully supported, since WebRoles run in a full IIS environment as of SDK 1.3. You can read more about hosting multiple web sites in a single web role here and here, and even use the Windows Azure Accelerator for Web Roles.
If you have multiple Windows Services or background worker processes, you can combine them into a single Worker Role, or define a worker role for each separate worker process if you want separate elasticity for each one, or if a worker requires a lot of computing power and memory.
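A hedged sketch of what combining several background workers into one Worker Role can look like with the cloud-services runtime (Microsoft.WindowsAzure.ServiceRuntime); the two worker classes stand in for your converted Windows Services:

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // Placeholder for one converted Windows Service.
    class QueueProcessor
    {
        public void RunLoop(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                // ... poll a queue, do work ...
                Thread.Sleep(TimeSpan.FromSeconds(5));
            }
        }
    }

    // Placeholder for another converted Windows Service.
    class ReportGenerator
    {
        public void RunLoop(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                // ... generate reports ...
                Thread.Sleep(TimeSpan.FromMinutes(1));
            }
        }
    }

    public class WorkerRole : RoleEntryPoint
    {
        private readonly CancellationTokenSource _cancel = new CancellationTokenSource();

        public override void Run()
        {
            // Both former services run in the same role instances and therefore
            // scale together (same instance count, same VM size).
            Task.WaitAll(
                Task.Factory.StartNew(() => new QueueProcessor().RunLoop(_cancel.Token),
                                      TaskCreationOptions.LongRunning),
                Task.Factory.StartNew(() => new ReportGenerator().RunLoop(_cancel.Token),
                                      TaskCreationOptions.LongRunning));
        }

        public override void OnStop()
        {
            _cancel.Cancel();
            base.OnStop();
        }
    }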
UPDATE regarding the question update:
So, the Web Application is clear - it goes into one Web Role. Now for the Windows Services. There are two main questions you have to answer in order to decide whether to put them into a single worker role or several:
Do any of your Windows Services require excessive resources (i.e. a lot of computing power or a lot of RAM)?
Do any of your Windows Services require independent scaling?
If the answer to either question is "yes" for a given Windows Service, give that service its own Worker Role. Put all the Windows Services for which the answer to both questions is "no" into a single shared Worker Role; that means you will scale all of them or none of them (by manipulating the number of instances).
As for the Cloud Service (or Hosted Service), it is up to you to decide whether to use a single cloud service to deploy all the roles (Web and Workers) or to use one hosted service for the Web Role and another for the Worker Roles. There is absolutely no difference from a billing perspective. You will still have your Web Role and Worker Role(s), and you will be charged based on instance count and data traffic. And you can independently scale any role (change the number of instances for a particular role) regardless of how it is deployed (within the same hosted service or another one).
In the end, I suggest that you have a single solution per Hosted Service (Cloud Project). So if you decide to put the Web Role and Worker Roles into a single Hosted Service, you will have a single solution; if you have two Hosted Services (Cloud Projects), you will have two solutions.
Hope this helps.
You are correct! All projects go under one hosted service if you create only one cloud project for all your web role and worker role projects.
You can still divide your projects into several solutions, but then you have to create that many cloud projects and hosted services on the Azure platform.
You can do both.
You can keep your 5 separate solutions as they are. Then, create a new solution that contains all 25 projects.
Which solution you choose to contain your Cloud (ccproj) project(s) will depend on how you want to distribute your application.
Each CCPROJ corresponds to one hosted service. So you could put all of your webs and workers into a single hosted service. Or you could make each web role its own hosted service and put all of your worker roles together in another hosted service. Or you could do a combination of these. A definitive answer would require more information about your application, but in VS a project can belong to more than one solution.

Is it possible to deploy an out-proc COM server on Windows Azure and alter its activation permission?

I need to consume an out-proc COM server from both a worker role and a web role in a Windows Azure application. One step I'm almost sure I'll need to do is to alter the access permissions for the COM server - grant "local launch" and "local activation" permissions for the predefined user under which roles code executes.
So far I have found the DCOMPERM utility in the Windows SDK samples, which contains code that I guess would do that. So I could write similar code and package it either into a separate executable or into the COM registration code of the COM server, and run that code from a role start-up task. That's not trivial, but certainly doable.
I only have one major concern before I start.
Are there any reasons why I can't do that? Maybe using out-proc COM servers is not allowed on Windows Azure or something? Are there any such limitations?
It's not something I've personally done, but if you can install a COM+ server running in a shell exe, then I think you should be able to do what you want - see this recent blog post http://michaelwasham.com/2011/05/15/deploying-a-com-servicedcomponent-to-windows-azure/
I don't think you will hit limitations - but I think you will hit a fair few problems along the way - good luck.
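For the "role start-up task" part, the hook in ServiceDefinition.csdef looks roughly like the snippet below; the role name and the .cmd file name are placeholders, and the .cmd would register the out-proc COM server and apply the launch/activation permissions (e.g. with DCOMPERM-style code):

    <ServiceDefinition name="MyAzureService"
                       xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WorkerRole name="MyWorkerRole" vmsize="Small">
        <Startup>
          <!-- Runs elevated before the role code starts. -->
          <Task commandLine="InstallComServer.cmd" executionContext="elevated" taskType="simple" />
        </Startup>
      </WorkerRole>
    </ServiceDefinition>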

Resources