Start services correctly and safely on Windows startup

I would like to start a Windows service (as a daemon or scheduled task) on one of the clients in a small Active Directory domain, for testing purposes only. The Windows service is Docker Desktop. Ideally, the service should start before a user logs in.
I am wondering what the recommended approach is to start services properly and securely.
My approach would be the following: I would create a local user and then assign the service to that user using Task Scheduler. Would that broadly be the recommended approach, or is there some kind of system user to entrust the task to?

Related

Running bash script on GCP VM instance programmatically

I've read multiple posts on running scripts on GCP VMs but unfortunately could not find an answer that satisfies my needs.
I have a Go application and I'm looking for a way to run a bash script on a VM instance programmatically.
I'm using the Google Cloud Golang SDK, which allows me to fetch VM instance info. Unfortunately, the SDK does not contain functionality for running a bash script on a specific instance (unlike the Azure Cloud SDK, for example).
Options I've found:
1. The Google Cloud Compute SDK has an option to set a startup script that will run every time an instance is restarted.
2. Add an instance-level public SSH key, establish an SSH connection, and run the script using a Go SSH client.
Problems:
1. Obviously, a startup script requires an instance reboot, and this is not possible in my use case.
2. SSH might also be problematic, in case the instance is not running an SSH daemon or the SSH port is not open. Also, the SSH daemon config does not permit root login by default (PermitRootLogin may be disabled), so the script might run as a non-privileged user, making this option unsuitable as well.
I should probably note that I am not authorised to change the configuration of those VMs (for example, to change the SSH daemon config to permit root login); I can only use token-based authentication to access them, preferably through the SDK, though other options are also possible as long as I am not exposing the instance to additional risks.
What options do I have? Is this even doable? Am I missing something?
Thanks!
As Kolban said, there is no API to trigger a bash script inside the VM from the outside. The best solution is to deploy a web server (a REST API) on the VM that calls the bash script, and to expose it (externally or internally).
But you can also cheat. You can create a daemon on your VM that you start with a startup script and that watches a custom metadata key, say checking it every second.
When the metadata is updated, the daemon can perform actions. You can imagine that the metadata contains the script to run along with its parameters. At the end of the run, the daemon clears the metadata.
So now, to run your bash script, call the setMetadata API. It's not out of the box, but you can get something similar to what you expected.
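Below is a rough Go sketch of that daemon, assuming a custom instance metadata key named run-script (the key name, timeout and script handling are placeholders you would adapt). It long-polls the metadata server with wait_for_change and passes whatever the key contains to bash whenever the value changes:

package main

import (
	"io"
	"log"
	"net/http"
	"os/exec"
	"time"
)

const attrURL = "http://metadata.google.internal/computeMetadata/v1/instance/attributes/run-script"

// fetch reads the run-script attribute, long-polling for a change when an
// etag from a previous read is supplied.
func fetch(etag string) (value, newETag string, err error) {
	url := attrURL
	if etag != "" {
		url += "?wait_for_change=true&timeout_sec=300&last_etag=" + etag
	}
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return "", "", err
	}
	req.Header.Set("Metadata-Flavor", "Google") // required by the metadata server
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK { // e.g. 404 while the key is unset
		return "", etag, nil
	}
	body, err := io.ReadAll(resp.Body)
	return string(body), resp.Header.Get("ETag"), err
}

func main() {
	etag := ""
	for {
		script, newETag, err := fetch(etag)
		if err != nil {
			log.Print(err)
			time.Sleep(5 * time.Second)
			continue
		}
		if newETag != etag && script != "" {
			out, runErr := exec.Command("/bin/bash", "-c", script).CombinedOutput()
			log.Printf("ran script: err=%v output=%s", runErr, out)
		}
		etag = newETag
		time.Sleep(time.Second)
	}
}

To trigger a run from your Go application, update that key through the instances.setMetadata API (or gcloud compute instances add-metadata) and let the daemon clear it when it finishes.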
Think of GCP as providing the virtual machine infrastructure such as compute, memory, disk and networking. What runs when the machine boots is between you and the machine image. I am hearing you say that you want to run a bash script within the VM. That is outside of the governance of GCP; GCP only affects the operation and existence of the environment. If what you want is to run a script within the VM programmatically, you will need to run some form of daemon inside the VM that can be signaled to run such a script. This could be a web server such as Flask or Express, it could be your SSH server, or it could be some other technology you choose.
The core thing I think you were looking for was some GCP API that, when called, would run a script within the Compute Engine. I'm going to say that there is no such API.
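As a concrete illustration of the web-server variant, here is a minimal Go sketch (the port, route and script path are placeholders, and you would want authentication before exposing it anywhere):

package main

import (
	"log"
	"net/http"
	"os/exec"
)

func main() {
	http.HandleFunc("/run", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost {
			http.Error(w, "POST only", http.StatusMethodNotAllowed)
			return
		}
		// Run the local script and return its combined output to the caller.
		out, err := exec.Command("/bin/bash", "/opt/scripts/task.sh").CombinedOutput()
		if err != nil {
			http.Error(w, err.Error()+"\n"+string(out), http.StatusInternalServerError)
			return
		}
		w.Write(out)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}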

Creating Scheduled Task from GPO without Password

I'm trying to create a scheduled task in a Group Policy that periodically runs a script that lives on the domain.
I understand that storing the password in GP is a no-no, so I'm avoiding that. However, it seems like there is no way to deploy a scheduled task that can run with access to the network.
I tried the "System" account; that failed with access denied. I also tried using the "Do not store password" setting with a named account, which also prevents network access.
The scripts live in \\domain\netlogon land and have full read access for authenticated users.
Is there any way to accomplish this without having to manually install the task on every server and provide a named service account?
This is a Windows 2012 server domain with about 20 servers.
I ended up getting the "System" account to work correctly.

Windows Azure Virtual Machine with Startup Task

Is there a way to add a (parametrized) Startup task to a Windows Azure Virtual Machine through the API? I need to execute a cmdlet after the machine has been started and the code depends on two parameters that will be different for each machine. I know this could be easily achieved for a Web/Worker role, but could it be done for Virtual Machines, as well?
For first-time runs of a VM, you can inject a startup task via CustomData. This works in both Linux and Windows VMs. You'll just need to properly base-64-encode your file (whether it's text or binary) based on the REST API docs.
CustomData is dropped into a file in a specific location, and you can have code that looks for this file, taking some type of startup action as appropriate:
Windows: %SYSTEMDRIVE%\AzureData\CustomData.bin
Linux: /var/lib/waagent/CustomData
Note: This will be added to the CLI as well (the pull request is already available - not sure if it's in the latest build).
EDIT: Yes, customdata is now part of the Azure CLI, as a parameter to azure vm create, so there is no need to mess with base-64 encoding on your own.
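If it helps, here is a rough Go sketch of a first-boot helper that looks for that file. The output filename and the idea of persisting the payload for a later step are placeholders, and whether the contents are still base64-encoded depends on the agent configuration:

package main

import (
	"encoding/base64"
	"log"
	"os"
	"runtime"
)

func main() {
	// Locations from the answer above.
	path := "/var/lib/waagent/CustomData"
	if runtime.GOOS == "windows" {
		path = os.Getenv("SYSTEMDRIVE") + `\AzureData\CustomData.bin`
	}
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err) // no custom data was supplied for this VM
	}
	// Depending on the agent settings the file may still be base64-encoded;
	// try to decode and fall back to the raw bytes.
	data := raw
	if decoded, decErr := base64.StdEncoding.DecodeString(string(raw)); decErr == nil {
		data = decoded
	}
	// Take whatever startup action is appropriate; here we just persist the
	// payload so a separate startup script or task can pick it up.
	if err := os.WriteFile("customdata-payload", data, 0o600); err != nil {
		log.Fatal(err)
	}
	log.Printf("custom data (%d bytes) written to customdata-payload", len(data))
}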
No, currently there is no such feature provided out of the box.
However, given that you will be dealing with a VM anyway, you can create an image of your own. You can register a "startup task" in the RunOnce registry key and sysprep the OS with this setting.
This way you will basically have a startup task which will be executed when your machine boots for the first time and will not be executed on subsequent VM restarts.
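For illustration, a small Go sketch of registering such an entry while preparing the image (the value name and command line are placeholders; it uses golang.org/x/sys/windows/registry and needs administrative rights):

package main

import (
	"log"

	"golang.org/x/sys/windows/registry"
)

func main() {
	k, err := registry.OpenKey(registry.LOCAL_MACHINE,
		`Software\Microsoft\Windows\CurrentVersion\RunOnce`, registry.SET_VALUE)
	if err != nil {
		log.Fatal(err)
	}
	defer k.Close()

	// RunOnce entries are run once by Windows and then removed.
	err = k.SetStringValue("MyStartupTask",
		`powershell.exe -ExecutionPolicy Bypass -File C:\setup\firstboot.ps1`)
	if err != nil {
		log.Fatal(err)
	}
}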
Getting parameters into the code on a VM is not as easy as for a Web/Worker Role. For anything you want, you have to query the Azure Management API directly. The only properties you can get from code running on an Azure VM are basically the normal OS properties - i.e. host name, host IP address. You don't even know your cloud service name, nor your virtual IP address (the latter can be discovered via services such as whatismyip.net or similar). So my approach would be to put the parameters into Azure Table Storage and use the machine name as the RowKey. That way I can store any VM-specific values keyed by VM name. My "startup" task would then query Table Storage, providing the host name as the RowKey (and some common pattern for the PartitionKey), so it gets all the required settings.
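A rough Go sketch of that lookup against the Table service REST API; the storage account, table name, PartitionKey and SAS token are placeholders:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	host, err := os.Hostname() // used as the RowKey, as described above
	if err != nil {
		log.Fatal(err)
	}
	sas := "?sv=..." // shared access signature for the table (placeholder)
	url := fmt.Sprintf(
		"https://mystorageacct.table.core.windows.net/VmSettings(PartitionKey='vmconfig',RowKey='%s')%s",
		host, sas)

	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Accept", "application/json;odata=nometadata")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON entity holding the VM-specific settings
}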
With IaaS Management Studio you can set a startup script that will run when your VM boots.
In summary, it activates remote PowerShell and runs your script remotely when it detects that the PowerShell port is open.
I am the developer of this tool, but I don't really get what you mean by "parametrized" - in other words, do you want your script to have access to the VM info?

Open a JDBC connection in a specific AS400 subsystem

I have a web service that calls some stored procedures on an AS400 via JTOpen.
What I would like is for the connections used to call the stored procedures to be opened in a specific subsystem with a specific user, instead of QUSRWRK/QUSER as they are now (the default).
I think I should be able to clone the QUSRWRK subsystem to make it start with a specific user, but what I cannot figure out is the mechanism to open the connection in that specific subsystem.
I guess there should be a property at the connection level to say subsystem=MySubsystem.
But unfortunately I haven't found that property.
Any hint would be appreciated.
Flavio
Let the system take care of which subsystem the database server job is started in.
You should just focus on the application (which is what IBM i excels in).
If need be, you can tweak subsystem parameters for QUSRWRK to improve performance by allocating memory, etc.
The system uses a pool of prestarted jobs as described in the FAQ: When I do WRKACTJOB, why is the host server job running under QUSER instead of the profile specified on the AS400 object?
To improve performance, the host server jobs are prestarted jobs running under QUSER. When the Toolbox connects to a host server job in order to perform an API call, run a command, etc, a request is sent from the Toolbox to an available prestarted job. This request includes the user profile specified on the AS400 object that represents the connection. The host server job receives the request and swaps to the specified user profile before it runs the request. The host server itself originally runs under the QUSER profile, so output from the WRKACTJOB command will show the job as being owned by QUSER. However, the job is in fact running under the profile specified on the request. To determine what profile is being used for any given host server job, you can do one of three things:
1. Display the job log for that job and find the message indicating which user profile is used as a result of the swap.
2. Work with the job and display job status attributes to view the current user profile.
3. Use Navigator for i to view all of the server jobs, which will list the current user of each job. You can also use Navigator for i to look at the server jobs being used by a particular user.

On Terminal Server, how does a service start a process in a user's session?

From a Windows service running on a Terminal Server (in global space), we would like to be able to start up a process running a Windows application in a specific user's Terminal Server session.
How does one go about doing this?
The scenario: the Windows service starts at boot time. After the user has logged into a Terminal Server user session, based on some criteria known only to the Windows service, the service wants to start a process in the user's session running a Windows application.
An example: we would like to display a 'Shutdown in 5 minutes' warning to the users. The Windows service would detect this condition and start up a process in each user session that runs the Windows app displaying the warning. And, yes, I know there are other ways of displaying a warning dialog; this is just the example - what we want to do is much more invasive.
You can use CreateProcessAsUser to do this - but it requires a bit of effort. I believe the following steps are the basic required procedure:
1. Get the user's session (WTSQuerySessionInformation).
2. Get a token for that user (WTSQueryUserToken).
3. Create a duplicate token for your use (DuplicateTokenEx).
4. Use the token to create an environment block (CreateEnvironmentBlock).
5. Launch the application with CreateProcessAsUser, using the block above.
You'll also want to make sure to clean up all of the appropriate handles, tokens, etc., after you've launched the process.
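A rough Go sketch of those steps, assuming a recent golang.org/x/sys/windows (which wraps WTSQueryUserToken, DuplicateTokenEx, CreateEnvironmentBlock and CreateProcessAsUser). Finding the target session ID via WTSEnumerateSessions is left out, and the session number and application path are placeholders; the calling service needs to run as LocalSystem:

package main

import (
	"log"
	"unsafe"

	"golang.org/x/sys/windows"
)

// Win32 constants used below (values from the SDK headers).
const (
	securityImpersonation    = 2     // SECURITY_IMPERSONATION_LEVEL
	tokenPrimary             = 1     // TOKEN_TYPE
	createUnicodeEnvironment = 0x400 // CREATE_UNICODE_ENVIRONMENT
)

func startInSession(sessionID uint32, cmdLine string) error {
	// 1. Get the logged-on user's token for the session.
	var userTok windows.Token
	if err := windows.WTSQueryUserToken(sessionID, &userTok); err != nil {
		return err
	}
	defer userTok.Close()

	// 2. Duplicate it into a primary token for CreateProcessAsUser.
	var primTok windows.Token
	if err := windows.DuplicateTokenEx(userTok, windows.MAXIMUM_ALLOWED, nil,
		securityImpersonation, tokenPrimary, &primTok); err != nil {
		return err
	}
	defer primTok.Close()

	// 3. Build the user's environment block.
	var env *uint16
	if err := windows.CreateEnvironmentBlock(&env, primTok, false); err != nil {
		return err
	}
	defer windows.DestroyEnvironmentBlock(env)

	// 4. Launch on the interactive desktop of that session.
	cmd, err := windows.UTF16PtrFromString(cmdLine)
	if err != nil {
		return err
	}
	desktop, _ := windows.UTF16PtrFromString(`winsta0\default`)
	si := windows.StartupInfo{Desktop: desktop}
	si.Cb = uint32(unsafe.Sizeof(si))
	var pi windows.ProcessInformation
	if err := windows.CreateProcessAsUser(primTok, nil, cmd, nil, nil, false,
		createUnicodeEnvironment, env, nil, &si, &pi); err != nil {
		return err
	}
	windows.CloseHandle(pi.Thread)
	windows.CloseHandle(pi.Process)
	return nil
}

func main() {
	// Example: show the warning app in session 1 (placeholder session and path).
	if err := startInSession(1, `C:\Tools\ShutdownWarning.exe`); err != nil {
		log.Fatal(err)
	}
}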
Really late reply but maybe somebody will find this helpful.
You can use PsExec to launch an application on a remote (or local) server inside a specified session by using the following command:
psexec \\COMPUTER_NAME -i SESSION_ID APPLICATION_NAME
Where SESSION_ID indicates the session id in which to launch the application.
You will need to know what sessions are active on the server and which session id maps to which user login. The following thread provides a nice code sample for this exact problem: How do you retrieve a list of logged-in/connected users in .NET?
Late reply, but in the answer above the DuplicateTokenEx step is not necessary, since WTSQueryUserToken already returns a primary token.

Resources