Windows Azure Virtual Machine with Startup Task - windows

Is there a way to add a (parametrized) Startup task to a Windows Azure Virtual Machine through the API? I need to execute a cmdlet after the machine has been started and the code depends on two parameters that will be different for each machine. I know this could be easily achieved for a Web/Worker role, but could it be done for Virtual Machines, as well?

For first-time runs of a VM, you can inject a startup task via CustomData. This works in both Linux and Windows VMs. You'll just need to properly base-64-encode your file (whether it's text or binary) based on the REST API docs.
CustomData is dropped into a file in a specific location, and you can have code that looks for this file, taking some type of startup action as appropriate:
Windows: %SYSTEMDRIVE%\AzureData\CustomData.bin
Linux: /var/lib/waagent/CustomData
Note: This will be added to the CLI as well (the pull request is already available; not sure if it's in the latest build).
EDIT: Yes, custom data is now part of the Azure CLI, as a parameter to azure vm create, so there is no need to mess with base-64 encoding on your own.
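For illustration, here is a minimal Go sketch (not the official agent behavior) of code that looks for the dropped custom-data file on Windows at startup and hands its contents to a cmdlet. It assumes the custom data is a single line with two whitespace-separated values and that the cmdlet is called Do-Something; both are placeholders for your own scenario.

// startuptask.go - sketch: read the CustomData file the Azure agent writes on a
// Windows VM and pass its two values to a PowerShell cmdlet.
// The one-line "value1 value2" format and the cmdlet name Do-Something are assumptions.
package main

import (
    "log"
    "os"
    "os/exec"
    "strings"
)

func main() {
    data, err := os.ReadFile(`C:\AzureData\CustomData.bin`)
    if err != nil {
        log.Fatalf("no custom data found: %v", err)
    }
    parts := strings.Fields(string(data))
    if len(parts) < 2 {
        log.Fatalf("expected two parameters, got %q", string(data))
    }
    cmd := exec.Command("powershell.exe", "-NoProfile", "-Command",
        "Do-Something -First "+parts[0]+" -Second "+parts[1])
    if out, err := cmd.CombinedOutput(); err != nil {
        log.Fatalf("cmdlet failed: %v\n%s", err, out)
    }
}

You could register such a helper as a scheduled task or service in the base image so it runs on first boot.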

No, currently there is no such feature provided out of the box.
However, given that you will be dealing with a VM anyway, you can create an image of your own. You can register a "startup task" in the RunOnce registry key and then sysprep the OS with that setting.
This way you will basically have a startup task which is executed when your machine boots for the first time and is not executed on subsequent VM restarts.
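If you prefer to script that RunOnce registration instead of setting it by hand before sysprepping, a rough sketch using the golang.org/x/sys/windows/registry package might look like the following; the value name MyStartupTask and the command line are placeholders.

// register_runonce.go - sketch: add a RunOnce value so a command executes once
// on the next boot. Value name and command line are placeholders.
//go:build windows

package main

import (
    "log"

    "golang.org/x/sys/windows/registry"
)

func main() {
    key, err := registry.OpenKey(registry.LOCAL_MACHINE,
        `SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce`, registry.SET_VALUE)
    if err != nil {
        log.Fatal(err)
    }
    defer key.Close()

    cmd := `powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\firstboot.ps1`
    if err := key.SetStringValue("MyStartupTask", cmd); err != nil {
        log.Fatal(err)
    }
}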
Getting parameters into code running on a VM is not as easy as it is for a Web/Worker Role. For anything beyond the basics you have to query the Azure Management API directly. The only properties you can get from code running on an Azure VM are essentially the normal OS properties - host name, host IP address, and so on. You don't even know your cloud service name, nor your virtual IP address (that one can be discovered via services such as whatismyip.net or similar).
So my approach would be to put the parameters into Azure Table Storage and use the machine name as the RowKey. That way I can store any VM-specific values keyed by VM name, and my "startup" task queries the table, providing the host name as the RowKey (and some common pattern as the PartitionKey), so it gets all the required settings.
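As a rough sketch of that lookup (not a complete client), the following asks the Table service over REST for the single entity keyed by this machine's host name. The storage account mystorageacct, the table Settings, the PartitionKey vm-config and the SAS token read from the TABLE_SAS environment variable are all assumptions to replace with your own values.

// read_settings.go - sketch: fetch this VM's settings entity from Azure Table
// Storage, using the host name as RowKey. Account, table, PartitionKey and the
// SAS token are placeholders.
package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "os"
)

func main() {
    host, _ := os.Hostname()
    sas := os.Getenv("TABLE_SAS") // SAS query string issued for the Settings table

    url := fmt.Sprintf(
        "https://mystorageacct.table.core.windows.net/Settings(PartitionKey='vm-config',RowKey='%s')?%s",
        host, sas)

    req, _ := http.NewRequest("GET", url, nil)
    req.Header.Set("Accept", "application/json;odata=nometadata")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body)) // JSON entity holding the per-VM settings
}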

With IaaS Management Studio you can set a startup script that will run when your VM boots.
In summary, it activates remote PowerShell and runs your script remotely when it detects that the PowerShell port is open.
I am the developer of this tool, but I don't really get what you mean by "parametrized" - in other words, you want your script to have access to the VM info?

Related

Start services correctly and safely on Windows startup

I would like to start a Windows service (as a daemon or scheduled task) on one of the clients in a small Active Directory domain, for testing purposes only. The Windows service is Docker Desktop. Ideally, the service should start before a user logs in.
I am wondering what is the recommended approach for this procedure to start services properly and securely.
My approach would be the following: I would create a local user and then assign the service to that user using Task Scheduler. Would that be broadly the recommended approach or is there some kind of system user to entrust the task to?

Running bash script on GCP VM instance programmatically

I've read multiple posts on running scripts on GCP VMs but unfortunately could not find an answer that would satisfy my needs.
I have a Go application and I'm looking for a way to run a bash script on a VM instance programmatically.
I'm using the Google Cloud Golang SDK, which allows me to fetch VM instance info. Unfortunately, the SDK does not contain functionality that allows running a bash script on a specific instance (unlike the Azure SDK, for example).
Options I've found:
Google Cloud Compute SDK has an option to set a startup script that will run every time an instance is restarted.
Add an instance-level public SSH key, establish an SSH connection, and run a script using a Go SSH client.
Problems:
Obviously, a startup script will require an instance reboot, and this is not possible in my use case.
SSH might also be problematic in case the instance is not running an SSH daemon or the SSH port is not open. Also, the SSH daemon config does not permit root login by default (PermitRootLogin might be false), so the script might run as a non-privileged user, making this option unsuitable as well.
I should probably note that I am not authorised to change configuration of those VMs (for example change ssh daemon conf to permit root login), I can just use a token based authentication to access them, preferably through SDK, though other options are also possible as long as I am not exposing the instance to additional risks.
What options do I have? Is this even doable? Am I missing something?
Thanks!
As said by Kolban, there is no such API to trigger a bash script inside the VM from the outside. The best solution is to deploy a web server (a REST API) on the VM that calls the bash script, and to expose it (externally or internally).
But you can also cheat. You can create a daemon on your VM that you launch with a startup script and that watches a custom metadata value, say by checking it every second.
When the metadata is updated, the daemon can perform actions. You can imagine that the metadata contains the script to run along with its parameters. At the end of the run, the metadata is cleared by the daemon.
So now, to run your bash script, call the setMetadata API. It's not out of the box, but you can get something similar to what you expected.
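A minimal Go sketch of that daemon follows: it polls the metadata server for a custom attribute and runs its value as a bash script. The attribute name run-script is an assumption, and clearing the key after a run is only noted in a comment.

// metadata_daemon.go - sketch: poll a custom instance metadata key and run its
// value as a bash script. The key name "run-script" is a placeholder.
package main

import (
    "io"
    "log"
    "net/http"
    "os/exec"
    "time"
)

const attrURL = "http://metadata.google.internal/computeMetadata/v1/instance/attributes/run-script"

func fetch() (string, error) {
    req, _ := http.NewRequest("GET", attrURL, nil)
    req.Header.Set("Metadata-Flavor", "Google") // required by the metadata server
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return "", nil // attribute absent or cleared: nothing to do
    }
    b, err := io.ReadAll(resp.Body)
    return string(b), err
}

func main() {
    for {
        if script, err := fetch(); err == nil && script != "" {
            out, runErr := exec.Command("bash", "-c", script).CombinedOutput()
            log.Printf("ran script, err=%v, output:\n%s", runErr, out)
            // A real daemon would now clear the attribute (instances.setMetadata)
            // so the same script is not run again on the next poll.
        }
        time.Sleep(time.Second) // check roughly every second, as suggested above
    }
}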
Think of GCP as providing the virtual machine infrastructure such as compute, memory, disk and networking. What runs when the machine boots is between you and the machine image. I am hearing you say that you want to run a bash script within the VM. That is outside of the governance of GCP; GCP only affects the operation and existence of the environment. If what you want is to run a script within the VM programmatically, you will need to run some form of daemon inside the VM that can be signaled to run such a script. This could be a web server such as Flask or Express, it could be your SSH server, or it could be some other technology you choose.
The core thing I think you were looking for was some GCP API that, when called, would run a script within the Compute Engine. I'm going to say that there is no such API.
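For the web-server flavor of that in-VM daemon, a minimal Go sketch follows; the /run path, port 8080 and the fixed script location /opt/scripts/task.sh are assumptions, and any real version needs authentication before being exposed.

// run_server.go - sketch: tiny HTTP daemon inside the VM that runs a fixed bash
// script on demand. Path, port and script location are placeholders.
package main

import (
    "log"
    "net/http"
    "os/exec"
)

func main() {
    http.HandleFunc("/run", func(w http.ResponseWriter, r *http.Request) {
        out, err := exec.Command("bash", "/opt/scripts/task.sh").CombinedOutput()
        if err != nil {
            http.Error(w, err.Error()+"\n"+string(out), http.StatusInternalServerError)
            return
        }
        w.Write(out)
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}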

How do I give a service running as SYSTEM shared directory network access over EC2 hosts running Windows Server 2012?

The scenario is as follows:
I have TeamCity set up to use AWS EC2 hosts running Windows Server 2012 R2 as build agents. In this configuration, the TeamCity agent service is running as SYSTEM. I am trying to implement FastBuild as our new compilation process. In order to use the distributed compilation functionality of FastBuild, the build agent host needs to have access to a shared network folder. Unfortunately, I cannot seem to give this kind of access from one machine to another.
To help further the explanation, I'll use named examples. The networked folder, C:\Shared-Folder, lives on a host named Central-Host. The build agent lives on Builder-Host. Everything is running Windows Server 2012 R2 on EC2 hosts that are fully network permissive to each other via AWS security groups. What I need is to share a directory from Central-Host so that Builder-Host can fully access it via a directory structure like this:
\\Central-Host\Shared-Folder
By RDPing into both hosts using the default Administrator account, I can very easily set up the network sharing and browse (while on Builder-Host) to the \\Central-Host\Shared-Folder location. I can also open up the command line and run:
type NUL > \\Central-Host\Shared-Folder\Empty.txt
with the result of an empty text file being created at that networked location.
The problem arises from the SYSTEM account. When I grab PSTOOLS and use the command:
PSEXEC -i -s cmd.exe
I can test commands that will be given by TeamCity. Again, it is a service being run as SYSTEM which, I need to emphasize, cannot be changed to a normal User due to other issues we have when using TeamCity agents under the User account type.
After much searching I have discovered how to set up Active Directory services so that I can add Users and Computers from the domain but after doing so, I still face access denied errors. I am probably missing something important and I hope someone here can help. I believe this problem will be considered "solved" when I can successfully run the "type NUL" command shown above.
This is not an answer for the permissions issue, but rather a way to avoid it. (Wanted to add this as a comment, but StackOverflow won't let me - weird.)
The shared network drive is used only for the remote worker discovery. If you have a fixed list of workers, instead of using the worker discovery, you can specify them explicitly in your config file as follows:
Settings
{
    .Workers =
    {
        'hostname1' // specify hostname
        'hostname2'
        '192.168.0.10' // or ip
    }
    ... // the other stuff that goes here
}
This functionality is not documented, as to date all users have wanted the automatic worker discovery. It is fine to use, however, and if it is indeed useful, it can be elevated to a supported feature with just a documentation update.

AWS EC2: Instance from my own Windows AMI is not reachable

I am a Windows user and wanted to use a spot instance based on my own EBS-backed Windows AMI. For this I have followed these steps:
I had my own on-demand instance with specific settings
Using the AWS console, I used the "Create Image EBS" option to create an EBS-based Windows AMI. It worked and the AMI was created successfully
Then, using this new AMI, I launched a medium spot instance; it was created fine and is now running with status checks passed.
After waiting an hour or more, I tried to connect to it using the Windows 7 RDC client, but it is not reachable; the client gives its standard error that the computer is either not reachable or not powered on.
Trying to achieve this goal, I have created and deleted many volumes, instances, and snapshots, but I am still unsuccessful. Does anybody have a solution to this problem?
Thanks
Basically what's happening is that the existing administrator password (and other user authentication information) for Windows is only valid in the original instance, and can't be used on the new "hardware" that you're launching the AMI on (even though it's all virtualized).
This is why RDP connections will fail to newly launched instances, as will any attempts to retrieve the administrator password. Unfortunately you have no choice but to shut down the new instances you've been trying to connect to because you won't be able to do anything with them.
For various reasons the Windows administrator password cannot be preserved when running a snapshot of the operating system on different hardware (even virtualized hardware) - this is a big part of the reason why technologies like Active Directory exist, so that user authentication information is portable between machines and networks.
It sounds like you did all the steps necessary to get this working except for one - you never took any steps to cause a new password to be assigned to your newly-launched instances based on the original AMI you created.
To fix this, BEFORE turning your instance into a custom AMI that can be used to launch new instances, you need to (in the original instance) run the Ec2ConfigService Settings tool (found in the start menu when remoted into the original instance using RDP), and enable the option to generate a new password on next reboot. Save this setting change.
Then when you do create an AMI from the original instance, and use that AMI to launch new instances, they will each boot up to your custom Windows image but will choose their own random administrator password.
At this point you can go to your ec2 dashboard and retrieve the newly-generated password for the new instance based on the old AMI, and you'll also be able to download the RDP file used to connect to it.
One additional note is that Amazon warns that it can take upwards of 30 minutes for the retrieval of an administrator password after launching a new instance, however in my previous experience I've never had to wait more than a few minutes to be able to get it.
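If you would rather fetch that password programmatically than through the console, a sketch with the AWS SDK for Go follows; the region, instance ID and key pair file are placeholders, and since the returned data is encrypted with the key pair's public key, it is decrypted locally with your private key.

// get_password.go - sketch: retrieve and decrypt the Windows administrator
// password of a newly launched instance. Region, instance ID and key file are
// placeholders.
package main

import (
    "crypto/rsa"
    "crypto/x509"
    "encoding/base64"
    "encoding/pem"
    "fmt"
    "log"
    "os"
    "strings"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
    svc := ec2.New(sess)

    out, err := svc.GetPasswordData(&ec2.GetPasswordDataInput{
        InstanceId: aws.String("i-0123456789abcdef0"),
    })
    if err != nil {
        log.Fatal(err)
    }

    data := strings.TrimSpace(aws.StringValue(out.PasswordData))
    if data == "" {
        log.Fatal("password not generated yet; try again in a few minutes")
    }
    encrypted, err := base64.StdEncoding.DecodeString(data)
    if err != nil {
        log.Fatal(err)
    }

    pemBytes, err := os.ReadFile("my-keypair.pem") // private key of the launch key pair
    if err != nil {
        log.Fatal(err)
    }
    block, _ := pem.Decode(pemBytes)
    if block == nil {
        log.Fatal("could not decode PEM key")
    }
    key, err := x509.ParsePKCS1PrivateKey(block.Bytes)
    if err != nil {
        log.Fatal(err)
    }

    pw, err := rsa.DecryptPKCS1v15(nil, key, encrypted)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(pw))
}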
Your problem is most likely that the security group you used to launch the AMI does not have RDP (TCP port #3389) enabled.
When you launch the windows AMI for the first time, AWS will populate the quicklaunch with this port enabled. However, when you launch the subsequent AMI, you will have to ensure that this port is open for your security group.
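If you prefer to open that port programmatically rather than in the console, a sketch with the AWS SDK for Go could look like this; the security group ID and the source CIDR are placeholders, and you would normally narrow the CIDR to your own address range.

// open_rdp.go - sketch: add an inbound RDP (TCP 3389) rule to a security group.
// Group ID and source CIDR are placeholders.
package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
    svc := ec2.New(sess)

    _, err := svc.AuthorizeSecurityGroupIngress(&ec2.AuthorizeSecurityGroupIngressInput{
        GroupId:    aws.String("sg-0123456789abcdef0"),
        IpProtocol: aws.String("tcp"),
        FromPort:   aws.Int64(3389),
        ToPort:     aws.Int64(3389),
        CidrIp:     aws.String("203.0.113.0/24"), // restrict to your own range
    })
    if err != nil {
        log.Fatal(err)
    }
    log.Println("RDP rule added")
}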

Windows - Private hosts file for a certain environment

I have an application running on a dev server and connecting to dev-db, which hosts an Oracle instance.
Now I'm deploying the app on a prod/prod-db machine.
Since the dev-db URL is hardcoded inside the Java code, the just-copied binaries still point to dev-db. As a quick workaround I added a line to the Windows hosts file on prod so that dev-db now points to the prod-db IP address. It works, but I'm not very satisfied with this global-scope solution.
I was wondering if there is a way to make a hosts file "private" to a certain environment, i.e. only valid in the scope of my running application.
No, there's no way to do this, and it's a bad approach anyway.
You should instead fix the real problem, which is the hard-coding of the address inside your java code. Put such things in a properties file, and use a different properties file for production.
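For example, a pair of tiny properties files per environment (the file names and the JDBC URL format are illustrative) that the application loads at startup with java.util.Properties or a framework equivalent, instead of a compiled-in constant:

# db-dev.properties
db.url=jdbc:oracle:thin:@dev-db:1521/ORCL

# db-prod.properties
db.url=jdbc:oracle:thin:@prod-db:1521/ORCL

Deploying then just means shipping the right file (or pointing to it with a system property) rather than editing the hosts file on every machine.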
