Issues while creating a custom Windows image using Packer

I'm trying to create a custom Windows image using the Packer scripts GitHub provides at https://github.com/actions/runner-images.
The scripts in this repo target the Azure platform, but I want to build on GCP.
I have tried making some changes to the script, but I end up with an image on which startup scripts don't run. I raised an issue for this (https://github.com/actions/runner-images/issues/6565) and GitHub replied with "We only support azure".
Is there any alternative to this? I just want to install a few tools like Java, Maven, etc. on top of a windows-2019 image.
I've tried using this script: https://github.com/actions/runner-images/blob/main/images/win/windows2019.json. But the resulting image has a few issues, and the startup script doesn't run when I create a new GCP VM instance from it.
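For comparison, a Packer build against GCP normally uses the googlecompute builder rather than the Azure one. The sketch below is only illustrative, not a fix for the runner-images scripts: the project ID, zone, username, and Chocolatey provisioner line are placeholder assumptions.

```json
{
  "builders": [
    {
      "type": "googlecompute",
      "project_id": "my-gcp-project",
      "source_image_family": "windows-2019",
      "zone": "us-central1-a",
      "disk_size": 50,
      "machine_type": "n1-standard-4",
      "communicator": "winrm",
      "winrm_username": "packer_user",
      "winrm_use_ssl": true,
      "winrm_insecure": true,
      "image_name": "windows-2019-custom-{{timestamp}}",
      "metadata": {
        "windows-startup-script-cmd": "winrm quickconfig -quiet & net user /add packer_user & net localgroup administrators packer_user /add & winrm set winrm/config/service/auth @{Basic=\"true\"}"
      }
    }
  ],
  "provisioners": [
    {
      "type": "powershell",
      "inline": [
        "choco install -y openjdk maven"
      ]
    }
  ]
}
```

One thing worth checking for the "startup scripts don't run" symptom: GCE Windows images are normally generalized with GCESysprep before being captured, and skipping that step is a common reason startup scripts don't fire on instances created from a custom image.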

Related

Dockerfile vs create image from container

Is there some difference between creating an image using a Dockerfile vs. creating an image from a container? (e.g. run a container from the same base as the Dockerfile, transfer installers to the container, run them from the command line, and then create an image from the container).
At least I found out that installing the VC Runtime from a Windows base Docker container does not work :(
If you create an image using a Dockerfile, it's all but trivial to update the image by checking it out from source control, updating the tag on a base image or docker pulling a newer version of it, and re-running docker build.
If you create an image by running docker commit, and you discover in a year that there's a critical security vulnerability in the base Linux distribution and you need to stop using it immediately, you need to remember what it was you did a year ago to build the image and exactly what steps you did to repeat them, and you'd better hope you do them exactly the same way again. Oh, if only you had written down in a text file what base image you started FROM, what files you had to COPY in, and then what commands you need to RUN to set up the application in the image...
In short, writing a Dockerfile, committing it to source control, and running docker build is almost always vastly better practice than running docker commit. You can set up a continuous-integration system to rebuild the image whenever your source code changes; when there is that security vulnerability it's trivial to bump the FROM line to a newer base image and rebuild.
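To make the contrast concrete, here is a minimal sketch of the Dockerfile approach; the installer path and flags are placeholders for whatever you would otherwise run by hand in the container:

```dockerfile
# The base image is pinned in one place; bumping this tag and
# re-running `docker build` is the entire upgrade procedure.
FROM mcr.microsoft.com/windows/servercore:ltsc2019

# Installers are copied in from the build context...
COPY installers/ C:/installers/

# ...and run non-interactively, so the setup is repeatable.
RUN C:\installers\setup.exe /quiet /norestart
```

Every step that a `docker commit` workflow would perform interactively is written down here, which is exactly what makes the rebuild-in-a-year scenario trivial.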

How do you update an existing VMSS?

I'm lost as to how to work with an existing VMSS deployment, which I performed using a template in PowerShell. For example, I now want all VMs to have an additional extension installed, and this was not part of the original template. How do I add this extension to all machines?
You can simply deploy the template again. It will only deploy the difference (so make sure you use the same username/password etc!)
minor edit: if you have upgradePolicy.mode set to "Manual", you will also have to call "Update-AzureRmVmssInstance" on each VM you want updated; if it's "Automatic", the change goes out to all VMs automatically in parallel; if it's "Rolling" (preview here: https://github.com/Azure/vm-scale-sets/tree/master/preview/upgrade), it rolls out in batches.
You can use the Add-AzureRmVmssExtension PowerShell cmdlet to add an extension. Install the latest version of Azure PowerShell if you have not already.
Or 'az vmss extension set' if using CLI, for example in the Azure Cloud Shell.
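For example, with the CLI route the two steps might look like this (a sketch only; the resource group, scale set name, extension choice, and settings are placeholders):

```shell
# Add a Custom Script Extension to the scale set model
az vmss extension set \
  --resource-group myResourceGroup \
  --vmss-name myScaleSet \
  --name CustomScriptExtension \
  --publisher Microsoft.Compute \
  --settings '{"commandToExecute": "powershell -File setup.ps1"}'

# With upgradePolicy.mode = Manual, push the updated model to all instances
az vmss update-instances \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --instance-ids "*"
```

The second command is the CLI equivalent of the Update-AzureRmVmssInstance calls mentioned above and is only needed when the upgrade policy is Manual.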

How can I pull the latest Cloud Code?

I need to use the command line tool provided by Parse.com to get the latest Cloud Code from my application. How can I do that? I'm working with a team and cannot overwrite the existing Cloud Code.
You can now download deployed Cloud Code through the CLI.
Through the command prompt (on Windows) type:
parse new
Parse will then ask you to provide your user credentials. Once provided, the command prompt will ask, "Would you like to create a new app, or add Cloud Code to an existing app?" Choose (e)xisting, and you will be provided with a list of apps you currently have access to; pick yours and the rest is cake.
Make sure you update the Parse CLI in order to get this to work, using:
parse update
You can use parse download command:
parse download -l [location]
Works great in our team. For more information about the command use
parse download --help.
Note: if no location is provided, code will be downloaded to a temporary folder.
UPDATE: You can now download your deployed code easily with the Parse CLI:
parse download
HISTORICAL: Previously (as of May 11, 2015) there were only 2 ways to get cloud code deployed by someone else on your team:
You get a copy directly from your teammate
You go to the Parse.com Core dashboard, tap on Cloud Code (below the Data section with all the classes), then click on each file one at a time and copy/paste the window contents into a file with the appropriate name.
Neither of these are ideal solutions.
Ideally, your team would use a two-part solution like this:
A version control system (like Github or similar) that keeps track of your most recent version
A dev mirror of your Parse.com app that gives you a sandbox for testing changes to the code

AWS CloudFormation and Windows Server 2008 R2 for Bootstrap file downloads

AWS released a new AMI recently which has CloudFormation tools installed by default on their Windows Server 2008 R2. The AMI itself can be found here :
https://aws.amazon.com/amis/microsoft-windows-server-2008-r2-base-cloudformation
When using this AMI directly within a CloudFormation template and launching the stack, I am able to launch my stack easily and the instance downloads my files located in S3 without any problem during boot up, all the folders created by cfn-init command can also be seen as expected.
However, if I modify the AMI to customize it (just enabling IIS) and recreate a new AMI and use this AMI within the template, the files don't get downloaded, and the folders that are supposed to be created by the cfn-init command don't appear either.
Any suggestions? Am I missing something?
Most probable cause of this is that the custom AMI was created without using EC2Config Service's Bundle tab.
CloudFormation support on Windows depends on the EC2Config service's ability to run commands specified in user data on first boot. This functionality is automatically disabled after the first boot so that subsequent boots do not re-run the same commands.
If the custom AMI is created using EC2Config's Bundle tab, the resulting AMI keeps the user-data command execution functionality enabled. Hence it is necessary (and always recommended) to create the custom AMI using EC2Config's Bundle tab.
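If re-bundling is not an option, the same setting can be flipped by hand before capturing the AMI. On the Windows AMIs the EC2Config plugin states live in a settings file under the Ec2ConfigService folder; a fragment like the following re-enables user-data handling (the exact path and plugin name are as I recall them, so verify on your instance):

```xml
<!-- C:\Program Files\Amazon\Ec2ConfigService\Settings\Config.xml -->
<Plugin>
  <Name>Ec2HandleUserData</Name>
  <State>Enabled</State>
</Plugin>
```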
Regards,
Shon

What is the Cloud-Init equivalent for Windows?

It seems that the stock bootstrapping process is a bit lacking on Windows.
Linux has cloud-init which will install packages, store files, and run a bash script from user data.
Windows has ec2config, but there is currently no support for running a cmd or powershell script when the system is "ready", meaning that all the initial reboots are completed.
There seem to be third party options. For example RightScale has the RightLink agent which performs this function.
Are there open source options available?
Are there any plans to add this feature to Ec2Config?
Do I have to build this my self?
Am I missing something?
It appears that EC2Config on the Amazon-provided AMIs now supports "User Data Scripts" as of the 11-April-2012 updates.
The documentation has not yet been updated, so it's hard to tell if it supports PowerShell or just cmd.exe scripts. I've posted a question on the AWS forums to try and get some more detail, and will update here when I learn more.
UPDATE: It looks like cmd.exe batch syntax is supported, which can in turn invoke PowerShell. There's a new version of the EC2Config documentation included on the AMI. Quoting from it:
[EC2Config] will read in the user data specified for the instance and then check if it contain the tags <script> and </script>. If it finds both then it will take the information between those two tags and save it to a batch file located in the Settings folder of this application. It will then execute the batch file during the start of an instance.
The batch file will only be created and executed on the first launch of an instance after a sysprep. If you want to have the batch file created and executed again set the Ec2HandleUserdata plugin state to Enabled.
UPDATE 2: My interpretation is confirmed by Shon from the AWS Team
UPDATE 3: And as of the May-2012 AMIs, PowerShell is supported using the <powershell/> tag.
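Putting the two tag styles together, user data along these lines runs at first boot after sysprep (the commands themselves are just illustrative examples):

```xml
<script>
  mkdir C:\bootstrap
</script>
<powershell>
  # PowerShell user data is supported on the May-2012 and later AMIs
  Add-WindowsFeature Web-Server
</powershell>
```

As noted in the quoted documentation, this only happens on the first launch after a sysprep unless the Ec2HandleUserdata plugin is set back to Enabled.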
Cloudbase.it has open-sourced a Python Windows service they call cloudbase-init, which follows the configdrive and HTTP datasources.
http://www.cloudbase.it/cloud-init-for-windows-instances/
GitHub here:
https://github.com/stackforge/cloudbase-init/
I had to build one myself; however, it was very easy. I just made a service that reads the user data when it starts up and executes the file as a PowerShell script.
To get around the issue of not knowing when to start the service, I made the service start type "delayed-auto", and that seemed to fix the problem. Depending on what you need to do to the system, that may or may not work for you; in my case it was all I had to do.
I added a new codeplex project that already has this tool built for windows. Looking forward to some feedback.
http://cloudinitnet.codeplex.com/
We had to build it ourselves; we did it with a custom service and built our own AMIs. There's no provision currently within EC2Config to do it.
Even better, there is no easy way to determine when the instance is "ready". We had to do it by tailing the logfile of EC2Config.
I've recently found nssm (at nssm.cc), which easily wraps a simple batch file (or pretty much anything else) as a service. You can then use sc config service1 depend= service0 to force the batch file to run at a particular point in the service initialization sequence. I am using it between EC2Config and SQL Express to create a folder on D:, for instance. You'll have to use the Services tool to make it run as Network Service and change the AppExit property to Ignore using regedit, but it works once you get it all in place.
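The sequence described above, sketched as commands (the service name, script path, and dependency name are examples; check the actual EC2Config service name on your AMI):

```bat
:: Wrap the batch file as a Windows service
nssm install BootstrapSvc C:\scripts\bootstrap.cmd

:: Make it start only after EC2Config has finished initializing
sc config BootstrapSvc depend= Ec2Config
```

The depend= trick is what lets you slot the script at a precise point in the boot sequence instead of guessing when the instance is "ready".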
