I've read multiple posts on running scripts on GCP VMs but unfortunately could not find an answer that would satisfy my needs.
I have a Go application and I'm looking for a way to run a bash script on a VM instance programmatically.
I'm using the Google Cloud Golang SDK, which allows me to fetch VM instance info. Unfortunately, the SDK does not provide functionality for running a bash script on a specific instance (unlike the Azure SDK, for example).
Options I've found:
The Google Cloud Compute SDK has an option to set a startup script that will run every time an instance is restarted.
Add an instance-level public SSH key, establish an SSH connection, and run the script using a Go SSH client.
Problems:
Obviously, the startup script will require an instance reboot, and that is not possible in my use case.
SSH might also be problematic, in case the instance is not running an SSH daemon or the SSH port is not open. Also, the SSH daemon config does not permit root login by default (PermitRootLogin may be set to no), so the script might run as a non-privileged user, making this option unsuitable as well.
I should probably note that I am not authorised to change the configuration of those VMs (for example, to change the SSH daemon config to permit root login). I can only use token-based authentication to access them, preferably through the SDK, though other options are also possible as long as I am not exposing the instances to additional risk.
What options do I have? Is this even doable? Am I missing something?
Thanks!
As Kolban said, there is no API to trigger a bash script inside the VM from the outside. The best solution is to deploy a web server (a REST API) on the VM that calls the bash script, and to expose it (externally or internally).
But you can also cheat. You can create a daemon on your VM, launched with a startup script, that watches a custom metadata key, checking it every second, say.
When the metadata is updated, the daemon can perform actions. You can imagine the metadata containing the script to run and its parameters. At the end of the run, the daemon cleans up the metadata.
So now, to run your bash script, call the setMetadata API. It's not out of the box, but you get something close to what you expected.
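If you go this route, the setMetadata call itself is straightforward from Go. Below is a minimal sketch using google.golang.org/api/compute/v1; the project, zone, instance names and the "run-script" key are all assumptions for illustration:

// A minimal sketch, assuming the daemon approach above: push a script into a
// custom metadata key ("run-script" is an assumed name). SetMetadata replaces
// the whole metadata collection, so fetch the instance first to reuse its
// existing items and fingerprint.
package main

import (
    "context"
    "fmt"
    "log"

    compute "google.golang.org/api/compute/v1"
)

func main() {
    ctx := context.Background()
    svc, err := compute.NewService(ctx) // uses Application Default Credentials
    if err != nil {
        log.Fatal(err)
    }

    project, zone, instance := "my-project", "us-central1-a", "my-instance"

    inst, err := svc.Instances.Get(project, zone, instance).Context(ctx).Do()
    if err != nil {
        log.Fatal(err)
    }

    script := "#!/bin/bash\necho hello > /tmp/out"
    md := inst.Metadata // keeps the current fingerprint and existing keys
    md.Items = append(md.Items, &compute.MetadataItems{
        Key:   "run-script", // assumed key the on-VM daemon polls
        Value: &script,
    })

    op, err := svc.Instances.SetMetadata(project, zone, instance, md).Context(ctx).Do()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("metadata update started:", op.Name)
}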
Think of GCP as providing the virtual machine infrastructure such as compute, memory, disk and networking. What runs when the machine boots is between you and the machine image. I am hearing you say that you want to run a bash script within the VM. That is outside the governance of GCP; GCP only affects the operation and existence of the environment. If you want to run a script within the VM programmatically, you will need to run some form of daemon inside the VM that can be signaled to run such a script. This could be a web server such as Flask or Express, it could be your SSH server, or it could be some other technology you choose.
The core thing I think you were looking for was some GCP API that, when called, would run a script within the Compute Engine. I'm going to say that there is no such API.
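To make the daemon idea concrete, here is a minimal sketch of such a web server in Go; the endpoint path and script location are assumptions, and you would want authentication in front of it before exposing anything like this:

// A tiny HTTP daemon that runs a fixed, pre-installed script on request.
package main

import (
    "log"
    "net/http"
    "os/exec"
)

func main() {
    http.HandleFunc("/run", func(w http.ResponseWriter, r *http.Request) {
        // Run a fixed script; never execute anything taken from the request.
        out, err := exec.Command("/bin/bash", "/opt/scripts/task.sh").CombinedOutput()
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.Write(out)
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}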
I have a Spring Boot app which loads a YAML file at startup containing an encryption key that it needs to decrypt properties it receives from Spring config.
Said YAML file is mounted as a k8s secret file at /etc/config/springconfig.yaml.
While my Spring Boot app is running, I can still shell in and view the YAML file with "docker exec -it 123456 sh". How can I prevent anyone from being able to view the encryption key?
You need to restrict access to the Docker daemon. If you are running a Kubernetes cluster, access to the nodes where one could execute docker exec ... should be heavily restricted.
You can delete that file once your process has fully started, provided your app doesn't need to read it again.
OR,
You can set those properties via --env-file and have your app read them from the environment. But if someone can still log in to the container, they can read environment variables too.
OR,
Set those properties on the JVM rather than in the system environment, using -D flags. Spring can read properties from the JVM system properties too.
In general, the problem is even worse than simple access to the Docker daemon. Even if you prohibit SSH to worker nodes and no one can use the Docker daemon directly, there is still a way to read the secret.
If anyone in the namespace has access to create pods (which means the ability to create deployments/statefulsets/daemonsets/jobs/cronjobs and so on), they can easily create a pod, mount the secret inside it, and simply read it. Even someone who can only patch pods/deployments and so on can potentially read all secrets in the namespace. There is no way around that.
For me, that's the biggest security flaw in Kubernetes, and it's why you must grant access to create and patch pods/deployments very carefully. Always limit access to the namespace, always exclude secrets from RBAC rules, and always try to avoid granting pod-creation capability.
A possibility is to use Sysdig Falco (https://sysdig.com/opensource/falco/). This tool watches pod events and can take action when a shell is started in your container. A typical action would be to immediately kill the container, so reading the secret cannot occur; Kubernetes will then restart the container to avoid service interruption.
Note that you must still forbid access to the node itself to prevent direct Docker daemon access.
You can try mounting the secret as an environment variable. Once your application grabs the secret on startup, it can unset that variable, rendering the secret inaccessible from then on.
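A minimal sketch of that grab-then-unset pattern, in Go for illustration (the variable name APP_SECRET is an assumption; note that Unsetenv only changes this process's own environment, not the container spec):

package main

import (
    "fmt"
    "os"
)

var secret string // held in process memory only

func main() {
    secret = os.Getenv("APP_SECRET")
    if err := os.Unsetenv("APP_SECRET"); err != nil {
        fmt.Fprintln(os.Stderr, "could not unset secret:", err)
    }
    // ... use secret for decryption; it no longer shows up via `env` here.
}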
The scenario is as follows:
I have TeamCity set up to use AWS EC2 hosts running Windows Server 2012 R2 as build agents. In this configuration, the TeamCity agent service is running as SYSTEM. I am trying to implement FastBuild as our new compilation process. In order to use the distributed compilation functionality of FastBuild, the build agent host needs to have access to a shared network folder. Unfortunately, I cannot seem to give this kind of access from one machine to another.
To help further the explanation, I'll use named examples. The networked folder, C:\Shared-Folder, lives on a host named Central-Host. The build agent lives on Builder-Host. Everything is running Windows Server 2012 R2 on EC2 hosts that are fully network permissive to each other via AWS security groups. What I need is to share a directory from Central-Host so that Builder-Host can fully access it via a directory structure like this:
\\Central-Host\Shared-Folder
By RDPing into both hosts using the default Administrator account, I can very easily set up the network sharing and browse (while on Builder-Host) to the \\Central-Host\Shared-Folder location. I can also open up the command line and run:
type NUL > \\Central-Host\Shared-Folder\Empty.txt
with the result of an empty text file being created at that networked location.
The problem arises from the SYSTEM account. When I grab PsTools and use the command:
PSEXEC -i -s cmd.exe
I can test commands that will be given by TeamCity. Again, it is a service being run as SYSTEM which, I need to emphasize, cannot be changed to a normal User due to other issues we have when using TeamCity agents under the User account type.
After much searching, I have discovered how to set up Active Directory services so that I can add Users and Computers from the domain, but after doing so I still face access-denied errors. I am probably missing something important and I hope someone here can help. I believe this problem will be considered "solved" when I can successfully run the "type NUL" command shown above.
This is not an answer for the permissions issue, but rather a way to avoid it. (Wanted to add this as a comment, but StackOverflow won't let me - weird.)
The shared network drive is used only for the remote worker discovery. If you have a fixed list of workers, instead of using the worker discovery, you can specify them explicitly in your config file as follows:
Settings
{
    .Workers =
    {
        'hostname1' // specify hostname
        'hostname2'
        '192.168.0.10' // or ip
    }
    ... // the other stuff that goes here
}
This functionality is not documented, as to date all users have wanted the automatic worker discovery. It is fine to use, however, and if it proves useful, it can be elevated to a supported feature with just a documentation update.
Is there a way to add a (parametrized) Startup task to a Windows Azure Virtual Machine through the API? I need to execute a cmdlet after the machine has been started and the code depends on two parameters that will be different for each machine. I know this could be easily achieved for a Web/Worker role, but could it be done for Virtual Machines, as well?
For first-time runs of a VM, you can inject a startup task via CustomData. This works in both Linux and Windows VMs. You'll just need to properly base-64-encode your file (whether it's text or binary) based on the REST API docs.
CustomData is dropped into a file in a specific location, and you can have code that looks for this file, taking some type of startup action as appropriate:
Windows: %SYSTEMDRIVE%\AzureData\CustomData.bin
Linux: /var/lib/waagent/CustomData
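If you're calling the REST API yourself, the encoding step is small; a sketch in Go (the file name is just an example), whose output goes into the CustomData field of the request:

package main

import (
    "encoding/base64"
    "fmt"
    "log"
    "os"
)

func main() {
    raw, err := os.ReadFile("startup.sh") // text or binary both work
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(base64.StdEncoding.EncodeToString(raw))
}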
Note: this will be added to the CLI as well (the pull request is already available; not sure if it's in the latest build).
EDIT: Yes, custom data is now part of the Azure CLI, as a parameter to azure vm create, so there's no need to mess with base-64 encoding on your own.
No, currently there is no such feature provided out of the box.
However, given that you will be dealing with the VM anyway, you can create an image of your own. You can register a "startup task" under the RunOnce registry key and sysprep the OS with these settings.
This way you will basically have a startup task which is executed when your machine boots for the first time and is not executed on subsequent VM restarts.
Getting parameters into the code on a VM is not as easy as for a Web/Worker Role. For anything you want, you have to query the Azure Management API directly. The only properties you can get from code running on an Azure VM are basically the normal OS properties, i.e. host name and host IP address. You don't even know your cloud service name, nor your virtual IP address (the latter can be discovered via services such as whatismyip.net or similar). So my approach would be to put the parameters into Azure Table Storage, using the machine name as the RowKey. That way I can store any VM-specific values keyed by VM name, and my "startup" task would query Table Storage, providing the host name as the RowKey (and some common pattern for the PartitionKey), so it gets all the required settings.
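A sketch of that lookup pattern only, in Go: queryTableEntity is a hypothetical stand-in for whatever Table Storage client you use, and "vm-config" is an assumed PartitionKey.

package main

import (
    "log"
    "os"
)

func main() {
    host, err := os.Hostname()
    if err != nil {
        log.Fatal(err)
    }
    settings, err := queryTableEntity("vm-config", host) // partitionKey, rowKey
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("settings for %s: %v", host, settings)
}

// queryTableEntity is a placeholder for a real, authenticated Table Storage query.
func queryTableEntity(partitionKey, rowKey string) (map[string]string, error) {
    // ... perform the REST call against the table service here ...
    return map[string]string{}, nil
}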
With IaaS Management Studio you can set a startup script that will run when your VM boots.
In summary, it activates remote PowerShell and runs your script remotely when it detects that the PowerShell port is open.
I am the developer of this tool, but I don't really get what you mean by "parametrized"; in other words, do you want your script to have access to the VM info?
I want a single script that can launch and tag my instances, which I can then configure with Chef accordingly.
Say my service requires 10 instances. I want to be able to launch those 10 instances, then tag them according to their role (web, db, app server).
Once I do that, I can use Chef to connect to each one and configure them how I want.
But I'm confused: I know I can launch instances, but how do you wait for them to come online? Do you have to continuously loop on some sort of timer? That seems like a very hacky way to do it!
If you're going to do everything from the outside, you do just have to poll to wait for the instance to be ready (which doesn't necessarily mean it's ready to use; actual startup completes a little later).
You can also pass user data when you start an instance. Most AMIs support cloud-init and will interpret the data passed as a shell script if it's in the right format. That shell script could run Chef or do other configuration tasks.
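Here is a sketch of that launch-wait-tag flow with the AWS SDK for Go (v1); the SDK's waiter does the polling loop for you, and the AMI ID, instance type, tag values, and user-data script are all placeholders:

package main

import (
    "encoding/base64"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    svc := ec2.New(session.Must(session.NewSession()))

    userData := "#!/bin/bash\n# ... bootstrap Chef or other config here ...\n"
    res, err := svc.RunInstances(&ec2.RunInstancesInput{
        ImageId:      aws.String("ami-12345678"), // placeholder
        InstanceType: aws.String("t3.micro"),
        MinCount:     aws.Int64(1),
        MaxCount:     aws.Int64(10),
        UserData:     aws.String(base64.StdEncoding.EncodeToString([]byte(userData))),
    })
    if err != nil {
        log.Fatal(err)
    }

    var ids []*string
    for _, inst := range res.Instances {
        ids = append(ids, inst.InstanceId)
    }

    // Blocks until the instances reach "running" (which, as noted above,
    // doesn't guarantee the OS has finished booting).
    if err := svc.WaitUntilInstanceRunning(&ec2.DescribeInstancesInput{InstanceIds: ids}); err != nil {
        log.Fatal(err)
    }

    // Tag by role so Chef can pick the right configuration later.
    if _, err := svc.CreateTags(&ec2.CreateTagsInput{
        Resources: ids,
        Tags:      []*ec2.Tag{{Key: aws.String("role"), Value: aws.String("web")}},
    }); err != nil {
        log.Fatal(err)
    }
    log.Printf("launched and tagged %d instances", len(ids))
}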
Is it possible to send "user-data" to a Windows instance at launch? I know that Amazon allows sending it to *nix-based instances, but I can't find any information for Windows.
Thanks for the help,
Cyril
Amazon updated EC2Config on Windows AMIs on April 11, 2012 to support scripting through user-data for batch scripts, and in May 2012 to support PowerShell scripts.
<script></script> tags will create and execute a batch file.
<powershell></powershell> tags will create and execute a powershell script.
Note that by default it only runs at instance initialization, so if you want it to execute on every boot, you have to run the EC2ConfigService Settings tool and tell it to always allow this.
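For example, user-data along these lines (the file path is just an illustration) would run once on first boot:

<powershell>
New-Item -Path C:\Temp -ItemType Directory -Force
Set-Content -Path C:\Temp\bootstrapped.txt -Value "user-data ran"
</powershell>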
I am not aware of a direct way to do it. But you can create a startup script inside your instance that reads the user-data each time you reboot the system. Inside your user-data, you can then decide what runs only once and what runs every time your instance boots.