Changing the sysctl hugepages of an AWS container - amazon-ec2

I am using AWS ECS task definitions to run a Docker container on an AWS ECS cluster.
The issue I am having is that I would like to set the vm.nr_hugepages value to 1280 for the host system.
AWS includes the option to add system controls to the task definition, letting you change certain kernel values: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_systemcontrols
I have succeeded in adding the system controls to the task definition, but AWS returns the following error when I try to create the task.
Unable to create Task Definition
The 'systemControls' namespace vm.nr_hugepages must start with ipc prefix 'fs.mqueue.' or network prefix 'net.' or be one of: [kernel.msgmax, kernel.msgmnb, kernel.msgmni, kernel.sem, kernel.shmall, kernel.shmmax, kernel.shmmni, kernel.shm_rmid_forced]'. Change the value and try again.
I am not very familiar with Linux kernel settings, so I am unsure whether I am doing something wrong or whether this is just not possible. Does anyone know?
I am using a custom Docker image based on Alpine 3.7.

You can only specify kernel parameters (sysctls) that Docker supports as namespaced: https://docs.docker.com/engine/reference/commandline/run/#configure-namespaced-kernel-parameters-sysctls-at-runtime vm.nr_hugepages is not namespaced, so it cannot be set per container; it has to be changed on the host.
If you run ECS on EC2, you can set it on the container instance at boot, for example via EC2 user data.
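A minimal user-data sketch for an EC2-backed container instance (the sysctl.d file name is my own choice):

    #!/bin/bash
    # Apply the hugepages setting for the current boot.
    sysctl -w vm.nr_hugepages=1280
    # Persist it across reboots.
    echo 'vm.nr_hugepages = 1280' > /etc/sysctl.d/90-hugepages.conf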

Related

Running bash script on GCP VM instance programmatically

I've read multiple posts on running scripts on GCP VMs but unfortunately could not find an answer that would satisfy my needs.
I have a Go application and I'm looking for a way to run a bash script on a VM instance programmatically.
I'm using the Google Cloud Go SDK, which allows me to fetch VM instance info. Unfortunately, the SDK does not provide a way to run a bash script on a specific instance (unlike the Azure SDK, for example).
Options I've found:
Google Cloud Compute SDK has an option to set a startup script that will run every time an instance is restarted.
Add an instance-level public SSH key, establish an SSH connection, and run a script using a Go SSH client.
Problems:
Obviously, the startup script approach requires an instance reboot, which is not possible in my use case.
SSH might also be problematic: the instance may not be running an SSH daemon, or the SSH port may not be open. Also, the SSH daemon config does not permit root login by default (PermitRootLogin may be set to no), so the script might run as a non-privileged user, making this option unsuitable as well.
I should probably note that I am not authorised to change the configuration of those VMs (for example, to change the SSH daemon config to permit root login). I can only use token-based authentication to access them, preferably through the SDK, though other options are also possible as long as I am not exposing the instance to additional risk.
What options do I have? Is this even doable? Am I missing something?
Thanks!
As Kolban said, there is no API to trigger a bash script inside the VM from the outside. The best solution is to deploy a web server (a REST API) in the VM that calls the script, and to expose it (externally or internally).
But you can also cheat: create a daemon on your VM, launched with a startup script, that watches a custom metadata key, checking it, say, every second.
When the metadata is updated, the daemon performs the action. For example, the metadata could contain the script to run along with its parameters; at the end of the run, the daemon clears the metadata.
So now, to run your bash script, you call the setMetadata API. It's not out of the box, but you can get something close to what you expected.
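A rough sketch of such a daemon in bash, assuming a custom metadata key named run-script (the key name is illustrative) and a VM service account that is allowed to call setMetadata:

    #!/bin/bash
    # Poll a custom metadata key every second and run its value as a script.
    KEY=run-script
    URL="http://metadata.google.internal/computeMetadata/v1/instance/attributes/$KEY"
    ZONE=$(curl -s -H 'Metadata-Flavor: Google' \
      http://metadata.google.internal/computeMetadata/v1/instance/zone | awk -F/ '{print $NF}')
    while true; do
      SCRIPT=$(curl -sf -H 'Metadata-Flavor: Google' "$URL")
      if [ -n "$SCRIPT" ]; then
        bash -c "$SCRIPT"
        # Clear the key so the same script is not run twice.
        gcloud compute instances remove-metadata "$(hostname)" --zone "$ZONE" --keys "$KEY"
      fi
      sleep 1
    done

To trigger a run from the Go application, call the instances.setMetadata method (or, by hand, something like gcloud compute instances add-metadata my-vm --metadata run-script='echo hello').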
Think of GCP as providing the virtual machine infrastructure such as compute, memory, disk and networking. What runs when the machine boots is between you and the machine image. I am hearing you say that you want to run a bash script within the VM. That is outside of the governance of GCP, which only affects the operation and existence of the environment. If you want to run a script within the VM programmatically, you will need to run some form of daemon inside the VM that can be signaled to run such a script. This could be a web server such as Flask or Express, it could be your SSH server, or it could be some other technology you choose.
The core thing I think you were looking for was some GCP API that, when called, would run a script within the Compute Engine. I'm going to say that there is no such API.

How do I prevent access to a mounted secret file?

I have a Spring Boot app which loads a YAML file at startup containing an encryption key that it needs to decrypt properties it receives from Spring config.
Said YAML file is mounted as a Kubernetes secret at /etc/config/springconfig.yaml.
While my Spring Boot app is running, I can still shell in and view the YAML file with "docker exec -it 123456 sh". How can I prevent anyone from being able to view the encryption key?
You need to restrict access to the Docker daemon. If you are running a Kubernetes cluster, access to the nodes where one could execute docker exec ... should be heavily restricted.
You can delete the file once your process has fully started, provided your app doesn't need to read it again.
OR,
You can set those properties via --env-file and have your app read them from the environment instead. But if someone can still log in to the container, they can read the environment variables too.
OR,
Set those properties as JVM system properties rather than in the system environment, using -D. Spring can read JVM system properties too.
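Illustrative sketches of the --env-file and -D alternatives above (the file, image, and property names are made up):

    # 1) Environment file instead of a mounted secret file:
    docker run --env-file ./secret.env my-spring-app
    # 2) JVM system property, readable by Spring as ${encryption.key}:
    java -Dencryption.key="$KEY" -jar app.jar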
In general, the problem is even worse than simple access to the Docker daemon. Even if you prohibit SSH to worker nodes and no one can use the Docker daemon directly, there is still a way to read the secret.
If anyone in the namespace has access to create pods (which includes the ability to create deployments/statefulsets/daemonsets/jobs/cronjobs and so on), they can easily create a pod that mounts the secret and simply read it. Even someone who can only patch pods/deployments can potentially read every secret in the namespace. There is no way to escape that.
For me, that's the biggest security flaw in Kubernetes, and it's why you must grant the ability to create and patch pods/deployments very carefully. Always limit access to the namespace, always exclude secrets from RBAC rules, and always try to avoid granting pod-creation capability.
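As a minimal sketch, a namespaced Role that deliberately omits secrets from its resource list and grants no create/patch verbs (the namespace and role names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: myapp
      name: limited-reader
    rules:
    - apiGroups: ["", "apps"]
      resources: ["pods", "deployments", "configmaps"]  # no "secrets"
      verbs: ["get", "list", "watch"]                   # no "create"/"patch"
    EOF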
One possibility is to use Sysdig Falco (https://sysdig.com/opensource/falco/). This tool watches pod events and can take action when a shell is started in your container. A typical action would be to immediately kill the container so the secret cannot be read; Kubernetes will then restart the container to avoid service interruption.
Note that you must also forbid access to the node itself to prevent direct Docker daemon access.
You can try mounting the secret as an environment variable. Once your application reads the secret at startup, it can unset that variable, rendering the secret inaccessible from then on.

Windows Azure Virtual Machine with Startup Task

Is there a way to add a (parametrized) Startup task to a Windows Azure Virtual Machine through the API? I need to execute a cmdlet after the machine has been started and the code depends on two parameters that will be different for each machine. I know this could be easily achieved for a Web/Worker role, but could it be done for Virtual Machines, as well?
For first-time runs of a VM, you can inject a startup task via CustomData. This works in both Linux and Windows VMs. You'll just need to properly base-64-encode your file (whether it's text or binary) based on the REST API docs.
CustomData is dropped into a file in a specific location, and you can have code that looks for this file, taking some type of startup action as appropriate:
Windows: %SYSTEMDRIVE%\AzureData\CustomData.bin
Linux: /var/lib/waagent/CustomData
Note: This will be added to the CLI as well (the pull request is already available; not sure if it's in the latest build).
EDIT: Yes, custom data is now part of the Azure CLI as a parameter to azure vm create, so there's no need to mess with base-64 encoding on your own:
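A sketch with that CLI, assuming the parameter works as described above (all names are placeholders); --custom-data takes a local file and the CLI handles the encoding:

    azure vm create my-dns-name <image-name> <admin-user> <admin-password> \
      --location "West US" \
      --custom-data ./first-boot.sh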
No, currently there is no such feature provided out of the box.
However, given that you will be dealing with the VM anyway, you can create an image of your own: register a "startup task" under the RunOnce registry key and sysprep the OS with this setting.
This way you will basically have a startup task which is executed when your machine boots for the first time and is not executed on subsequent VM restarts.
Getting parameters into code running on a VM is not as easy as it is for a Web/Worker Role. For anything you want, you have to query the Azure Management API directly. The only properties you can get from code running on an Azure VM are basically the normal OS properties, i.e. host name and host IP address. You don't even know your cloud service name or your virtual IP address (the latter can be discovered via services such as whatismyip.net or similar). So my approach would be to put the parameters into Azure Table Storage, using the machine name as the RowKey. That way I can store any VM-specific values keyed by VM name, and my "startup" task queries the Table Storage, providing its host name as the RowKey (and some common pattern as the PartitionKey), to get all required settings.
With IaaS Management Studio you can set a startup script that will run when your VM boots.
In summary, it activates remote PowerShell and runs your script remotely when it detects that the PowerShell port is open.
I am the developer of this tool, but I don't really get what you mean by "parametrized". Do you mean you want your script to have access to the VM info?

How to create an EC2 instance from a snapshot in CloudFormation?

I'd like to specify the snapshot ID to use for the root device image of an EC2 instance created with CloudFormation. How do I do that?
I could only find a way to make a volume from a snapshot, but no way to use that volume in the instance.
If you want to use an EBS snapshot as the basis of the root disk (EBS volume) for an instance, you need to first register the snapshot as an AMI (e.g., using ec2-register).
Make sure to specify the correct architecture and kernel (AKI) when you register the snapshot as an AMI.
Alternatively, instead of taking a snapshot and registering it as separate steps, you could use the ec2-create-image command/API/console function to perform the snapshot and registration in a single step. This also takes care of picking the right architecture, kernel, and other parameters.
Once you have an AMI, you can tell CloudFormation to use that AMI when running a new instance.
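A sketch with the classic API tools named above (the snapshot ID is a placeholder; pick the architecture and kernel that match the original instance); the resulting AMI ID then goes into the ImageId property of an AWS::EC2::Instance resource in your CloudFormation template:

    # Register the snapshot as the root device of a new AMI.
    ec2-register --name "restored-root" \
      --architecture x86_64 \
      --root-device-name /dev/sda1 \
      --block-device-mapping /dev/sda1=snap-0123456789abcdef0
    # Then, in the CloudFormation template:
    #   "MyInstance": { "Type": "AWS::EC2::Instance",
    #                   "Properties": { "ImageId": "ami-xxxxxxxx", ... } }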
I concur. This has nothing to do with CloudFormation, but I just did this following a crippling 'do-release-upgrade'. It's just a matter of creating an image from the snapshot, and in my case making sure to change the virtualization type to "hardware assisted virtualization" (HVM). Then you can just launch the resulting image (AMI).

EC2 Amazon - User Data Not Working For Bundled/Snapshot AMI

I started a default instance of the EC2 Wowza AMI (Linux), then bundled/snapshotted it via 'ec2-bundle-vol', uploaded it to S3, and registered the AMI.
When I start the bundled AMI with user data (a zip file containing a script), it doesn't seem to execute it.
But when I start a default instance with the same user data (zip file), it works.
Does anyone know why my bundled AMI is not executing the user data I specify?
Thanks.
I'm not familiar with Wowza or how their AMIs are set up, but...
On its own, EC2 user data does nothing; it only has relevance because a script running on that machine checks for the presence of the user data and does something with it.
Sometimes these scripts are set up so that they only do things on the instance's first boot; they then drop a file somewhere so that on subsequent reboots the startup scripts aren't rerun.
If the Wowza AMIs work on this basis, then this process was followed when you first booted the AMI, so the data you saved into the new AMI includes the "don't run startup scripts again" file. If this is the case, you'd need to delete that file before creating your AMI.
The user data mechanism on EC2 allows a script on the image to download the startup package as a file via HTTP from a link-local address (169.254.169.254). If it's plaintext, it will execute directly; if it's compressed data, the Wowza startup will unpack it to /opt/working. The Wowza startup process is logged to wowzamediaserver_startup.log in Wowza's logs directory.
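For reference, this is roughly how a boot script fetches the user data from that link-local metadata service (the output path is my own choice):

    # Download the startup package handed to the instance at launch.
    curl -s http://169.254.169.254/latest/user-data -o /tmp/startup-package
    file /tmp/startup-package   # shows whether it is a plain script or a zip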
I had the same issue. Looking at our script, I discovered that we were removing a cloud-init dependency in the script, making it a run-once operation. The dependency in question was boto.
