I'm currently using Auto Scaling Groups to manage the lifecycle of the nodes in a Stack. During Stack creation, the scripts create an ssh key that is then shared with all of the nodes in the cluster so that an admin user can ssh between nodes inside the stack. The software being deployed requires passwordless ssh access, and each Stack is also required to have its own ssh key rather than use a shared one.
Unfortunately, when the ASG replaces a bad node, the ssh key is not available in the new node. The /home/adminuser/.ssh/authorized_keys file does not contain the ssh key I created when the stack was created. I'm looking for a way to store the ssh key so that it can be added to the new node created by the ASG.
I found SSM which has the ability to put and get parameters:
http://docs.aws.amazon.com/cli/latest/reference/ssm/put-parameter.html
http://docs.aws.amazon.com/cli/latest/reference/ssm/get-parameter.html
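For reference, a minimal sketch of what that would look like from the CLI (the parameter name and file path below are just illustrative):

    # Store the stack's private key as an encrypted SecureString parameter
    # ("MyStack-ssh-private-key" is an illustrative naming convention)
    aws ssm put-parameter \
        --name "MyStack-ssh-private-key" \
        --type SecureString \
        --value file:///home/adminuser/.ssh/id_rsa

    # Retrieve and decrypt it from a node
    aws ssm get-parameter \
        --name "MyStack-ssh-private-key" \
        --with-decryption \
        --query Parameter.Value \
        --output text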
This could work: I could store every Stack's ssh key there, and each Stack could then query the repository for its private key. Unfortunately, this won't do as-is, because the values are visible to all nodes in all Stacks in my account, and I want the parameters available only to nodes in the owning Stack.
One Stack might be for Test while another for Production under the same account. I don't want the Test users having the ability to query the Parameters associated with the Production Stack.
Is there a way to put variables/parameters for a Stack that is only available to the Stack? Is there another way to do this?
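To make the requirement concrete, here is a hedged sketch of the kind of restriction I'm after, expressed as an instance-role policy; the role name, parameter prefix, region, and account ID are all placeholders:

    # Let the Production stack's nodes read only parameters whose names
    # start with that stack's own prefix (all names below are made up)
    aws iam put-role-policy \
        --role-name ProductionStackNodeRole \
        --policy-name ReadOwnStackParameters \
        --policy-document '{
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Action": ["ssm:GetParameter", "ssm:GetParameters"],
            "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/ProductionStack-*"
          }]
        }'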
I have a spring boot app which loads a yaml file at startup containing an encryption key that it needs to decrypt properties it receives from spring config.
Said yaml file is mounted as a k8s secret file at etc/config/springconfig.yaml
While my Spring Boot app is running, I can still sh into the container and view the yaml file with "docker exec -it 123456 sh". How can I prevent anyone from being able to view the encryption key?
You need to restrict access to the Docker daemon. If you are running a Kubernetes cluster, access to the nodes where someone could execute docker exec ... should be heavily restricted.
You can delete that file once your process has fully started, provided your app doesn't need to read it again.
OR,
You can set those properties via --env-file and have your app read them from the environment. But if someone can still log in to the container, they can read the environment variables too.
OR,
Set those properties on the JVM rather than in the system environment by using -D. Spring can read JVM system properties too.
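A minimal sketch of the last two options (the property name, env file, and image name are made up; adjust to your app):

    # Option A: pass the key to the container via an env file; note the
    # container can still read its own environment, as mentioned above
    echo "ENCRYPTION_KEY=changeme" > ./app.env
    docker run --env-file ./app.env my-spring-app:latest

    # Option B: pass it as a JVM system property; Spring resolves
    # ${encryption.key} from JVM system properties as well
    java -Dencryption.key=changeme -jar app.jar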
In general, the problem is even worse than simple access to the Docker daemon. Even if you prohibit SSH to worker nodes and no one can use the Docker daemon directly, there is still a way to read the secret.
If anyone in the namespace has access to create pods (which implies the ability to create deployments/statefulsets/daemonsets/jobs/cronjobs and so on), they can easily create a pod that mounts the secret and simply read it. Even someone who only has the ability to patch pods/deployments and so on can potentially read every secret in the namespace. There is no way to escape that.
For me that's the biggest security flaw in Kubernetes, and it's why you must be very careful about who can create and patch pods/deployments and so on. Always limit access to the namespace, always exclude secrets from RBAC rules, and always try to avoid handing out pod-creation capability.
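As a hedged illustration of that last point, a read-only role can be created that deliberately grants nothing on secrets and no pod creation (the namespace, role, and user names are placeholders):

    # Read-only access to pods and deployments in the "dev" namespace,
    # intentionally omitting secrets and any create/patch verbs
    kubectl create role dev-readonly \
        --verb=get,list,watch \
        --resource=pods,deployments \
        --namespace=dev

    kubectl create rolebinding dev-readonly-binding \
        --role=dev-readonly \
        --user=test-user \
        --namespace=dev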
A possibility is to use Sysdig Falco (https://sysdig.com/opensource/falco/). This tool watches pod events and can take action when a shell is started in your container. A typical action would be to immediately kill the container so the secret cannot be read; Kubernetes will then restart the container to avoid service interruption.
Note that you must also forbid access to the node itself to prevent direct access to the Docker daemon.
You can try mounting the secret as an environment variable. Once your application grabs the secret on startup, it can unset that variable, rendering the secret inaccessible from then on.
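A hedged sketch of the first half of that, assuming the secret already exists (the deployment, secret, and key names are made up):

    # Expose one key from an existing secret as an environment variable
    # on the deployment instead of mounting it as a file
    kubectl set env deployment/spring-app \
        --from=secret/spring-config \
        --keys=encryption-key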
I don't know if this is possible; I have only been using Chef for about a week and a half now and I can't seem to find anything on the internet about doing this. Basically, we have the Chef client installed on an image. Each image has a configuration script that is run when the image is set up for the first time to set the computer name and other settings specific to its setup.
What I need to happen once the config script finishes is to have a node created automatically, with the node name set to the computer name that was entered, and also added to a role so that these nodes can later be sorted and have the correct roles applied. That way, going forward, each new node will be created as soon as the server is set up, without human interaction.
The way you do this is with the validator key system. Basically, have Chef installed in the image and have the /etc/chef/client.rb configuration created and pointed at your Chef Server, but don't create the client.pem key. If that key doesn't exist, chef-client will look for a validation key and use that to self-register with the Chef Server (by default it uses the FQDN of the server as the node name, but you can have your last-mile script append node_name "whatever" to the client.rb if you want to use something else).
The difficult bit of validator-based bootstraps is how to store, access, and manage that validator key. The lazy way would be to just include it in the image, but this raises some troubling security issues; unfortunately, the best approach will depend entirely on what kind of systems you are running on and what security infrastructure is available. Also, don't forget to remove the validator key after the initial bootstrap; there is a recipe in the chef-client cookbook for this.
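A hedged sketch of the moving parts (the server URL, validator name, and role are placeholders for your own):

    # Baked into the image: client.rb pointing at the Chef Server, with the
    # org validator key alongside it and no client.pem yet
    cat > /etc/chef/client.rb <<'EOF'
    chef_server_url        'https://chef.example.com/organizations/myorg'
    validation_client_name 'myorg-validator'
    validation_key         '/etc/chef/validation.pem'
    EOF

    # Run by the last-mile configuration script after the hostname is set:
    # append the chosen node name, then register with an initial run list
    echo "node_name '$(hostname -f)'" >> /etc/chef/client.rb
    cat > /etc/chef/first-boot.json <<'EOF'
    { "run_list": [ "role[base]" ] }
    EOF
    chef-client -j /etc/chef/first-boot.json

    # Once registered, remove the validator key (the chef-client cookbook
    # also has a recipe for this)
    rm -f /etc/chef/validation.pem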
I have an ec2 micro instance. I can start it from the console, ssh into it (using a .pem file) and visit the website it hosts.
Using the old ec2 CLI, I can start the instance and perform other actions including ssh and website access.
I am having trouble with the new ec2 CLI. When I do "aws ec2 start-instances --instance-ids i-xxx" I get the message "A client error (InvalidInstanceID.NotFound) occurred when calling the StartInstances operation: The instance ID 'i-xxx' does not exist".
Clearly the instance exists, so I don't know what the message is really indicating.
Here is some of my thinking:
One difference between the old and new CLI is that the former used .pem files whereas the new interface uses access key pairs. The instance has an access key pair associated with it, but I have tried all the credentials I can find and none of them change anything.
I tried creating an IAM user and a new access key pair for it. The behavior in all cases is unchanged: starting from the console or the old CLI, web access, and ssh all still work, but the new CLI does not.
I realize that there is a means for updating the access key pairs by detaching the volume (as described here), but the process looks a little scary to me.
I realize that I can clone another instance from the image, but the image is a little out of date, and I don't want to lose my changes.
Can anyone suggest what the message really means and what I can do to get around the problem?
The problem had to do with credentials. I had the correct environment variables (AWS_ACCESS_KEY and AWS_SECRET_KEY) set. But they didn't match what was in my .aws/credentials file. That is, despite what it says here, the new CLI worked only when I had the environment variables and the credentials file correct and in sync.
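For anyone hitting the same thing, you can check which credentials and region the new CLI is actually resolving, and where each value comes from (environment, credentials file, etc.), with:

    aws configure list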
Configure your AWS CLI with "aws configure" on the machine where the new CLI is installed, using the region in which your EC2 instance resides, and then try the same command. The instance should start.
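For example (the region below is a placeholder; use whichever region your instance lives in):

    # Set the default region (and credentials) interactively
    aws configure

    # Or pass the region explicitly; an InvalidInstanceID.NotFound error
    # often just means the CLI is looking in the wrong region
    aws ec2 start-instances --instance-ids i-xxx --region us-west-2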
I am creating an AWS CloudFormation template which sets up a set of nodes that must allow keyless ssh login amongst themselves, i.e. one controller must be able to log in to all slaves with its private key. The controller's private key is generated dynamically, so I cannot hard-code it into the User-Data of the template or pass it as a parameter to the template.
Is there a way in Cloud Formation templates to add the controller's public key to slave nodes' authorized keys files?
Is there some other way to use security groups or IAM to do what is required?
You have to pass the public key of the master server to the slave nodes in the form of user-data. CloudFormation does support user-data; you may have to figure out the exact syntax.
In other words, think of it as a simple bash script which copies the master server's public key to the slaves; you then pass this bash script as user-data so that it gets executed the first time the instance is created.
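A hedged sketch of such a user-data script; the user name is a placeholder, and CONTROLLER_PUBKEY stands in for however you inject the controller's public key (for example via Fn::Sub in the template or by fetching it at boot):

    #!/bin/bash
    # Hypothetical slave-node user-data: append the controller's public key
    # to the admin user's authorized_keys
    CONTROLLER_PUBKEY="ssh-rsa AAAA... controller"
    install -d -m 700 -o ec2-user -g ec2-user /home/ec2-user/.ssh
    echo "$CONTROLLER_PUBKEY" >> /home/ec2-user/.ssh/authorized_keys
    chown ec2-user:ec2-user /home/ec2-user/.ssh/authorized_keys
    chmod 600 /home/ec2-user/.ssh/authorized_keys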
You will find plenty of Google results on the above.
I would approach this problem with IAM machine roles. You can grant specific machines certain AWS rights. IAM roles do not apply to ssh access, but to AWS API calls, like S3 bucket access or creating EC2 instances.
Therefore, a solution might look like:
Create a controller machine role which can write to a particular S3 bucket.
Create a slave machine role which can read from that bucket.
Have the controller create and upload a public key into the bucket.
Since you don't know whether the controller is created before the slaves, have cloud-init set up a cron job that runs every couple of minutes and downloads the key from the bucket if it hasn't done so yet (sketched below).
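A rough sketch of the last two steps (the bucket name, key path, and user name are placeholders):

    # On the controller (its role allows s3:PutObject on the bucket):
    ssh-keygen -t rsa -N "" -f /home/ec2-user/.ssh/id_rsa
    aws s3 cp /home/ec2-user/.ssh/id_rsa.pub s3://my-stack-keys/controller.pub

    # On each slave, run from cron (its role allows s3:GetObject) until
    # the key shows up in authorized_keys:
    aws s3 cp s3://my-stack-keys/controller.pub /tmp/controller.pub && \
        cat /tmp/controller.pub >> /home/ec2-user/.ssh/authorized_keys && \
        chmod 600 /home/ec2-user/.ssh/authorized_keys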
I lost access via ssh to my amazon ec2 instance and I need to access it NOW due to a problem with my service. I was told that there is a way to access the command-line via web with a java applet but I haven't been able to find it.
Is there a way to access the command-line without the .pem file? terminating/rebooting the instance is not feasible.
AFAIK it is not possible - Amazon does not retain private keys and once your instance has been assigned a keypair, it cannot be reassigned.
You could try to create a new instance with a separate keypair and ssh locally between them, but I don't imagine that is possible.
If it's an EBS-based instance and you were able to stop it, you could mount the EBS volume to a new instance and copy a new key over; however, based on what you said, I don't believe it's possible. You may need to contact Amazon, but even then, there might not be anything that can be done.
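If stopping the instance ever does become an option, the rescue procedure is roughly as follows; the instance IDs, volume IDs, device names, and mount points below are placeholders, and the device naming will vary:

    # Stop the locked-out instance and detach its root volume
    aws ec2 stop-instances --instance-ids i-xxx
    aws ec2 detach-volume --volume-id vol-xxx

    # Attach it to a working "rescue" instance and mount it there
    aws ec2 attach-volume --volume-id vol-xxx --instance-id i-rescue --device /dev/sdf
    sudo mkdir -p /mnt/rescue && sudo mount /dev/xvdf1 /mnt/rescue

    # Append a public key you control, then unmount and move the volume back
    # (re-attach it under the instance's original root device name)
    cat ~/.ssh/new_key.pub | sudo tee -a /mnt/rescue/home/ec2-user/.ssh/authorized_keys
    sudo umount /mnt/rescue
    aws ec2 detach-volume --volume-id vol-xxx
    aws ec2 attach-volume --volume-id vol-xxx --instance-id i-xxx --device /dev/sda1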
Edit: in the same vein as the above, if you have other user accounts with valid login shells and you have sudo access on one of those accounts, you can do the same as mentioned in the last bit: generate a new keypair and add the new public key to ~/.ssh/authorized_keys.
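In that scenario the fix from the other account is short (user names are placeholders):

    # From your own machine: generate a fresh keypair
    ssh-keygen -t rsa -f ~/.ssh/ec2_recovery

    # On the instance, logged in as the sudo-capable user: append the new
    # PUBLIC key to the locked-out account's authorized_keys
    echo "ssh-rsa AAAA... recovery" | \
        sudo tee -a /home/ec2-user/.ssh/authorized_keys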