Hoping I'll catch a break as a complete ECS NOOB =)
So far I think I've managed to build out a good start, using ecs-cli to create a cluster for two Docker containers that belong together and should run on the same instance (x2). However, at the time of either my ecs-cli up or my ecs-cli compose, I need the command to build the task in such a way that it can access a set of environment variables stored in AWS Parameter Store. I don't see any options to do that with ecs-cli, for either 'up' or 'compose'.
What am I missing? Help! I'm not sure what other info to provide, so please let me know what other data/background you need if you think you have an answer.
All comments very much appreciated of course!
See the secrets section in the ecs-cli documentation here. You configure ECS to pull those secrets from SSM Parameter Store and inject them into your container as environment variables.
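A minimal sketch of what that looks like in an ecs-params.yml next to your compose file, if I remember the schema correctly (service name, role and parameter path below are placeholders):

# ecs-params.yml -- sketch only; service name, role and parameter path are placeholders
version: 1
task_definition:
  # The task execution role needs permission to read the parameters (ssm:GetParameters)
  task_execution_role: ecsTaskExecutionRole
  services:
    web:
      secrets:
        - value_from: /myapp/prod/DB_PASSWORD   # SSM Parameter Store name or ARN
          name: DB_PASSWORD                     # environment variable name in the container

ecs-cli compose should pick this file up automatically when it sits next to your docker-compose file, or you can point at it with --ecs-params.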
I want to deploy my infrastructure in different AWS environments (dev, prod, qa).
That deployment creates a few EC2 instances from a custom AMI. When deployed, instances are in the "running" state. I understand this seems to be related to some constraint in the EC2 API. However, I don't necessarily want my instances started, depending on context. Sometimes, I just want the instances to be created, and they will be started later on. I guess this is a quite common scenario.
Reading the few related issues/requests on HashiCorp's GitHub makes me think so:
Terraform aws instance state change
Stop instances
aws_instance should allow to specify the instance state
There must be some Terraform-based solution which doesn't require relying on the AWS CLI / CDK or a Lambda, right? Something in the Terraform script that, for example, would stop the instance right after its creation.
My google-fu didn't help me much here. Any help / suggestion for dealing with that scenario is welcome.
Provisioning a new instance automatically puts it in the "running" state.
As Marcin suggested, you can use user data scripts. Here's a pseudo user data script, with the actual implementation left for you to figure out ;)
#!/bin/bash
# Look up this instance's ID (e.g. from the instance metadata service) and pass it to the line below
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
You can read about running scripts as part of the bootstrapping here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
Basically it's all up to your use case. We don't do this generally. Still, if you want to provision your EC2 instances and put them in a stopped state, as bschaatsbergen suggested, you can use user_data in Terraform. Make sure to attach a role with the relevant permissions.
#!/bin/bash
INSTANCE_ID=`curl -s http://169.254.169.254/latest/meta-data/instance-id/`
REGION=`curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//'`
aws ec2 stop-instances --instance-ids $INSTANCE_ID --region $REGION
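For completeness, here is a rough Terraform sketch of how that user data and an instance profile with the stop permission could be wired together (the AMI ID, instance type and profile name are placeholders, not values from your setup):

# Sketch only -- AMI ID, instance type and profile name are placeholders
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"

  # Instance profile whose role allows ec2:StopInstances on this instance
  iam_instance_profile = "instance-profile-with-stop-permission"

  user_data = <<-EOF
    #!/bin/bash
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')
    aws ec2 stop-instances --instance-ids $INSTANCE_ID --region $REGION
  EOF
}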
As already stated by others, you cannot just "create" instances; they will be in the "running" state.
Rather, I would ask what the exact use case is here:
Sometimes, I just want the instances to be created, and they will be started later on.
Why do you have to create instances now and use them later? Can't they be created exactly when they are required? Is there any specific requirement to keep them initialized before they are used? Or do the instances take time to start?
I'm new to Axon and Docker and I would like to start Axon Server in Docker using development mode in order to clear events, as I'm in the process of building a system and my events and commands change often.
I read in the Axon documentation that a certain property, axoniq.axonserver.devmode.enabled (defaults to false), has to be set. I also know that Axon uses Spring Boot, so I guess I would need to somehow access the axonserver.properties on Docker, but here is the problem: I don't know how.
I would be thankful if anyone could explain how to change this configuration.
Fortunately, Axon has been publishing blogs about running Axon Server, and in one of them they explain how to run it in Docker =)
Blog post: https://axoniq.io/blog-overview/running-axon-server-in-docker
The important part, in your case, is here:
A third directory, not marked as a volume in the image, is important for our case: If you put an “axonserver.properties” file in “/config”, it can override the settings above and add new ones:
Which means, you can create your axonserver.properties in this directory with the desired property (axoniq.axonserver.devmode.enabled=true) and it will pick it up from there!
On the other hand, you can also set the environment variable: AXONIQ_AXONSERVER_DEVMODE_ENABLED to true.
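For example, if you start the container with plain docker run, both options look roughly like this (the ports and image name are the usual ones from the blog; the local ./config path is just an example):

# Option 1: mount a local directory containing axonserver.properties
#           (the file would contain: axoniq.axonserver.devmode.enabled=true)
docker run -d --name axonserver \
  -p 8024:8024 -p 8124:8124 \
  -v "$(pwd)/config":/config \
  axoniq/axonserver

# Option 2: set the same property through an environment variable instead
docker run -d --name axonserver \
  -p 8024:8024 -p 8124:8124 \
  -e AXONIQ_AXONSERVER_DEVMODE_ENABLED=true \
  axoniq/axonserver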
Hope it helps.
I have a spring boot app which loads a yaml file at startup containing an encryption key that it needs to decrypt properties it receives from spring config.
Said yaml file is mounted as a k8s secret file at etc/config/springconfig.yaml
If my Spring Boot app is running, I can still sh in and view the yaml file with "docker exec -it 123456 sh". How can I prevent anyone from being able to view the encryption key?
You need to restrict access to the Docker daemon. If you are running a Kubernetes cluster, access to the nodes where one could execute docker exec ... should be heavily restricted.
You can delete that file once your process has fully started, given that your app doesn't need to read from it again.
OR,
You can set those properties via --env-file, and your app should then read them from the environment. But if someone can still log in to that container, they can read the environment variables too.
OR,
Set those properties as JVM system properties rather than in the system environment, using -D. Spring can read properties from the JVM environment too.
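Rough examples of those last two options; the property name my.encryption.key is made up here, use whatever name your app actually reads:

# Pass properties via an env file (still readable by anyone who can exec into the container)
docker run --env-file ./secrets.env my-spring-app

# Pass the value as a JVM system property instead of an OS environment variable
java -Dmy.encryption.key=changeme -jar app.jar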
In general, the problem is even worse than simple access to the Docker daemon. Even if you prohibit SSH to the worker nodes and no one can use the Docker daemon directly, there is still a way to read the secret.
If anyone in the namespace has access to create pods (which means the ability to create deployments/statefulsets/daemonsets/jobs/cronjobs and so on), they can easily create a pod, mount the secret inside it and simply read it. Even someone who only has the ability to patch pods/deployments and so on can potentially read all secrets in the namespace. There is no way to escape that.
For me that's the biggest security flaw in Kubernetes, and that's why you must be very careful about giving access to create and patch pods/deployments and so on. Always limit access to the namespace, always exclude secrets from RBAC rules, and always try to avoid handing out pod creation capability.
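As an illustration, a namespaced Role like the one below grants read access to pods without granting any access to secrets or the ability to create pods (all names are just examples):

# Sketch: read-only access to pods in one namespace, deliberately no "secrets" resource
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-namespace
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]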
A possibility is to use Sysdig Falco (https://sysdig.com/opensource/falco/). This tool watches pod events and can take action when a shell is started in your container. A typical action would be to immediately kill the container so the secret cannot be read, and Kubernetes will restart the container to avoid service interruption.
Note that you must also forbid access to the node itself to avoid Docker daemon access.
You can try mounting the secret as an environment variable. Once your application grabs the secret on startup, it can unset that variable, rendering the secret inaccessible from then on.
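In the pod spec that could look roughly like this (the Secret name and key are placeholders):

# Sketch: expose one key of a Secret as an environment variable instead of a mounted file
containers:
  - name: app
    image: my-spring-app:latest
    env:
      - name: ENCRYPTION_KEY
        valueFrom:
          secretKeyRef:
            name: spring-config-secret   # placeholder Secret name
            key: encryption-key          # placeholder key inside the Secret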
I am running a command like the following.
serverless invoke local --function twilio_incoming_call
When running locally, I plan to detect this in my code and, instead of looking for POST variables, look for a mock file I'll be giving it.
I don't know how to detect if I'm running serverless with this local command however.
How do you do this?
I looked around on the Serverless website and could find lots of info about running locally, but nothing about detecting whether you are running locally.
I found out the answer. process.env.IS_LOCAL will detect if you are running locally. Missed this on their website somehow...
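For example, in the handler it can be used like this (the mock file path is just an illustration):

// Sketch: "serverless invoke local" sets IS_LOCAL, a deployed function does not
module.exports.twilio_incoming_call = async (event) => {
  const isLocal = !!process.env.IS_LOCAL;

  const params = isLocal
    ? require('./mocks/twilio_incoming_call.json')            // made-up mock payload path
    : Object.fromEntries(new URLSearchParams(event.body));    // parse the real POST body

  // ... handle the call using params ...
  return { statusCode: 200 };
};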
If you're using AWS Lambda, it has some built-in environment variables. In the absence of those variables, then you can conclude that your function is running locally.
https://docs.aws.amazon.com/lambda/latest/dg/lambda-environment-variables.html
const isRunningLocally = !process.env.AWS_EXECUTION_ENV
This method works regardless of the framework you use whether you are using serverless, Apex UP, AWS SAM, etc.
You can also check what is in process.argv:
process.argv[1] will equal '/usr/local/bin/sls'
process.argv[2] will equal 'invoke'
process.argv[3] will equal 'local'
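So a check along these lines should also work (the exact path and binary name depend on how serverless is installed):

// Sketch: detect "serverless invoke local" from the CLI arguments of the current process
const argv = process.argv;
const isLocalInvoke =
  argv.some(a => a.endsWith('sls') || a.endsWith('serverless')) &&
  argv.includes('invoke') &&
  argv.includes('local');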
I'm creating an init.d script that will run a couple of tasks when the instance starts up.
it will create a new volume with our code repository and mount it if it doesn't exist already.
it will tag the instance
Completing the tasks above is crucial for our site (i.e. without the code repository mounted the site won't work). How can I make sure that the server doesn't end up being publicly visible before they finish? Should I start my init.d script by de-registering the instance from the ELB (I'm not even sure if it will be registered at that point), and then register it again when all the tasks have finished successfully?
What is the best practice?
Thanks!
You should have a health check on your ELB. So your server shouldn't get in unless it reports as happy. And it shouldn't report happy if the boot script errors out.
(Also, you should look into using cloud-init. That way you can change the boot script without making a new AMI.)
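For a classic ELB, the health check can be configured with the AWS CLI roughly like this (load balancer name and path are placeholders); point it at a URL that only responds once the repository is mounted:

# Sketch: the instance only enters service after this check passes
aws elb configure-health-check \
  --load-balancer-name my-load-balancer \
  --health-check Target=HTTP:80/healthcheck.html,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=3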
I suggest you use CloudFormation instead. You can bring up a full stack of your system by representing it in a JSON format template.
For example, you can create an Auto Scaling group whose instances have unique tags and an additional volume attached (which presumably holds your code).
Here's a sample JSON template attaching an EBS volume to an instance:
https://s3.amazonaws.com/cloudformation-templates-us-east-1/EC2WithEBSSample.template
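The relevant part of such a template boils down to something like this (resource names are illustrative and not copied from the linked template; AppInstance stands for the EC2 instance resource defined elsewhere in it):

{
  "CodeVolume": {
    "Type": "AWS::EC2::Volume",
    "Properties": {
      "Size": "20",
      "AvailabilityZone": { "Fn::GetAtt": ["AppInstance", "AvailabilityZone"] }
    }
  },
  "CodeVolumeAttachment": {
    "Type": "AWS::EC2::VolumeAttachment",
    "Properties": {
      "InstanceId": { "Ref": "AppInstance" },
      "VolumeId": { "Ref": "CodeVolume" },
      "Device": "/dev/sdh"
    }
  }
}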
And here are many other JSON templates that you can use for guidance to deploy your specific stack and application.
http://aws.amazon.com/cloudformation/aws-cloudformation-templates/
Of course you can accomplish the same using an init.d script or the rc.local file in your instance, but I believe CloudFormation is a cleaner solution because it works from the outside (not inside your instance).
You can also write your own script that brings up your stack from the outside, but why reinvent the wheel?
Hope this helps.