Is it Possible to Have Docker Compose Read from AWS Secrets Manager? - bash

I currently have a bash script that "simulates" an ECS task by spinning up 3 containers. Some of the containers pull their secrets and configuration overrides from Secrets Manager directly (i.e. it's baked into the container code), while others take their configuration overrides through Docker environment variables, which requires the secrets to be retrieved from Secrets Manager first, exported to variables, and then the container started with those freshly exported variables. This works fine, and it is done only so developers can test locally on their workstations; we do not deploy with Docker Compose. The current bash script makes calls out to AWS and exports the values to environment variables.
However, I would like to use Docker Compose going forward. The question I have is "Is there a way for Docker Compose to call out to AWS and get the secrets?"
I don't see a native way to do this with Docker Compose, so I am thinking of going out and getting ALL the secrets for ALL the containers. So, my current script would be modified to do this:
1. The bash script would get all the secrets and export their values to environment variables.
2. The script would then invoke docker-compose with a YAML file that references the variables exported in step 1.
It would be nice if I didn't have to use the bash script at all, but I know of no intrinsic way of pulling secrets from Secrets Manager from the Docker-Compose yaml. Is this possible?
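For reference, a minimal sketch of that wrapper approach (assuming the AWS CLI and jq are installed and credentials are already configured; the secret id, JSON keys, and variable names are placeholders):

#!/usr/bin/env bash
# get-secrets-and-up.sh (hypothetical wrapper): fetch secrets, export them, then start Compose.
set -euo pipefail

# Pull one secret's JSON payload from Secrets Manager.
secret_json="$(aws secretsmanager get-secret-value \
  --secret-id my-app/local-dev \
  --query SecretString --output text)"

# Export individual values for the containers that take configuration via environment variables.
export DB_PASSWORD="$(echo "$secret_json" | jq -r '.db_password')"
export API_TOKEN="$(echo "$secret_json" | jq -r '.api_token')"

# docker-compose substitutes ${DB_PASSWORD} and ${API_TOKEN} from the shell environment
# wherever they appear in docker-compose.yml (e.g. under a service's "environment:" section).
docker-compose up "$@"

As the question notes, Compose's variable substitution only reads from the shell environment or an .env file, so a small wrapper like this is about as close as it gets without a native Secrets Manager integration.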

Related

How can I run AWS Lambda locally and access DynamoDB?

I am trying to run and test an AWS Lambda service written in Golang locally using the SAM CLI. I have two problems:
The Lambda does not work locally if I use .zip files. When I deploy the code to AWS, it works without an issue, but if I try to run locally with .zip files, I get the following error:
A required privilege is not held by the client: 'handler' -> 'C:\Users\user\AppData\Local\Temp\tmpbvrpc0a9\bootstrap'
If I don't use .zip, then it works locally, but I still want to deploy as .zip, and it is not feasible to change template.yml every time I want to test locally.
If I try to access AWS resources, I need to set the following environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
However, if I set these variables in template.yml and then use sam local start-api --env-vars to fill them with the credentials, the local environment works and can access AWS resources, but deploying the code to real AWS fails because these variable names are reserved. I also tried using different names for these variables, but then the local environment does not work. I also tried omitting them from template.yml and only using the local env-vars file, but environment variables must be declared in template.yml; env-vars can only fill existing variables with values, not create new ones.
How can I make local env work but still be able to deploy to AWS?
For accessing AWS resources you should look at IAM permissions rather than programmatic access keys; check this document out for CloudFormation.
To be clear, virtually nothing deployed on AWS needs those keys; it's all about applying permissions to the resource itself (Lambda, EC2, etc.). Those keys are only really needed for the AWS CLI and some local environments like Serverless and SAM.
The Serverless Framework now supports Golang; if you're new, I'd say give that a go while you get up to speed with IAM/CloudFormation.
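For the local-testing half of this, one hedged option is to not declare the credential variables in template.yml at all and let SAM pass your local credentials through, for example via a named profile (the profile name below is a placeholder):

# Run the API locally with credentials taken from a local AWS profile instead of template.yml.
# SAM CLI passes the resolved credentials into the local Lambda container, so the Go SDK can
# pick them up without AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY ever appearing in the template.
sam local start-api --profile my-dev-profile

When deployed, drop the keys entirely and rely on the function's IAM role, as described above.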

Post deployment script that reads environment variable inside deployed pod

I have a Kubernetes job whose responsibility is to post a jar to Flink (via the Flink API) and run the jar.
In response it gets a job id from the Flink API, which I need to use in a test script to see whether my job is running or not. The job runs inside the container/pod spawned by job.yaml, and the test script does not run from that same pod/container.
If I save this job id as an environment variable inside the container/pod spawned by job.yaml, is there a way to access that environment variable outside the pod? I am not even allowed to get into the container manually (to print environment variables) using kubectl exec -it podname /bin/bash; it says I can't get inside a completed (not running) pod, so I am not sure a script could do it either.
Are there any alternatives for accessing the job id in the test scripts by making use of an environment variable I set inside the container/pod (spawned by job.yaml)?
In summary, is there a way to access an environment variable I set inside the pod from a script that runs outside of the pod?
Thank you...
Pavan.
No, you can't use an environment variable for that.
You could add an annotation from inside your pod.
For that you will need to set up:
a service account, so the pod is able to annotate itself
the Downward API
Then you will be able to access the annotation from another pod/container.
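A rough bash sketch of that idea, using kubectl rather than the raw API; it assumes kubectl is available in the job's image, the pod's service account is allowed to patch Jobs, and that the job name and annotation key are placeholders:

# Inside the container spawned by job.yaml, after the Flink REST API has returned the job id:
kubectl annotate job flink-submit flink-job-id="$FLINK_JOB_ID" --overwrite

# In the test script, outside the pod, read the annotation back:
job_id="$(kubectl get job flink-submit -o jsonpath='{.metadata.annotations.flink-job-id}')"
echo "Flink job id: $job_id"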

Can I use a DockerFile as a script?

We would like to leverage the excellent catalogue of DockerFiles on DockerHub, but the team is not in a position to use Docker.
Is there any way to run a DockerFile as if it were a shell script against a machine?
For example, if I chose to run the Docker container ruby:2.4.1-jessie against a server running only Debian Jessie, I'd expect it to ignore the FROM directive but be able to set the environment from ENV and run the RUN commands from this Dockerfile: Github docker-library/ruby:2.4.1-jessie
A Dockerfile is assumed to be executed in an empty container or on the image it builds on (specified with FROM). Knowledge of that environment (specifically the file system and all the installed software) matters, and running something similar outside of Docker may have side effects because files end up in places where no files are expected.
I wouldn't recommend it.

Docker run script in host on docker-compose up

My question relates to best practices on how to run a script on a docker-compose up directive.
Currently I'm sharing a volume between the host and the container so that the script's changes are visible to both.
It is similar to a watcher script polling for changes to a configuration file; the script has to act on the host when changes occur, according to predefined rules.
How could I start this script on docker-compose up, or even from the service's Dockerfile, so that whenever the container comes up the "watcher" can pick up any changes being made and written?
The container in question will always run over a Debian / Ubuntu OS and should be architecture independent, meaning it should be able to run on ARM as well.
I wish to run a script on the host, not inside the container. I need the host to change its network interface configuration to easily adapt to any environment; it is the HOST that needs to change, I repeat. This should be seamless to the user, and easily editable through a web interface running inside a CONTAINER, to adapt to new environments.
I currently do this with a script running on the host from crontab. I just wish to know the best practices and examples of how to run a script on the HOST from INSIDE a CONTAINER, so that deployment can be as easy as the installing operator just running docker-compose up.
I just wish to know the best practices and examples of how to run a script on HOST from INSIDE a CONTAINER, so that the deploy can be as easy for the installing operator to just run docker-compose up
It seems that there is no best practice that can be applied to your case. A workaround proposed in "How to run shell script on host from docker container?" is to use a client/server trick:
The host should run a small server (choose a port and specify a request type that you should be waiting for)
The container, after it starts, should send this request to that server
The host should then run the script / trigger the changes you want
This is something that might have serious security issues, so use at your own risk.
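A rough sketch of that client/server trick, assuming a plain TCP listener with netcat; the port, script path, and hostname are placeholders, and the same security caveat applies:

# On the HOST: listen on a port and run the script whenever anything connects.
# (netcat flags vary between variants; this assumes the traditional "nc -l -p PORT" form.)
while true; do
  nc -l -p 9999 >/dev/null
  /usr/local/bin/reconfigure-network.sh   # hypothetical host-side script
done

# Inside the CONTAINER: poke the host when the watcher detects a change.
# host.docker.internal resolves to the host on Docker Desktop; on Linux you may need the
# bridge IP or an extra_hosts entry in docker-compose.yml instead.
echo "changed" | nc -w 2 host.docker.internal 9999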
The script needs to run continuously in the foreground.
In your Dockerfile use the CMD directive and define the script as the parameter.
When using the CLI, use docker run -d IMAGE SCRIPT.
You can create an alias for docker-compose up. Put something like this in ~/.bash_aliases (in Ubuntu):
alias up="docker-compose up; ~/your_script.sh"
I'm not sure whether running scripts on the host from a container is possible, but if it is, it's a severe security flaw. Containers should be isolated; that's the point of using containers.

How to start docker container using shell script inside the AWS EC2 Container Service?

I have a docker image and followed these steps (https://console.aws.amazon.com/ecs/home?region=us-east-1#/firstRun) to push the docker image to the AWS EC2 Container Service repository.
My container needs a shell script to start it, but I could not find any place to execute my shell script.
Can you tell me the correct way to run a docker image using a shell script inside the AWS EC2 Container Service?
Why would you use a shell script to start your container? ECS provides this out of the box when you properly configure a task definition and a task to run on one of your clusters. Your container should start running automatically once all of the resources are properly configured.
I need to keep things self-contained. I have about 20 env variables that are configured in a bash script file setting up the --env's for the container. So for me it would be great to have a one-liner like "./run-app.sh" that sets up the --env's and runs it. But that's not possible with ECS?
/Morten
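Regarding the comment above: rather than a run-app.sh passing --env flags, the usual ECS route is to put those values in the task definition's container definition. A hedged sketch with the AWS CLI (family, image, and variable names are placeholders):

# Register a task definition whose container gets its environment from the definition itself,
# so no wrapper script is needed when ECS starts the container.
aws ecs register-task-definition \
  --family my-app \
  --container-definitions '[
    {
      "name": "my-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "memory": 512,
      "essential": true,
      "environment": [
        {"name": "APP_ENV", "value": "staging"},
        {"name": "API_TOKEN", "value": "replace-me"}
      ]
    }
  ]'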
