Create LXC containers on Proxmox with keyctl and nesting - Ansible

I have successfully created Ansible playbooks and roles to create and provision LXC containers on Proxmox. I'm now looking to use Ansible to run docker-compose files, ideally with the ability to spin up LXCs to run them on first.
I've created unprivileged containers successfully using Ansible; however, before I can use Docker in an LXC I need to manually change the container's features, e.g.
keyctl=1
nesting=1
Is anyone aware of a way to do this through an Ansible role?

See https://pve.proxmox.com/wiki/Linux_Container: you need a line matching
features: [fuse=<1|0>] [,keyctl=<1|0>] [,mount=<fstype;fstype;...>] [,nesting=<1|0>]
in your /etc/pve/lxc/VMID.conf.
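If you run a play against the Proxmox node itself, a minimal sketch with Ansible's lineinfile module looks like this (vmid is a hypothetical variable holding the container's numeric ID):

# Minimal sketch, run on the Proxmox node; "vmid" is a hypothetical
# variable holding the container's numeric ID.
- name: Enable keyctl and nesting for the LXC container
  ansible.builtin.lineinfile:
    path: "/etc/pve/lxc/{{ vmid }}.conf"
    regexp: '^features:'
    line: "features: keyctl=1,nesting=1"
  # restart the container afterwards (e.g. pct reboot) for this to take effect

Newer releases of the community.general.proxmox module also expose a features option, which may let you avoid editing the file directly.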

Related

What is the AWX EE actually used for when running Ansible playbooks?

I installed AWX 19.5 on Kubernetes. I found these pods, containers, and EEs there by default:
Pods
awx-postgres-0
awx-8631936913-23hfa
awx-operator-controller-manager-8631936913-23hfa
Containers
In awx-8631936913-23hfa:
awx-web
awx-task
awx-ee
redis
In the awx-ee container, I found ansible, ansible-galaxy, etc. installed.
Execution Environments
AWX EE (latest) - Image: quay.io/ansible/awx-ee:latest
Control Plane Execution Environment - Image: quay.io/ansible/awx-ee:latest
When a job template runs, it seems AWX creates a new pod:
...
automation-job-11-abcde
Even if I choose the default AWX EE (latest) as the Execution Environment, it still creates a new pod and then deletes it.
So what is the role of the awx-ee container in the awx-8631936913-23hfa pod? It seems that even setting Ansible configuration or installing Galaxy content there has no effect on jobs.
I also wonder why "awx-ee" and "automation-job" exist at the same time.
The "automation-job" pod actually executes the playbooks, but awx-ee doesn't seem to do anything.
automation-job: created from the pulled EE image (which can be a customized EE image).
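In other words, customizations have to be baked into the EE image (or registered as a custom EE in AWX), not placed in the awx-ee container. A rough sketch using the awx.awx collection, assuming AWX credentials are already configured and with all names as placeholders:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Register a customized execution environment
      awx.awx.execution_environment:
        name: My Custom EE
        image: registry.example.com/my-ee:latest   # hypothetical image
    - name: Point a job template at the custom EE
      awx.awx.job_template:
        name: Deploy app
        project: My Project
        inventory: My Inventory
        playbook: deploy.yml
        execution_environment: My Custom EE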

Can we have more than one installation of Rundeck on a Linux server?

I have one installation of Rundeck on a Linux server, and it is up and running on port 4440. But I want one more installation of it, and I expect it to run on another port. Is that possible? This question may look weird, but I want an additional Rundeck setup for personal reasons.
Eagerly looking for help. Thanks in advance.
You can test your "personal instance" with a Docker container without touching the "real instance" (or use two Docker containers if you want). In both cases you need to specify different ports (for example, 4440 for the "real" instance/container and 5550 for the "test" container).
Here you have the official Docker image, and here is how to run it; check the "Environment variables" section to specify the TCP port of each container (you also have a lot of params to test).
And here you have a lot of configurations to test (LDAP, DB backends, etc.).
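A minimal docker-compose sketch of the two-container idea, using the official rundeck/rundeck image (ports and URLs are example values):

services:
  rundeck-real:
    image: rundeck/rundeck
    environment:
      RUNDECK_GRAILS_URL: http://localhost:4440
    ports:
      - "4440:4440"
  rundeck-test:
    image: rundeck/rundeck
    environment:
      RUNDECK_GRAILS_URL: http://localhost:5550
    ports:
      - "5550:4440"   # host port 5550 -> container port 4440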
If you use Rundeck with Docker, you must change init.sh.
That script overwrites the configuration on every container creation, so all your configuration updates get lost.
Changing it also avoids having configuration params in clear text in your docker-compose file...
The steps are:
create a docker-compose file as described on Rundeck's Docker Hub page
map volumes on your host so you can persist Rundeck's files and directories (see the sketch after these steps)
stop your container
comment out the configuration overwrite in init.sh
restart your container
You can then update Rundeck's config on the fly and just restart the Rundeck container to see the changes...
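As a sketch of the volume-mapping step, assuming the stock rundeck/rundeck image (exact container paths may differ between image versions):

services:
  rundeck:
    image: rundeck/rundeck
    ports:
      - "4440:4440"
    volumes:
      - ./rundeck/data:/home/rundeck/server/data       # embedded DB and job state
      - ./rundeck/config:/home/rundeck/server/config   # rundeck-config.properties, etc.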

Generate VMs based on Ansible Inventory prior to playbook run

So I'm looking at creating a generic wrapper around the ansible-playbook command.
What I'd like to do is spin up a number of VMs (Vagrant or Docker), based on the inventory supplied.
I'd use these VMs locally for automated testing using Molecule, as well as manual function testing.
Crucially, the number of machines in the inventory could change, so these need to be created prior to the run.
Any thoughts?
Cheers,
Stuart
You could use a tool like Terraform to run your Docker images, and then export the inventory from Terraform to Ansible using something like terraform-inventory.
I think there's also an Ansible provisioner for Terraform.
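If you'd rather stay inside Ansible, a minimal sketch is a pre-play on localhost that creates one Docker container per host in the supplied inventory (this assumes the community.docker collection and a hypothetical SSH-enabled test image):

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Start one test container per inventory host
      community.docker.docker_container:
        name: "{{ item }}"
        image: my-test-sshd:latest   # hypothetical image running sshd
        state: started
      loop: "{{ groups['all'] }}"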

Docker run script in host on docker-compose up

My question relates to best practices for running a script when docker-compose up is issued.
Currently I'm sharing a volume between host and container so that changes to the script are visible to both host and container.
It is similar to a watcher script polling for changes to a configuration file; the script has to act on the host when changes occur, according to predefined rules.
How could I start this script from the docker-compose up directive, or even from the service's Dockerfile, so that whenever the container comes up the "watcher" can pick up any changes being made and written?
The container in question will always run over a Debian / Ubuntu OS and should be architecture independent, meaning it should be able to run on ARM as well.
To be clear: I wish to run a script on the HOST, not inside the container. I need the host to change its network interface configuration to easily adapt to any environment; it is the HOST that needs to change, I repeat. This should be seamless to the user, and easily editable through a web interface running inside a CONTAINER, to adapt to new environments.
I currently do this with a script running on the host from crontab. I just wish to know the best practices and see examples of how to run a script on the HOST from INSIDE a CONTAINER, so that deployment can be as easy for the installing operator as just running docker-compose up.
"I just wish to know the best practices and examples of how to run a script on HOST from INSIDE a CONTAINER, so that the deploy can be as easy for the installing operator to just run docker-compose up"
It seems there is no best practice that applies to your case. A workaround proposed in How to run shell script on host from docker container? is to use a client/server trick:
The host should run a small server (choose a port and specify a request type that you should be waiting for)
The container, after it starts, should send this request to that server
The host should then run the script / trigger the changes you want
This is something that might have serious security issues, so use at your own risk.
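As a sketch, the container side of that trick could look like this in docker-compose (it assumes the host runs a small HTTP listener on port 9000 that triggers the network script; /apply-config is a made-up endpoint, and host-gateway requires Docker 20.10+):

services:
  watcher:
    image: alpine
    extra_hosts:
      - "host.docker.internal:host-gateway"   # make the host reachable from the container
    command: ["wget", "-qO-", "http://host.docker.internal:9000/apply-config"]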
The script needs to run continuously in the foreground.
In your Dockerfile use the CMD directive and define the script as the parameter.
When using the CLI, use docker run -d IMAGE SCRIPT.
You can create an alias for docker-compose up. Put something like this in ~/.bash_aliases (in Ubuntu):
alias up="docker-compose up; ~/your_script.sh"
I'm not sure whether running scripts on the host from a container is possible, but if it is, it's a severe security flaw. Containers should be isolated; that's the point of using containers.

How to start a Docker container using a shell script inside the AWS EC2 Container Service?

I have a Docker image and followed these steps https://console.aws.amazon.com/ecs/home?region=us-east-1#/firstRun
to push the Docker image to the AWS EC2 Container Service repo. After that: my container needs a shell script to start it, but I could not find any place to execute my shell script.
Can you tell me the correct way to run a Docker image using a shell script inside the AWS EC2 Container Service?
Why would you use a shell script to start your container? ECS provides this out of the box when you properly configure a task definition and a task to run on one of your clusters. Your container should start running automatically once all of the resources are properly configured.
What if I need to keep things self-contained? I have about 20 env variables configured in a bash script file that sets up the --env flags for the container. So for me it would be great to have a one-liner like "./run-app.sh" that sets up the --env flags and runs the container. Is that not possible with ECS?
/Morten
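In ECS the environment variables belong in the task definition rather than in a wrapper script. If you drive ECS from a compose file (for example with ecs-cli compose), a sketch is to move the ~20 variables into an env file (names below are placeholders):

services:
  app:
    image: my-app:latest   # placeholder image
    env_file:
      - app.env            # one KEY=value per line, replacing the --env flags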
