I installed AWX 19.5 in k8s. By default I found these pods, containers, and execution environments (EEs):
Pods
awx-postgres-0
awx-8631936913-23hfa
awx-operator-controller-manager-8631936913-23hfa
Containers
In awx-8631936913-23hfa:
awx-web
awx-task
awx-ee
redis
In the awx-ee container, I found that ansible, ansible-galaxy, etc. are installed.
Execution Environments
AWX EE (latest) - Image: quay.io/ansible/awx-ee:latest
Control Plane Execution Environment - Image: quay.io/ansible/awx-ee:latest
When I run a job template, it seems AWX creates a new pod:
...
automation-job-11-abcde
Even when I choose the default AWX EE (latest) as the Execution Environment, it still creates a new pod and then deletes it.
So what's the role of the awx-ee container in the awx-8631936913-23hfa pod? It seems that even setting ansible configuration or installing galaxy content there has no effect on jobs.
I also wonder why "awx-ee" and "automation-job" exist at the same time.
"automation-job" actually executes the playbooks, but awx-ee doesn't seem to do anything.
automation-job: created from the pulled ee_image (it could be a customized ee_image).
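For what it's worth, you can confirm which image the transient job pod actually runs while it exists, e.g. (pod name taken from the example above, namespace assumed to be awx):

# List the images used by the transient job pod (namespace and pod name are from the example above)
kubectl -n awx get pod automation-job-11-abcde \
  -o jsonpath='{.spec.containers[*].image}'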
Related
I have successfully created Ansible playbooks and roles to create and provision LXC containers on Proxmox. I'm now looking to use Ansible to run docker-compose files, ideally with the ability to spin up LXCs to run them on first.
I've created unprivileged containers successfully using Ansible; however, before being able to use Docker on the LXC I need to manually change the features of the container, e.g.
keyctl=1
nesting=1
Is anyone aware of a way to do this through an Ansible role?
See https://pve.proxmox.com/wiki/Linux_Container; you need a line matching:
features: [fuse=<1|0>] [,keyctl=<1|0>] [,mount=<fstype;fstype;...>] [,nesting=<1|0>]
in your /etc/pve/lxc/VMID.conf
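A minimal sketch of doing that from an Ansible role, assuming the play targets the Proxmox host itself and that vmid is a variable you define (the feature values are just the ones from the question):

# Hypothetical task: manage the features line in the container's config on the Proxmox host
- name: Enable keyctl and nesting for the LXC container
  ansible.builtin.lineinfile:
    path: "/etc/pve/lxc/{{ vmid }}.conf"
    regexp: '^features:'
    line: 'features: keyctl=1,nesting=1'
  become: true

The container typically has to be stopped and started again for the new features to take effect.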
Has anyone got a proper instruction set to upgrade Ansible Tower 3.4 to 3.6?
(Ansible 2.5, Database - postgres 9.6)
I found the Ansible docs, but they are not detailed.
Thanks
EDIT: The original question pertained to upgrading AWX. It's been edited and now pertains to upgrading Ansible Tower. My answer below only applies to upgrading AWX.
If you used the docker-compose installation method and pointed postgres_data_dir to a persistent directory on the host, upgrading AWX is straightforward. I deployed AWX 2.0.0 in 2018 and have upgraded it to every subsequent release (currently running 9.1.0) without issue. Below is my upgrade method which preserves all data including secrets between upgrades and does not rely on using the tower cli / awx cli tool.
AWX path assumptions:
Existing installation: /opt/awx
New release: /tmp/awx
AWX inventory file assumptions:
use_docker_compose=true
postgres_data_dir=/opt/postgres
docker_compose_dir=/var/lib/awx
Manual upgrade process:
Backup your AWX host before continuing! Consider backing up your postgres database as well.
Download the new release of AWX and unpack it to /tmp/awx
Ensure that the patch package is installed on the host.
Create a patch file containing the differences between the new and
existing inventory files:
diff -u /tmp/awx/installer/inventory /opt/awx/installer/inventory > /tmp/awx_inv_patch
Patch the new inventory file with the differences:
patch /tmp/awx/installer/inventory < /tmp/awx_inv_patch
Verify that the files now match:
diff -s /tmp/awx/installer/inventory /opt/awx/installer/inventory
Copy the new release directory over the existing one:
cp -Rp /tmp/awx/* /opt/awx/
Edit /var/lib/awx/docker-compose.yml and change the version numbers after image: ansible/awx_web: and image: ansible/awx_task: to match the new version of AWX that you're upgrading to (see the excerpt after these steps).
Stop the current AWX containers:
cd /var/lib/awx
docker-compose stop
Run the installer:
cd /opt/awx/installer
ansible-playbook -i inventory install.yml
AWX starts the upgrade process, which usually completes within a couple minutes. I'll typically monitor the upgrade progress with docker logs -f awx_web until I see RESULT 2 / OKREADY appear.
If everything is working as intended, I shut the containers down, pull and then recreate them using docker-compose:
cd /var/lib/awx
docker-compose stop
docker-compose pull && docker-compose up --force-recreate -d
If everything is still working as intended, I delete /tmp/awx and /tmp/awx_inv_patch.
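For reference, the docker-compose.yml edit in the steps above amounts to bumping only the image tags; the excerpt below is illustrative (your generated file may be laid out differently), so change nothing but the tags:

# Illustrative excerpt of /var/lib/awx/docker-compose.yml after editing
services:
  web:
    image: ansible/awx_web:9.1.0
  task:
    image: ansible/awx_task:9.1.0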
Upgrades of AWX are not supported by Ansible/Red Hat. Only the commercial Tower license gives access to the scripts and procedures to do this.
From the AWX project FAQ:
Q: Can I upgrade from one version of AWX to another?
A: Direct in-place upgrades between AWX versions are not supported. It is possible to migrate data between different versions of AWX using the tower-cli tool. To migrate between different instances of AWX, please follow the instructions at https://github.com/ansible/awx/blob/devel/DATA_MIGRATION.md.
The referenced link in the AWX project on GitHub explains how to export your current data with tower-cli and reimport it into the new version you install. Note that all credentials are exported with blank secrets, so you will have to update them with their passwords/secrets once imported.
I've seen a GitLab demo about using Infrastructure as Code. If I wanted to do the same on-prem, would this work: set up open-source GitLab on-prem, set up bash as a shell runner, and, using the shell runner, execute Ansible playbooks against the network equipment?
Having written out this answer, I now believe this question is at risk of closure for being either Too Broad or Primarily Opinion Based; but I already spent the effort to type it out, so here we go.
Set up open-source GitLab on-prem, set up bash as a shell runner, and, using the shell runner, execute Ansible playbooks against the network equipment?
I believe that's possible, or you can install ansible, along with any required python modules for your playbooks, into a docker image and then use the docker executor to run the playbooks inside a container. Using Tower or AWX is also possible, since they have the concept of projects run from source control.
The advantage of using the docker runner is that you don't have to pre-install ansible (along with its dependencies) on every runner host; the disadvantage of using the docker runner is that I could imagine ssh authentication from inside the container getting weird.
# hypothetical .gitlab-ci.yml
stages:
  - apply

run ansible playbook:
  stage: apply
  image: docker.example.com/my-ansible:2.8
  script:
    - ansible-playbook -i ./some-inventory -v playbook.yml
The advantage of using a dedicated system like AWX (or Tower) is that the inventory against which those playbooks run is also a formally managed entity in the system, and wouldn't require teaching GitLab how to make that available to your playbook. Same story with the authentication, since AWX has first-class support for a managed SSH keypair that can be conditionally granted to only certain playbook projects.
You can still have GitLab integrate with Tower by either using tower-cli or their rich API to launch a Job Template that has its Project configured to do an SCM update before launch.
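For example, a GitLab job step could launch a job template through the AWX/Tower REST API; the hostname, user, and template ID below are placeholders:

# Launch job template 42 via the AWX/Tower API (host, user, and ID are placeholders)
curl -s -X POST \
  -u "gitlab-bot:${AWX_PASSWORD}" \
  https://awx.example.com/api/v2/job_templates/42/launch/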
Is it somehow possible to build images without having Docker installed? On the Maven build of my project I'd like to produce a Docker image, but I don't want to force others to install Docker on their machines.
I can think of some VirtualBox image with Docker installed, but that is a rather heavy solution. Is there some way to build the image with just a Maven plugin, some Go code, or an already prepared VirtualBox image for exactly this purpose?
It boils down to the question of how to use Docker without forcing users to install anything, either just for the build or even for running Docker images.
UPDATE
There are some, not really up-to-date, Maven plugins for virtual machine provisioning with Vagrant or with VirtualBox. I have also found an article about building Docker images without Docker using Bazel.
So far I see two options: either I can somehow build the images only, or I can run some VM with a Docker daemon inside (which could be used not only for builds but also for integration tests).
We can create a Docker image without Docker being installed.
Jib Maven and Gradle Plugins
Google has an open source tool called Jib that is relatively new, but quite interesting for a number of reasons. Probably the most interesting thing is that you don’t need docker to run it - it builds the image using the same standard output as you get from docker build but doesn’t use docker unless you ask it to - so it works in environments where docker is not installed (not uncommon in build servers). You also don’t need a Dockerfile (it would be ignored anyway), or anything in your pom.xml to get an image built in Maven (Gradle would require you to at least install the plugin in build.gradle).

Another interesting feature of Jib is that it is opinionated about layers, and it optimizes them in a slightly different way than the multi-layer Dockerfile created above. Just like in the fat jar, Jib separates local application resources from dependencies, but it goes a step further and also puts snapshot dependencies into a separate layer, since they are more likely to change. There are configuration options for customizing the layout further.
Please refer to this link: https://cloud.google.com/blog/products/gcp/introducing-jib-build-java-docker-images-better
For an example with Spring Boot, see https://spring.io/blog/2018/11/08/spring-boot-in-a-container
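As a rough sketch of wiring Jib into a Maven build (the plugin version and target image below are placeholders, not taken from the links above):

<!-- jib-maven-plugin in the pom.xml build/plugins section; version and image are placeholders -->
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.4.0</version>
  <configuration>
    <to>
      <image>registry.example.com/my-app:latest</image>
    </to>
  </configuration>
</plugin>

With that in place, mvn compile jib:build pushes the image straight to the registry without a local Docker daemon, while mvn jib:dockerBuild targets a local daemon if you do have one.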
Have a look at the following tools:
Fabric8-maven-plugin - http://maven.fabric8.io/ - good maven integration, uses a remote docker (openshift) cluster for the builds.
Buildah - https://github.com/containers/buildah - builds without a docker daemon but does have other pre-requisites.
Fabric8-maven-plugin
The fabric8-maven-plugin brings your Java applications onto Kubernetes and OpenShift. It provides tight integration into Maven and benefits from the build configuration already provided. This plugin focuses on two tasks: building Docker images and creating Kubernetes and OpenShift resource descriptors.
fabric8-maven-plugin seems particularly appropriate if you have a Kubernetes / Openshift cluster available. It uses the Openshift APIs to build and optionally deploy an image directly to your cluster.
I was able to build and deploy their zero-config spring-boot example extremely quickly: no Dockerfile necessary; just write your application code and it takes care of all the boilerplate.
Assuming you have the basic setup to connect to OpenShift from your desktop already, it will package up the project .jar in a container and start it on Openshift. The minimum maven configuration is to add the plugin to your pom.xml build/plugins section:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.5.41</version>
</plugin>
then build+deploy using
$ mvn fabric8:deploy
If you require more control and prefer to manage your own Dockerfile, it can handle this too; this is shown in samples/secret-config.
Buildah
Buildah is a tool that facilitates building Open Container Initiative (OCI) container images. The package provides a command line tool that can be used to:
create a working container, either from scratch or using an image as a starting point
create an image, either from a working container or via the instructions in a Dockerfile
images can be built in either the OCI image format or the traditional upstream docker image format
mount a working container's root filesystem for manipulation
unmount a working container's root filesystem
use the updated contents of a container's root filesystem as a filesystem layer to create a new image
delete a working container or an image
rename a local container
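As a rough sketch of the Dockerfile path, assuming a Dockerfile in the project root and a placeholder registry:

# Build from a Dockerfile without a Docker daemon, then push the result
buildah bud -t my-app:latest .
buildah push my-app:latest docker://registry.example.com/my-app:latest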
I don't want to force others to install docker on their machines.
If by "without Docker installed" you mean without having to install Docker locally on every machine running the build, you can leverage the Docker Engine API which allow you to call a Docker Daemon from a distant host.
The Docker Engine API is a RESTful API accessed by an HTTP client such
as wget or curl, or the HTTP library which is part of most modern
programming languages.
For example, the Fabric8 Docker Maven Plugin does just that using the DOCKER_HOST parameter. You'll need a recent Docker version, and you'll have to configure at least one Docker daemon properly so it can securely accept remote requests (there are lots of resources on this subject, such as the official docs). From then on, your Docker build can be done remotely without having to install Docker locally.
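For instance (the host name and certificate path are placeholders, and this assumes the docker-maven-plugin is already configured in your pom.xml), pointing the build at a remote daemon is mostly a matter of environment variables:

# Point the docker-maven-plugin at a remote, TLS-protected daemon (values are placeholders)
export DOCKER_HOST=tcp://build-host.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker/build-host
mvn package docker:build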
Google has released Kaniko for this purpose. It should be run as a container, whether in Kubernetes, Docker or gVisor.
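A rough sketch of running the Kaniko executor from plain Docker (the context path and destination registry are placeholders):

# Kaniko builds the image inside the container without talking to any Docker daemon
docker run --rm \
  -v "$PWD":/workspace \
  -v "$HOME/.docker/config.json":/kaniko/.docker/config.json:ro \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile=/workspace/Dockerfile \
  --context=dir:///workspace \
  --destination=registry.example.com/my-app:latest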
I was running into the same problems and did not find any solution, so I developed odagrun, a runner for GitLab with an integrated registry API that can update Docker Hub, MicroBadger, etc.
It is open source and has an MIT license.
It's ideal for creating a docker image on the fly, without the need for a docker daemon or a root account, or any base image at all (image: scratch will do). It's currently still in development, but I use it every day.
Requirements
project repository on Gitlab
an OpenShift cluster (an openshift-online-starter will do for most medium/small projects)
An extract of how the docker image for this project was created:
# create and push image to ImageStream:
build_rootfs:
  image: centos
  stage: build-image
  dependencies:
    - build
  before_script:
    - mkdir -pv rootfs
    - cp -v output/oc-* rootfs/
    - mkdir -pv rootfs/etc/pki/tls/certs
    - mkdir -pv rootfs/bin-runner
    - cp -v /etc/pki/tls/certs/ca-bundle.crt rootfs/etc/pki/tls/certs/ca-bundle.crt
    - chmod -Rv 777 rootfs
  tags:
    - oc-runner-shared
  script:
    - registry_push --rootfs --name=test-$CI_PIPELINE_ID --ISR --config
This is an abstract question and I hope that I am able to describe it clearly.
Basically: what is the workflow for distributing source code to Kubernetes running in production? Since you don't run Docker with -v in production, how do you update running pods?
In production:
Do you use SaltStack to update each container in each pod?
Or
Do you rebuild Docker images and restart every pod?
Locally:
With Vagrant you can share a local folder for source code. With Docker you can use -v, but if you have Kubernetes running locally, how would you mirror production as closely as possible?
If you use Vagrant with boot2docker, how can you combine this with Docker -v?
The short answer is that you shouldn't "distribute source code"; you should rather "build and deploy". In terms of Docker and Kubernetes, you build and upload the container image to the registry and then perform a rolling update with Kubernetes.
It would probably help to take a look at the specific example script, but the gist is in the usage summary of the current Kubernetes CLI:
kubecfg [OPTIONS] [-u <time>] [-image <image>] rollingupdate <controller>
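kubecfg belongs to the earliest Kubernetes releases; the same build, push, and roll-out flow with a modern kubectl would look roughly like this (the Deployment name and registry are placeholders):

# Build and push a new image, then roll it out to the cluster
docker build -t registry.example.com/my-app:v2 .
docker push registry.example.com/my-app:v2
kubectl set image deployment/my-app my-app=registry.example.com/my-app:v2
kubectl rollout status deployment/my-app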
If you intend to try things out in development and are looking for instant code updates, I'm not sure Kubernetes helps much there. It's been designed for production systems, and shadow deploys are not the kind of thing one does sanely.