Docker run script in host on docker-compose up - bash

My question relates to best practices on how to run a script on a docker-compose up directive.
Currently I'm sharing a volume between host and container so that changes to the file are visible to both.
It works like a watcher script polling a configuration file for changes; the script has to act on the host, according to predefined rules, whenever the file changes.
How could I start this script on a docker-compose up directive, or even from the Dockerfile of the service, so that whenever the container goes up the "watcher" picks up any changes being written to the file?
The container in question will always run over a Debian / Ubuntu OS and should be architecture independent, meaning it should be able to run on ARM as well.
I wish to run a script on the Host, not inside the container. I need the Host to change its network interface configurations to easily adapt any environment The HOST needs to change I repeat.. This should be seamless to the user, and easily editable on a Web interface running Inside a CONTAINER to adapt to new environments.
I currently do this with a script running on the host, triggered by crontab. I just wish to know the best practices and examples of how to run a script on the HOST from INSIDE a CONTAINER, so that deployment can be as easy for the installing operator as just running docker-compose up.

I just wish to know the best practices and examples of how to run a script on the HOST from INSIDE a CONTAINER, so that deployment can be as easy for the installing operator as just running docker-compose up
It seems that there is no best practice that can be applied to your case. A workaround proposed in "How to run shell script on host from docker container?" is to use a client/server trick (a minimal sketch follows after the security note below):
The host should run a small server (choose a port and specify a request type that you should be waiting for)
The container, after it starts, should send this request to that server
The host should then run the script / trigger the changes you want
This is something that might have serious security issues, so use at your own risk.
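A minimal sketch of that trick, assuming a spare TCP port (9999 here), nc available on both sides, and a hypothetical update-interfaces.sh on the host; nc option syntax differs between netcat variants, so adjust as needed.
Host side, run outside Docker (a shell session, a systemd unit, etc.):

while true; do
  nc -l -p 9999 >/dev/null              # OpenBSD netcat: nc -l 9999
  /usr/local/bin/update-interfaces.sh   # your host-side script
done

Container side, e.g. an entrypoint wrapper started by docker-compose up (on Linux, add extra_hosts: ["host.docker.internal:host-gateway"] to the service so the name resolves):

#!/bin/sh
# poke the host listener once, then start the real service
nc -z host.docker.internal 9999 || true
exec "$@"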

The script needs to run continuously in the foreground.
In your Dockerfile use the CMD directive and define the script as the parameter.
When using the CLI, use docker run -d IMAGE SCRIPT
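For example, assuming a watcher.sh that loops in the foreground (script name, image name and base image are placeholders):

FROM debian:bookworm-slim
COPY watcher.sh /usr/local/bin/watcher.sh
RUN chmod +x /usr/local/bin/watcher.sh
CMD ["/usr/local/bin/watcher.sh"]

# or supply the script on the command line instead:
# docker run -d myimage /usr/local/bin/watcher.sh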

You can create an alias for docker-compose up. Put something like this in ~/.bash_aliases (in Ubuntu):
alias up="docker-compose up; ~/your_script.sh"
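Note that with this alias the script only starts once docker-compose up itself exits; if you want it to run right after the containers come up, a detached variant could be:
alias up="docker-compose up -d && ~/your_script.sh"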
I'm not sure if running scripts on the host from a container is possible, but if it's possible, it's a severe security flaw. Containers should be isolated, that's the point of using containers.

Related

Can we have more than one installation of Rundeck in a Linux server?

I have one installation of Rundeck in a Linux server and it is up & running on port 4440. But I want to have one more installation of it and expect it to run on another port. Is it possible? This question may look weird, but I want an additional setup of Rundeck for personal reasons.
Eagerly looking for help. Thanks in advance.
You can test your "personal instance" with a Docker container without touching the "real" instance (or use two Docker containers if you want). In both cases you need to specify different ports (for example 4440 for the "real" instance/container and 5550 for the "test" container).
Here you have the official Docker image and here how to run it; check the "Environment variables" section to see how to specify the TCP port of each container (you also have a lot of params to test).
And here you have a lot of configurations to test (LDAP, DB backends, etc.).
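A rough sketch of running the two instances side by side, assuming the official rundeck/rundeck image (container names, the 5550 mapping and the URL are illustrative):

# "real" instance on the default port
docker run -d --name rundeck-real -p 4440:4440 rundeck/rundeck
# "test" instance published on 5550 (Rundeck still listens on 4440 inside the container)
docker run -d --name rundeck-test -p 5550:4440 \
  -e RUNDECK_GRAILS_URL=http://localhost:5550 \
  rundeck/rundeck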
If you use Rundeck with Docker you must change the init.sh.
It is responsible for overwriting the configuration at each container creation, so all your configuration updates are lost.
Doing this also avoids having configuration params in clear text in your docker-compose file...
The steps are:
create a docker-compose file as mentioned on the Rundeck Docker Hub page
map volumes to your host so you can keep Rundeck's files and directories (a sketch follows below)
stop your container
comment out the config overwrite in init.sh
restart your container
You can then update Rundeck's config on the fly and just restart the Rundeck container to see the changes...
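For the volume-mapping step, a docker-compose snippet could look roughly like this; the host paths and the /home/rundeck layout are assumptions, so check the directories your image version actually uses:

services:
  rundeck:
    image: rundeck/rundeck
    ports:
      - "4440:4440"
    volumes:
      - ./rundeck/data:/home/rundeck/server/data      # jobs, projects, execution logs
      - ./rundeck/config:/home/rundeck/server/config  # the files init.sh would otherwise overwrite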

How to copy from HOST to CONTAINER while within container

I am sorry for taking up your time.
I have a local docker setup and I want to copy files from my local host to my container.
But the thing is that I need a command that I can use WHILE i am inside the container.
To explain the situation further: I executed "docker exec -it CONTAINERNAME bash" to enter my container,
and now I am on /var/www/html
and I need to find a way to copy a file/folder from my local environment into that container.
Reason: I am currently writing a dockerfile which automates the process of setting things up. I need that very specific command because a Dockerfile RUN-command can only be executed while inside the container.
What I tried:
"docker cp" is a good command to use when I am outside the container but it doesn't work while in the container.
"DOCKERFILE COPY" might do the trick but I need a general shell command to double-check that it really does what it is supposed to do. I must be able to reproduce the same process as my Dockerfile by manually executing the commands one by one.
Once again, I apologize for my inability to solve this problem by myself. My inexperience has caused me nothing but trouble.
Edit: I am using a Win10 64bit OS with dual monitor setup and a lefthanded mouse. My keyboard, albeit old, should possess all the necessary keys to replicate any essential keyboard-shortcuts if required. All my drivers are installed and updated.
When you build an image you need to put into it everything your container needs to work normally. You shouldn't copy files from the host once your image is built. You might use volumes as common storage for both the host and the container, but I don't think this is your case.
Until it is totally clear what you are doing, I'd suggest preparing all the data you need and putting it within the Docker build context, then building the image. You may also find docker-compose useful since, at the very least, it lets you define the build context and the path to your Dockerfile separately if needed.
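A minimal sketch of that approach; the base image, paths and file names are placeholders, the point being that the files live in the build context and are copied in at build time:

# Dockerfile
FROM php:8-apache
COPY my-files/ /var/www/html/my-files/

# build it from the directory containing both the Dockerfile and my-files/
# docker build -t myapp .

If you still want to verify the result by hand, the manual equivalent is docker cp executed from the host (not from inside the container):
docker cp ./my-files CONTAINERNAME:/var/www/html/my-files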

Adding dot config and debugging utilities to a docker instance

I've got a project where a Flask server is run as a docker service via docker-compose (other elements like other API servers, the DB, are modeled as separate services in Docker Compose).
In my dev flow there are times when it's useful for me to drop into a bash shell (via docker exec -it <container_id> bash) and do some debugging like poking around at the files in there, take some logs and write some quick scripts to do some transformations on them, etc. In these scenarios I find it would be useful to have things like my bashrc, bash_profile, and various scripts which I find useful to do this sort of thing inside the docker container.
Is there an easy way to package these things and inject them into a (running) container? I'd prefer to not have these various debug things in the main Dockerfile which is shared.
You could make a Dockerfile.debug which uses the image built by the actual Dockerfile as its base, then copy your bash files into that.
Alternatively, locate the relevant container directory in /var/lib/docker and just put the files there (on the host). A trick to find the correct onion slice is to exec into the container, do a touch hello.txt, and then just find that file on the host.
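A sketch of the Dockerfile.debug approach (the image tag and file names are assumptions):

# Dockerfile.debug - layers your debug dotfiles and scripts on top of the normal image
FROM myproject-flask:latest
COPY .bashrc .bash_profile /root/
COPY debug-scripts/ /usr/local/bin/

# build and exec into it when you need the tooling:
#   docker build -f Dockerfile.debug -t myproject-flask:debug .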

Can I use a DockerFile as a script?

We would like to leverage the excellent catalogue of DockerFiles on DockerHub, but the team is not in a position to use Docker.
Is there any way to run a DockerFile as if it were a shell script against a machine?
For example, if I chose to run the Docker container ruby:2.4.1-jessie against a server running only Debian Jessie, I'd expect it to ignore the FROM directive but be able to set the environment from ENV and run the RUN commands from this Dockerfile: Github docker-library/ruby:2.4.1-jessie
A Dockerfile is meant to be executed against an empty container or the image it builds on (via FROM). Knowledge of that environment (specifically the file system and all the installed software) is important, and running something similar outside of Docker might have side effects, because files end up in places where no files are expected.
I wouldn't recommend it.

Running docker as non-root user OR running jenkins on tomcat as root user

I am trying to build a Docker image using the docker-maven plugin, and plan to execute the mvn command using Jenkins. I have jenkins.war deployed on a Tomcat instance instead of as a standalone app, and it runs as a non-root user.
The problem is that docker needs to be run as root user, so maven commands need to be run as root user, and hence Jenkins/Tomcat needs to run as root, which is not a good practice (although my non-root user is also a sudoer, so I guess it won't matter much).
So bottom line, I see two solutions: Either run docker as non-root user (and I need help on how to do that)
OR
Run Jenkins as root (and I'm not sure how to achieve that, as I changed the environment variable/config and it still isn't switching to root).
Any advice on which solution to choose and how to implement it?
The problem is that docker needs to be run as root user, so maven commands need to be run as root user,
No, a docker run can be done with a -u (--user) parameter in order to use a non-root user inside the container.
Either run docker as non-root user
Your user (on the host) needs to be part of the docker group. Then you can run the docker service with that user.
As commented, this is not very secure.
See:
"chrisfosterelli/dockerrootplease"
"Understanding how uid and gid work in Docker containers"
That last link ends with the following findings:
If there’s a known uid that the process inside the container is executing as, it could be as simple as restricting access to the host system so that the uid from the container has limited access.
The better solution is to start containers with a known uid using the --user flag (you can use a username also, but remember that it’s just a friendlier way of providing a uid from the host’s username system), and then limiting access to the uid on the host that you’ve decided the container will run as.
Because of how uids and usernames (and gids and group names) map from a container to the host, specifying the user that a containerized process runs as can make the process appear to be owned by different users inside vs outside the container.
Regarding that last point, you now have user namespace (userns) remapping (since docker 1.10, but I would advise 17.06, because of issue 33844).
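In practice the two non-root options look roughly like this (user and image names are placeholders; group handling may differ per distro):

# option 1: let a non-root host user talk to the Docker daemon
# (effectively root-equivalent on the host, as discussed below)
sudo usermod -aG docker jenkins    # then re-login or restart the service

# option 2: keep daemon access as is, but run the containerized process
# under a non-root uid/gid taken from the host
docker run --rm -u "$(id -u):$(id -g)" my-build-image mvn -B package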
I am also stuck on how to set up a docker build server.
Here's where I see ground truth right now...
Docker commands require root privileges
This is because if you can run arbitrary docker commands, you have the same powers as root on the host. (You can run a container that is root internally, with a filesystem mount to anywhere on the host, thus allowing any root action.)
The "docker" group is a big lie IMHO. It's effectively the same as making the members root.
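A classic illustration of why docker-group membership is root-equivalent (don't run this anywhere you care about):

# any member of the docker group can mount the host's root filesystem
# into a container and chroot into it as root
docker run --rm -it -v /:/host alpine chroot /host /bin/sh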
The only way I can see to wrap docker with any kind of security for non-root use is to build custom bash scripts to launch very specific docker commands, then to carefully audit the security implications of those commands, then add those scripts to the sudoers file (granting passwordless sudo to non-root users).
In a world where we integrate docker into development pipelines (e.g. putting docker commands in Maven builds or allowing developers to make arbitrary changes to the build definitions of a docker build server), I have no idea how you maintain any security.
After a lot of searching and debugging of this issue over the last week, I found that the way to run a Maven docker container as non-root is to pass the user flag,
e.g. -u 1000.
But for this to work correctly, the user needs to exist in the /etc/passwd file of the image.
To work around this you can mount the host's (Jenkins) /etc/passwd file into the container and use a non-root user.
In your system command arguments for the docker run call, add the following to mount the correct volumes into the mvn image and allow the host's non-root user to be mapped inside the Maven container:
-v /share:/share -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro -v "$HOME/.m2":/var/maven/.m2:z -w /usr/src/mymaven -e MAVEN_CONFIG=/var/maven/.m2 -e MAVEN_OPTS="-Duser.home=/var/maven"
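Put together, a full invocation might look like this; the image tag, project mount, working directory and Maven goals are placeholders:

docker run --rm -u "$(id -u):$(id -g)" \
  -v /share:/share \
  -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro \
  -v "$HOME/.m2":/var/maven/.m2:z \
  -v "$PWD":/usr/src/mymaven \
  -w /usr/src/mymaven \
  -e MAVEN_CONFIG=/var/maven/.m2 \
  -e MAVEN_OPTS="-Duser.home=/var/maven" \
  maven:3-jdk-11 mvn -B verify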
I know this might not be the most informative answer, but it should work to run an mvn container as non-root, specifically to run otj-embedded-pg for integration tests that pass fine locally but fail on a Jenkins server.
See this link OTJ_EMBEDDED_RUN_IN_CI_SERVER
Most of the posters on that thread suggest creating a new image, but there is no need to do that: you can run the latest Maven docker image with the commands listed above and it works as it should.
Hope this helps somebody who might get stuck on this issue and saves them a few hours of work.

Resources