Makefile for running fig

I'm trying to create a make task that runs fig up, installing fig and Docker first if they aren't already installed. The problem I'm trying to address is giving newcomers an easy way to start working with a project.
I ended up with something like this:
.PHONY: up
up:
	command -v docker >/dev/null 2>&1 || {\
		curl -sSL https://get.docker.com/ubuntu/ | sudo sh;\
	};\
	command -v fig >/dev/null 2>&1 || {\
		curl -L https://github.com/docker/fig/releases/download/1.0.1/fig-`uname -s`-`uname -m` > /usr/local/bin/fig; chmod +x /usr/local/bin/fig;\
	};
	fig up;
and realized that it's not a simple task. Is there a community-adopted way to install and run docker and fig with make?

I wouldn't use make for this at all.
Especially not when the commands that need to be run are so simple and single-use.
Just create a bootstrap.sh or similar script and tell people that they can run it if they need to.
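For what that could look like, here's a minimal sketch of such a bootstrap.sh, reusing the install commands from the question (the fig version and URLs are the ones given there):
#!/bin/sh
# bootstrap.sh - one-time project setup for newcomers (sketch)
set -e

# install Docker if it's missing
command -v docker >/dev/null 2>&1 || \
    curl -sSL https://get.docker.com/ubuntu/ | sudo sh

# install fig if it's missing
if ! command -v fig >/dev/null 2>&1; then
    sudo curl -L "https://github.com/docker/fig/releases/download/1.0.1/fig-$(uname -s)-$(uname -m)" \
        -o /usr/local/bin/fig
    sudo chmod +x /usr/local/bin/fig
fi

fig up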

I'm not going to argue whether it's good practice or not, but I wrote a blog post about mixing make and fig:
http://www.byrnedo.com/2014/12/17/docker-fig-and-makefiles/
One advantage that has popped up is that I can swap fig for another tool very simply. That's relevant now that fig is becoming docker-compose: my interfacing scripts don't have to change, and they still call make start or whatever when booting up a cluster.
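As a hypothetical illustration of that indirection (the target and variable names here are made up, not taken from the blog post), keeping the tool name in one variable means the make interface never changes:
# the tool can be swapped in one place; later this becomes docker-compose
COMPOSE ?= fig

.PHONY: start stop
start:
	$(COMPOSE) up -d

stop:
	$(COMPOSE) stop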

Related

How to launch WSL as if I've logged in?

I have a WSL Ubuntu distro that I've set up so that when I log in, 4 services start, including a web API that I can test via Swagger to verify it is up and working.
I'm at the point where what I want to do now is start WSL via a script - that is, launch my distro, have all of the services start, and do it from Python. The problem is I cannot even figure out the correct syntax to get WSL to start from PowerShell in a manner where my services start.
Side note: "services" != systemctl (or similar) calls, but just executing bash CLI commands from either my .bashrc or .profile at login.
I've put the commands to execute in .profile and .bashrc. I've configured it both for root execution and non-root user execution. I've taken the commands out of those 2 files and put them into a script in the Windows file system that I pass in on the start of wsl, and I've put that shell script in the WSL file system as well. Nothing seems to work, and sometimes the distro starts and then stops after about 30 seconds.
Some of the PS CLI commands I've tried:
Start-Job -ScriptBlock{ wsl -d distro -u root }
Start-Job -ScriptBlock{ wsl -d distro -u root 'bash -i -l -c /root/bin/start.sh' }
Start-Job -ScriptBlock{ wsl -d distro -u root 'bash -i -l -c .\start.sh' }
wsl -d distro -u root -- bash -i -l -c /root/bin/start.sh
wsl -d distro -u root -- bash -i -l -c .\start.sh
wsl -d distro -u root -- /root/bin/start.sh
Permutations of the above that I've tried: replace root with my default login, and turning all of the Start-Job bash options into a comma-separated list of single-quoted strings (Ex: 'bash', '-i', '-l', ... ). Nothing I launch from the CLI will allow me access to the web API that is supposed to be hosted on my distro.
Any advice on what to try next?
Not necessarily an answer here as much as troubleshooting tips which will hopefully lead to an answer:
First, most of the forms that you are using seem to be correct. The only ones that absolutely shouldn't work are those that attempt to run the script from the Windows filesystem.
Make sure that you have a shebang line starting your script. I'm assuming you do, but other readers may come across this as well. For the moment, try this form:
#!/usr/bin/env -S bash -li
That's going to have the same effect as the bash -li you tried: it will source both interactive startup files such as ~/.bashrc and login profiles such as ~/.bash_profile (and /etc/profile.d/*, etc.).
Note that preferably, you won't need the -li. Best practice would be to move anything necessary for the services over from the startup scripts to your start.sh script, and avoid parsing the profile and rc. I need to go update some of my answers, since I just realized I've been guilty of giving some potentially bad advice ...
Specifically, though, I'm wondering if your interactive Bash config has something truly, well, "interactive" in it that might be preventing the automatic running of the script itself. Again, best practice would be for ~/.bashrc to only hold configuration that is needed for interactive shell sessions.
Make sure the script is set as executable (chmod +x start.sh). Again, I'm assuming this is the case for you.
With a shebang line and an executable script, use something like:
wsl -d distro -u root -e /root/bin/start.sh
The -e tells WSL to launch the script directly. Since it has a shebang line, it will be parsed by Bash. Most of the other forms you used above actually run Bash twice: once when launching WSL and again when it finds the shebang line in the script.
Try some basic troubleshooting for your script, like the following (a combined skeleton appears after this list):
Add set -x to the top (right under the shebang line) to turn on script debugging.
Add a ps -efH at the end to show the processes that are running when the script completes.
If needed, resort to quick-and-dirty echo statements to show how far things have progressed in the script.
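Putting those tips together, a debug-instrumented skeleton of start.sh might look like this (the service commands are placeholders for whatever your script actually runs):
#!/usr/bin/env -S bash -li
set -x                        # trace each command as it executes

echo "starting services..."
# <your service startup commands go here>

ps -efH                       # show the process tree once the script finishes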
I'm hopeful that the above will at least show you the problem, but if not, add the debugging info that you gain from this to your question, and we can troubleshoot further.

Unable to get any Docker Entrypoint from script working without continuous restarts

I'm having trouble understanding or finding a working example of using a bash script as the ENTRYPOINT of a Docker container. I've been trying numerous things for about 5 hours now.
Even following this official Docker blog, using a bash script as an entrypoint still doesn't work.
Dockerfile
FROM debian:stretch
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s /usr/local/bin/docker-entrypoint.sh / # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
docker-entrypoint.sh
#!/bin/bash
set -e
if [ "$1" = 'postgres' ]; then
    chown -R postgres "$PGDATA"
    if [ -z "$(ls -A "$PGDATA")" ]; then
        gosu postgres initdb
    fi
    exec gosu postgres "$@"
fi
exec "$@"
build.sh
docker build -t test .
run.sh
docker service create \
    --name test \
    test
Despite many efforts, I can't seem to get a Dockerfile whose entrypoint is a bash script to run without continuously restarting and failing.
My understanding is that exec "$@" was supposed to keep the container from immediately exiting, but I'm not sure if that depends on some other process within the script failing.
I've tried using a docker-entrypoint.sh script that simply looked like this:
#!/bin/bash
exec "$@"
And since that also failed, I think that rules out something else going wrong inside the script being the cause of the failure.
What's also frustrating is there are no logs, either from docker service logs test or docker logs [container_id], and I can't seem to find anything useful in docker inspect [container_id].
I'm having trouble understanding everyone's confidence in exec "$@". I don't want to resort to something like tail -f /dev/null or passing a command at docker run. I was hoping there would be some consistent, reliable way that a docker-entrypoint.sh script could be used to start services, one that would also work with docker run, but even with Docker's official blog and countless questions and blog posts elsewhere, I can't get a single example to work.
I would really appreciate some insight into what I'm missing here.
"$@" is just the command-line arguments as a string. You are providing none, so it is executing a null string. That exits and kills the container. Note also that exec always ends the running script: it destroys the current shell and replaces it with the new command; it doesn't keep the old one running.
What I think you want to do is keep calling this script in kind of a recursive way. To actually have the script call itself, the line would be:
exec $0
$0 is the name of the bash file (or function name, if in a function). In this case it would be the name of your script.
Also, I am curious about your desire not to use tail -f /dev/null. Creating a new shell over and over, as fast as the script can go, is not more performant. I am guessing you want this script to run over and over just to check your if condition.
In that case, a while true loop would probably work.
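In Bash that would look something like the sketch below; note, though, that as the next answer explains, exec "$@" with a real CMD is usually the better pattern:
#!/bin/bash
# keep the container's main process alive instead of exiting
while true; do
    # re-check your condition here
    sleep 1
done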
What you show, in principle, should work, and is one of the standard Docker patterns.
The interaction between ENTRYPOINT and CMD is pretty straightforward. If you provide both, then the main container process is whatever ENTRYPOINT (or docker run --entrypoint) specifies, and it is passed CMD (or the command at the end of docker run) as arguments. In this context, ending an entrypoint script with exec "$@" just means "replace me with the CMD as the main container process".
So, the pattern here is
Do some initial setup, like chowning a possibly-external data directory; then
exec "$#" to run whatever was passed as the command.
In your example there are a couple of things worth checking; it won't run as shown.
Whatever you provide as the ENTRYPOINT needs to obey the usual rules for executable commands: if it's a bare command, it must be in $PATH; it must have the executable bit set in its file permissions; if it's a script, its interpreter must also exist in the image; if it's a binary, it must be statically linked or have all of its shared-library dependencies already in the image. For your script, you might need to make it executable if it isn't already:
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
The other thing with this setup is that (definitionally) if the ENTRYPOINT exits, the whole container exits, and the Bourne shell set -e directive tells the script to exit on any error. In the artifacts in the question, gosu isn't a standard part of the debian base image, so your entrypoint will fail (and your container will exit) trying to run that command. (That won't affect the very simple case though.)
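One way to fix that (a sketch; gosu is packaged in Debian stretch, but verify for your base image) is to install it in the Dockerfile before the ENTRYPOINT can call it:
RUN apt-get update \
 && apt-get install -y --no-install-recommends gosu \
 && rm -rf /var/lib/apt/lists/*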
Finally, if you run into trouble running a container under an orchestration system like Docker Swarm or Kubernetes, one of your first steps should be to run the same container, locally, in the foreground: use docker run without the -d option and see what it prints out. For example:
% docker build .
% docker run --rm c5fb7da1c7c1
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"docker-entrypoint.sh\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
% chmod +x docker-entrypoint.sh
% docker build .
% docker run --rm f5a239f2758d
/usr/local/bin/docker-entrypoint.sh: line 3: exec: postgres: not found
(Using the Dockerfile and short docker-entrypoint.sh from the question, and using the final image ID from docker build . in those docker run commands.)

udev rule not working correctly, probably escaping issue

I'm trying to run a udev rule once a mount is ready on a Vagrant box:
SUBSYSTEM=="bdi",ACTION=="add",RUN+="/usr/bin/screen -m -d bash -c 'sleep 5; cd /vagrant/; sudo -E su -c "pm2 start daemon.json" vagrant;'"
But the command isn't running properly, since pm2 doesn't start.
When I execute /usr/bin/screen -m -d bash -c 'sleep 5; cd /vagrant/; sudo -E su -c "pm2 start daemon.json" vagrant;' manually it does work.
Any ideas?
The nested quotes are surely part of the problem, but the bigger problem is written in the udev manual:
This can only be used for very short-running foreground tasks. Running an event process for a long period of time may block all further events for this or a dependent device. Starting daemons or other long-running processes is not appropriate for udev; the forked processes, detached or not, will be unconditionally killed after the event handling has finished.
So your approach has to be changed. However, let’s suppose the command pm2 start daemon.json is appropriately short-running: your question is interesting anyway, because similar quote-nesting problems arise often. So please consider the rest of this answer as an example for the general case.
Instead of going mad with the correct escaping sequences, you can just write
RUN+="/usr/bin/screen -m -d bash -c 'sleep 5; cd /vagrant/; sudo -E -u vagrant pm2 start daemon.json'"
An even simpler solution might be
RUN+="/usr/bin/screen -m -d /usr/local/bin/start_vagrant_daemon"
where /usr/local/bin/start_vagrant_daemon is executable and has the following content:
#!/bin/bash
sleep 5
cd /vagrant/
sudo -E -u vagrant pm2 start daemon.json
Both solutions require setting up the correct sudo authorizations, either by editing /etc/sudoers or (better) by writing them in a new file /etc/sudoers.d/vagrant_daemon after enabling #includedir /etc/sudoers.d in /etc/sudoers.
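For example, a hypothetical /etc/sudoers.d/vagrant_daemon could look like the following; the pm2 path is an assumption, so adjust it to your installation, and always validate with visudo -c:
# /etc/sudoers.d/vagrant_daemon (hypothetical example)
# let root run pm2 as the vagrant user, preserving the environment for -E
root ALL=(vagrant) NOPASSWD:SETENV: /usr/bin/pm2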

convert Dockerfile to Bash script

Is there an easy way to convert a Dockerfile to a Bash script in order to install all the software on a real OS? The reason is that I can't change the Docker container, and I would like to change a few things afterwards if they don't work out.
In short - no.
By parsing the Dockerfile with a tool such as dockerfile-parse you could run the individual RUN commands, but this would not replicate the Dockerfile's output.
You would have to be running the same version of the same OS.
The ADD and COPY commands affect the filesystem, which is in its own namespace. Running these outside of the container could potentially break your host system. Your host will also have files in places that the container image would not.
VOLUME mounts will also affect the filesystem.
The FROM image (which may in turn be descended from other images) may have other applications installed.
Writing Dockerfiles can be a slow process if there is a large installation or download step. To mitigate that, try adding new packages as a new RUN command (to take advantage of the cache) and add features incrementally, only optimising/compressing the layers when the functionality is complete.
You may also want to use something like ServerSpec to get a TDD approach to your container images and prevent regressions during development.
Best practice docs here, gotchas and the original article.
Basically you can make a copy of a Docker container's file system using “docker export”, which you can then write to a loop device:
docker build -t <YOUR-IMAGE> ...
docker create --name=<YOUR-CONTAINER> <YOUR-IMAGE>
dd if=/dev/zero of=disk.img bs=1 count=0 seek=1G
mkfs.ext2 -F disk.img
sudo mount -o loop disk.img /mnt
docker export <YOUR-CONTAINER> | sudo tar x -C /mnt
sudo umount /mnt
Convert a Docker container to a raw file system image.
More info here:
http://mr.gy/blog/build-vm-image-with-docker.html
You can of course convert a Dockerfile to bash script commands. It's just a matter of determining what the translation means. All Docker installs apply changes to a "file system layer", which means all changes can be implemented in a real OS.
An example of this process is here:
https://github.com/thatkevin/dockerfile-to-shell-script
It is an example of how you would do the translation.
You can install applications inside a Dockerfile like this:
FROM <base>
RUN apt-get update -y
RUN apt-get install <some application> -y
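The shell translation of those RUN lines is then direct; a sketch, to be run as root on a matching Debian/Ubuntu system (<some application> is the same placeholder as above):
#!/bin/bash
# manual equivalent of the RUN instructions above
set -e
apt-get update -y
apt-get install <some application> -y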

How to run cloud-init manually?

I'm writing a CloudFormation template and I'm trying to debug the user-data script I provide in the template. How can I run cloud-init manually and make it perform the same actions it performs when starting a new instance?
You can just run it like this:
/usr/bin/cloud-init -d init
This runs the cloud-init setup with the initial modules. (The -d option is for debug.) If you want to run all the modules, you have to run:
/usr/bin/cloud-init -d modules
Keep in mind that the second time you run these commands they don't do much, since they have already run at boot time. To force them to run again after boot, clear cloud-init's state first:
( cd /var/lib/cloud/ && sudo rm -rf * )
In older versions the equivalent of cloud-init init is:
/usr/bin/cloud-init start
You may also find this question useful although it applies to the older versions of cloud-init: How do I make cloud-init startup scripts run every time my EC2 instance boots?
The documentation for cloud-init here just gives you examples, but it doesn't explain the command-line options or each of the modules, so you have to play around with different values in the config to get the results you want. Of course, you can also look at the code.
rm -f /var/log/cloud-init.log \
&& rm -Rf /var/lib/cloud/* \
&& cloud-init -d init \
&& cloud-init -d modules --mode final
Kudos to @Rico. Also, if you want to run a single module, either for testing or because your distro doesn't enable a module by default (hi Precise!), you can run:
/usr/bin/cloud-init -d single -n <module-name>
For example, when my distro doesn't run write_files by default (like a lot of old distros), I use this at the top of runcmd:
runcmd:
- /usr/bin/cloud-init -d single -n write-files
[I know it's not really an answer to the OP, but when I was looking to solve my problem this question was one of the top results, so I figure other people might find this useful.]
As documented, you can simply run
sudo cloud-init clean
and add --logs to clean all the log files as well.
Cloud-init will then redo everything when you reboot.
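For example, assuming a cloud-init recent enough to have the clean subcommand:
sudo cloud-init clean --logs   # wipe cloud-init state and its logs
sudo reboot                    # cloud-init runs again on the next boot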
On most Linux distros (including CentOS and Ubuntu), you can restart the cloud-init service using systemctl:
systemctl restart cloud-init
And then check the output of the journal to see the results:
journalctl -f -u cloud-init
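On releases new enough to have the status subcommand, you can also ask cloud-init directly whether it has finished:
cloud-init status --wait   # blocks until cloud-init completes, then prints the result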
On Amazon Linux 2, we figured out that cloud-init is run after initial launch and then removed. This caused a problem when we built custom AMIs with Packer and then wanted to launch them with user-data scripts. Here is the Packer shell provisioner (HCL2 format) we use at the end of a build to reset cloud-init:
provisioner "shell" {
inline = [
"echo 'Waiting for cloud-init'; while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 1; done; echo 'Done'",
"sudo yum install cloud-init -y",
"sudo cloud-init clean",
]
}
AMIs built with templates that have this will launch with cloud-init support.
