Condition on Script Status? - systemd

I have a systemd unit which formats a filesystem in the cloud.
According to the systemd unit documentation, there are many different Condition* settings that units accept to determine whether they should run or not. I need to be able to determine if /dev/xvdb is formatted as ext4, and the only way I've found to do that is something like this:
if ! blkid -t TYPE=ext4 | grep xvdb >/dev/null; then
    mkfs.ext4 /dev/xvdb
fi
While I can drop this bash script somewhere on the filesystem, it would seem more intuitive if systemd could execute a script to determine whether a service should conditionally start.
Is there a workaround for this which doesn't involve dropping a file on the filesystem?
The only two choices I see are:
Drop a bash script on the filesystem that always returns 0 and only formats if necessary.
Use ExecStartPost to touch a file on the filesystem to serve as a flag, and then ConditionPathExists=!/path.
Is there a way to have systemd invoke a script to determine whether it should execute a unit?

With systemd v243 and later (the feature landed in #12933), you can use ExecCondition=:
ExecCondition=sh -c '! blkid -t TYPE=ext4 | grep -q xvdb'
Be aware of how systemd interprets command lines and special executable prefixes, which can be useful (or surprising) in conditions.
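As a rough sketch of how the whole thing could look, here is a hypothetical oneshot unit (the mkfs.ext4 path may differ by distro):
[Unit]
Description=Create an ext4 filesystem on /dev/xvdb if one does not exist yet

[Service]
Type=oneshot
# The unit is skipped (not failed) when this command exits non-zero, i.e. when xvdb is already ext4
ExecCondition=/bin/sh -c '! blkid -t TYPE=ext4 | grep -q xvdb'
ExecStart=/sbin/mkfs.ext4 /dev/xvdb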

Related

How to launch WSL as if I've logged in?

I have a WSL Ubuntu distro that I've set up so that when I log in, 4 services start, including a web API that I can test via Swagger to verify it is up and working.
What I want to do now is start WSL via a script - that is, launch my distro, have all of the services start, and do it from Python. The problem is I cannot even figure out the correct syntax to get WSL to start from PowerShell in a manner where my services start.
Side note: "services" != systemctl (or similar) calls, but just bash CLI commands executed from either my .bashrc or .profile at login.
I've put the commands to execute in .profile & .bashrc. I've configured it both for root execution and non-root user execution. I've taken the commands out of those 2 files and put them into a script in the Windows file system that I pass in when starting wsl. And I've put that shell script in the WSL file system as well. Nothing seems to work, and sometimes the distro starts and then stops after about 30 seconds.
Some of the PS CLI commands I've tried:
Start-Job -ScriptBlock{ wsl -d distro -u root }
Start-Job -ScriptBlock{ wsl -d distro -u root 'bash -i -l -c /root/bin/start.sh' }
Start-Job -ScriptBlock{ wsl -d distro -u root 'bash -i -l -c .\start.sh' }
wsl -d distro -u root -- bash -i -l -c /root/bin/start.sh
wsl -d distro -u root -- bash -i -l -c .\start.sh
wsl -d distro -u root -- /root/bin/start.sh
Permutations of the above that I've tried: replacing root with my default login, and turning all of the Start-Job bash options into a comma-separated list of single-quoted strings (e.g. 'bash', '-i', '-l', ...). Nothing I launch from the CLI gives me access to the web API that is supposed to be hosted on my distro.
Any advice on what to try next?
Not necessarily an answer here as much as troubleshooting tips which will hopefully lead to an answer:
First, most of the forms that you are using seem to be correct. The only ones that absolutely shouldn't work are those that attempt to run the script from the Windows filesystem.
Make sure that you have a shebang line starting your script. I'm assuming you do, but other readers may come across this as well. For the moment, try this form:
#!/usr/bin/env -S bash -li
That's going to have the same effect as the bash -li you tried -- it will source both interactive startup files such as ~/.bashrc and login profiles such as ~/.bash_profile (and /etc/profile.d/*, etc.).
Note that preferably, you won't need the -li. Best practice would be to move anything necessary for the services over from the startup scripts to your start.sh script, and avoid parsing the profile and rc. I need to go update some of my answers, since I just realized I've been guilty of giving some potentially bad advice ...
Specifically, though, I'm wondering if your interactive Bash config has something truly, well, "interactive" in it that might be preventing the automatic running of the script itself. Again, best practice would be for ~/.bashrc to only hold configuration that is needed for interactive shell sessions.
Make sure the script is set as executable (chmod +x start.sh). Again, I'm assuming this is the case for you.
With a shebang line and an executable script, use something like:
wsl -d distro -u root -e /root/bin/start.sh
The -e tells WSL to launch the script directly. Since it has a shebang line, it will be interpreted by Bash. Most of the other forms you used above actually run Bash twice - once when launching WSL and again when it finds the shebang line in the script.
Try some basic troubleshooting for your script (a combined sketch follows this list):
Add set -x to the top (right under the shebang line) to turn on script debugging.
Add a ps -efH at the end to show the processes that are running when the script completes.
If needed, resort to quick-and-dirty echo statements to show how far the script has progressed.
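Putting those together, a debuggable version of start.sh might look roughly like this (the echo line and the placeholder comment are just illustrations):
#!/usr/bin/env -S bash -li
set -x                     # trace each command as it runs
echo "starting services..."
# ... your existing service start-up commands go here ...
ps -efH                    # show which processes are still running when the script ends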
I'm hopeful that the above will at least show you the problem, but if not, add the debugging info that you gain from this to your question, and we can troubleshoot further.

Unable to get any Docker Entrypoint from script working without continuous restarts

I'm having trouble understanding or seeing some working version of using a bash script as an Entrypoint for a Docker container. I've been trying numerous things for about 5 hours now.
Even following this official Docker blog post, using a bash script as an entrypoint still doesn't work.
Dockerfile
FROM debian:stretch
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s /usr/local/bin/docker-entrypoint.sh / # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
docker-entrypoint.sh
#!/bin/bash
set -e
if [ "$1" = 'postgres' ]; then
chown -R postgres "$PGDATA"
if [ -z "$(ls -A "$PGDATA")" ]; then
gosu postgres initdb
fi
exec gosu postgres "$#"
fi
exec "$#"
build.sh
docker build -t test .
run.sh
docker service create \
--name test \
test
Despite many efforts, I can't seem to get a Dockerfile whose ENTRYPOINT is a bash script to run without continuously restarting and failing repeatedly.
My understanding is that exec "$@" was supposed to keep the container from immediately exiting, but I'm not sure if that depends on some other process within the script failing.
I've tried using a docker-entrypoint.sh script that simply looked like this:
#!/bin/bash
exec "$#"
And since that also failed, I think that rules out anything else inside the script being the cause of the failure.
What's also frustrating is there are no logs, either from docker service logs test or docker logs [container_id], and I can't seem to find anything useful in docker inspect [container_id].
I'm having trouble understanding everyone's confidence in exec "$@". I don't want to resort to something like tail -f /dev/null or passing a command at docker run. I was hoping there would be some consistent, reliable way that a docker-entrypoint.sh script could be used to start services, one that would also work with plain docker run, but even with Docker's official blog and countless questions and blog posts elsewhere, I can't seem to get a single example to work.
I would really appreciate some insight into what I'm missing here.
"$@" expands to the command line arguments. You are providing none, so it expands to nothing, exec has nothing to run, and the shell exits, which kills the container. Also note that exec never returns to the running script: it replaces the current shell process with the command it runs, so it doesn't keep the script running.
What I think you want to do is keep calling this script in kind of a recursive way. To actually have the script call itself, the line would be:
exec $0
$0 is the name of the bash file (or function name, if in a function). In this case it would be the name of your script.
Also, I am curious about your reluctance to use tail -f /dev/null. Re-executing the script over and over as fast as it can go is not more performant. I am guessing you want this script to run repeatedly just to check your if condition.
In that case, a while true loop would probably work.
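In shell, that would be something along the lines of (the sleep interval is arbitrary):
while true; do
    sleep 60   # do nothing; just keep the container's main process alive
done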
What you show, in principle, should work, and is one of the standard Docker patterns.
The interaction between ENTRYPOINT and CMD is pretty straightforward. If you provide both, then the main container process is whatever ENTRYPOINT (or docker run --entrypoint) specifies, and it is passed CMD (or the command at the end of docker run) as arguments. In this context, ending an entrypoint script with exec "$@" just means "replace me with the CMD as the main container process".
So, the pattern here is
Do some initial setup, like chowning a possibly-external data directory; then
exec "$#" to run whatever was passed as the command.
In your example there are a couple of things worth checking; it won't run as shown.
Whatever you provide as the ENTRYPOINT needs to obey the usual rules for executable commands: if it's a bare command, it must be in $PATH; it must have the executable bit set in its file permissions; if it's a script, its interpreter must exist in the image; if it's a binary, it must be statically linked or all of its shared-library dependencies must already be in the image. For your script, you might need to make it executable if it isn't already:
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
The other thing with this setup is that (definitionally) if the ENTRYPOINT exits, the whole container exits, and the Bourne shell set -e directive tells the script to exit on any error. In the artifacts in the question, gosu isn't a standard part of the debian base image, so your entrypoint will fail (and your container will exit) trying to run that command. (That won't affect the very simple case though.)
Finally, if you run into trouble running a container under an orchestration system like Docker Swarm or Kubernetes, one of your first steps should be to run the same container, locally, in the foreground: use docker run without the -d option and see what it prints out. For example:
% docker build .
% docker run --rm c5fb7da1c7c1
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"docker-entrypoint.sh\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
% chmod +x docker-entrypoint.sh
% docker build .
% docker run --rm f5a239f2758d
/usr/local/bin/docker-entrypoint.sh: line 3: exec: postgres: not found
(Using the Dockerfile and short docker-entrypoint.sh from the question, and using the final image ID from docker build . in those docker run commands.)

Running docker commands with bash variables

For some reason, running docker commands from bash scripts doesn't work when I add regular variables. Example:
c=$(date +"%x")
targets="www.example.com"
docker build -t amass https://github.com/OWASP/Amass.git
docker run amass --passive -d $targets > $c.txt
The error is as follows:
./main.sh: 13: ./main.sh: cannot create 12/29/2018.txt: Directory nonexistent
Running the same commands directly in a terminal works. How can I fix this?
In your situation, it is risky to use the %x option of date, which stands for:
%x locale's date representation (e.g., 12/31/99)
You don't control its output, and you may get different behaviour between your test computer and the Docker host if the locale differs.
In any case, a date format containing slashes ('/') will be interpreted as directory separators in the redirection target and lead to errors.
For both reasons, you should define the format of your date.
For instance:
#!/bin/bash
c=$(date +'%Y-%m-%d-%H-%M-%S')
targets="www.example.com"
docker build -t amass https://github.com/OWASP/Amass.git
docker run amass --passive -d $targets > $c.txt
You should include as much information (hour, minute, second, ...) in your date format as is needed for how often you run the script; otherwise, the output of the previous run will be overwritten.

SVN filtering commits by messages

I have to accept or reject commits to a particular repository based on the commit message (using hooks). I don't know how to do it, and I have to do it on a Windows machine. I read somewhere that I should modify the pre-commit.tmpl file to accept only commit messages containing a particular word, so I modified this statement:
SVNLOOK=/usr/local/bin/svnlook
$SVNLOOK log -t "$TXN" "$REPOS" | \
grep ""[a-zA-Z0-9]"" > /dev/null || exit 1
into this:
SVNLOOK=/usr/local/bin/svnlook
$SVNLOOK log -t "$TXN" "$REPOS" | \
grep "^.*hello.*$" > /dev/null || exit 1
Also, it says to change the .tmpl extension for Windows, but I don't know if a grep search is the right approach. What is the alternative for doing the same task?
The examples inside the .tmpl files are written for Unix and use Unix commands. You need to install the appropriate Unix tools and adapt the scripts to your platform (modifying paths, etc.).
On Windows you also need to rename the file to .bat so it is executable.
Note that no environment variables are available in hook scripts.
I would recommend using Python as a platform-independent way of writing hook scripts. There are tons of Python hook scripts available.
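For reference, the Unix-style version of the check from the question, written out as a complete hook, would look roughly like this (the rejection message is an assumption; Subversion passes the repository path and transaction name as the first two arguments):
#!/bin/sh
REPOS="$1"
TXN="$2"
SVNLOOK=/usr/local/bin/svnlook

# Reject the commit unless the log message contains the word "hello"
if ! "$SVNLOOK" log -t "$TXN" "$REPOS" | grep -q "hello"; then
    echo "Commit rejected: the log message must contain 'hello'." >&2
    exit 1
fi
exit 0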

Bash script to copy source catalog to two destination catalogs, verify and delete source if successful

OS: OSX mountain lion
I am trying to write a script that does the following:
Check if file1 exists on destination 1 (Bitcasa).
If it exists, copy the source folder to destination 1.
If the file does not exist, find the Bitcasa process and kill it, wait 60 sec, then start Bitcasa.
Try again (loop?). # Bitcasa sometimes stops working and has to be restarted.
Check if file2 exists on destination 2 (NFS share).
If it exists, copy the source folder to destination 2.
If the file does not exist, try to mount the NFS share.
Try again (loop?).
Verify the copied files.
If the files copied successfully, delete the source files.
I only want the script to try a few times; if it can't ping the NAS host it should give up and try again the next time the script runs. I want to run the script every 2 hours. crontab seems to have been removed in Mountain Lion.
When I write this down I realize it is a bit more complicated than I first thought.
First, regarding mounting an NFS share: in OS X, if you eject a mounted NFS share, the folder in /Volumes gets removed. What is the best way to make sure an NFS share is always mounted when the NAS is available? This might be handled outside the script?
If I manually mount the NFS share I will need to create /Volumes/media, which means that if I later use the GUI to mount the share it will end up at /Volumes/media-1/, since /Volumes/media will already exist.
Regarding killing a process by name, since I can't know the PID, I tried a Linux command I found:
kill $(ps -ef | grep bitcasa | grep -v grep | awk '{print $2}')
but this did not work.
I have no idea how to check if all files were successfully copied, maybe rsync can take care of this?
I have started with this (not tested)
#check if bitcasa is running (if file exists)
if [ -f /Volumes/Bitcasa\ Infinite\ Drive/file.ext ]
then
    rsync -avz /Users/username/source /Volumes/Bitcasa\ Infinite\ Drive/destination/
else
    : # Bitcasa might have stopped; check if the process is running, kill it if it is, then start Bitcasa
fi

#Check if nfs share is mounted (if file exists)
if [ -f /Volumes/media/file.ext ]
then
    rsync -avz /Users/username/source /Volumes/media/
else
    : # nfs share (192.168.1.106:/media/) needs to be mounted on /Volumes/media
fi
I will do some more work on it myself but I know I will need help.
Or am I making this way too complicated? Maybe a backup program can do this?
For your kill ... ps problem, you can use killall, which kills all processes having a given name
killall bitcasa
or see man ps and use a user defined format, which simplifies the selection
ps -o pid,comm | awk '/bitcasa/ { print $1; }' | xargs kill
For the NAS, if you can log into it and install rsync and ssh (or have them installed already), you don't need to mount anything. You can just give 192.168.1.106:/media/ as the destination to rsync, and rsync will do everything necessary.
In any case, first check and mount if necessary, and only start rsync once everything is set up properly, not the other way round:
if [ ! -f "/Volumes/Bitcasa Infinite Drive/file.ext" ]; then
# kill bitcasa, restart bitcasa
fi
rsync -avz /Users/username/source "/Volumes/Bitcasa Infinite Drive/destination/"
The same goes for the NAS:
if [ ! -f "/Volumes/media/file.ext" ]; then
# mount nas nfs share
fi
rsync -avz /Users/username/source "/Volumes/media/"
Or, if you have rsync and ssh on your NAS, just:
rsync -avz /Users/username/source 192.168.1.106:/media/
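The question also asked for the script to try only a few times and give up if the NAS is unreachable; a minimal sketch of that kind of bounded retry (host and paths are the ones from the question, the attempt count and sleep are arbitrary):
nas=192.168.1.106
for attempt in 1 2 3; do
    if ping -c 1 "$nas" >/dev/null 2>&1; then
        rsync -avz /Users/username/source "$nas":/media/
        exit 0
    fi
    sleep 60
done
echo "NAS not reachable after 3 attempts, giving up until the next run" >&2
exit 1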
