I'm writing a script that checks something about a Docker container; which container it is depends on user input. So I have code like this:
(define pipes (process (string-append "docker inspect " name)))
To get the result of calling docker inspect $name in the shell. How can I protect this from code injection? Someone could enter someName ; sudo rm -rf / --no-preserve-root and the result wouldn't be nice. I could make it so that it has the effect of docker inspect "$name", or even put it between single quotes, but in both cases someone could enter someName" or someName' instead and the problem is back.
Use the process* function, which takes separate arguments for the command name and its arguments and runs the program directly, bypassing the shell. Note that process* does not search PATH for you, so resolve the executable first with find-executable-path:
(process* (find-executable-path "docker") "inspect" name)
I have a .conf file that I source from my main script; the main script contains a case statement.
In this file, I have a series of ssh commands that I am combining into a single variable/array/function (I've tried multiple methods) to be executed when the condition is met in my case statement. For context, this is a script to auto-shutdown clients.
~/variables.conf
#!/bin/sh
CLIENT_1="ssh admin@1.1.1.1 shutdown -r now"
CLIENT_2="ssh admin@1.1.1.2 shutdown -r now"
CLIENT_ALL() { $CLIENT_1 ; $CLIENT_2 ; }
#also tried with similar results
#CLIENT_ALL="$CLIENT_1; $CLIENT_2"
#CLIENT_ALL=($CLIENT_1 $CLIENT_2)
To make sure this portion of the code works and the variables pass through, I run a test.sh and execute it from the CLI.
~/variables.test.sh
#!/bin/sh
. ~/variables.conf
CLIENT_ALL
Great, everything works. My two clients restart successfully - ssh keys stored so no prompt to enter password.
But when this is called from my case statement, things go wrong:
~/script.sh
#!/bin/sh
. ~/variables.conf
case $1 in
    trigger1)
        logger <message>                   #this is working fine
        printf <message> | msmtp <email>   #this is working fine
        CLIENT_ALL
        ;;
    *)
        logger "Unrecognized command: $1"
        ;;
esac
What happens when this triggers: it logs and it sends an email, but only the first client gets the ssh command to reboot. It runs the first variable $CLIENT_1 and then stops. I've tried a variety of ways to define and package the ssh commands, as well as a variety of ways to call them in the case statement, but always with the same results. I am certain there is something about case statement rules/logic that I am overlooking that will explain this behavior, and a correct way to make this work.
For my use-case, I need to use a case statement. My goal is to have a single command in the case statement so that the main script doesn't have to be modified - only the .conf needs to be updated if clients are added/removed.
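For illustration, here is a minimal sketch of a .conf built that way (a hypothetical file, not mine; the host names are placeholders). The client list lives in one variable, and CLIENT_ALL loops over it:
#!/bin/sh
# hypothetical variables.conf sketch: only CLIENTS needs editing
CLIENTS="admin@1.1.1.1 admin@1.1.1.2"
CLIENT_ALL() {
    for client in $CLIENTS; do
        # -n stops ssh from reading the caller's stdin, which can
        # otherwise swallow input intended for the rest of the script
        ssh -n "$client" shutdown -r now
    done
}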
Any help would be greatly appreciated.
Inb4 anyone says this is a bad idea: it's actually a reasonable approach for things like this.
I'm writing a Docker container that can execute user-provided commands through a contained OpenVPN connection in Docker, e.g. docker run vpntunnel curl example.com.
So the ENTRYPOINT of the image will fire up OpenVPN and, after the VPN tunnel is up, execute the user-provided CMD line.
Problem is, the standard way to run commands after OpenVPN is up is through the --up option of OpenVPN. Here is the man page description of this option:
--up cmd
Run command cmd after successful TUN/TAP device open (pre --user UID change).
cmd consists of a path to script (or executable program), optionally followed
by arguments. The path and arguments may be single- or double-quoted and/or
escaped using a backslash, and should be separated by one or more spaces.
So the reasonable approach here is for the ENTRYPOINT script to correctly escape the user-provided CMD line and pass the whole thing as one parameter to the --up option of OpenVPN.
In case my Docker image needs to perform some initializations after the tunnel is up and before the user command line is executed, I can prepend a script before the user-provided CMD line like this: --up 'tunnel-up.sh CMD...' and in the last line of tunnel-up.sh use "$@" to execute the user-provided arguments.
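A hypothetical tunnel-up.sh along those lines might be (the initialization step is a placeholder, not part of the question):
#!/bin/sh
# tunnel-up.sh: image-specific setup, then hand off to the user command
# ... initializations that must happen after the tunnel is up ...
exec "$@"    # execute the user-provided CMD line in place of this script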
Now, as you may guess, the only problem left is how to correctly escape an entire command line so it can be passed as a single argument.
The naive approach is just --up "tunnel-up.sh $@", but that surely can't distinguish between the command lines a b c and "a b" c.
In bash 4.4+ you can use the @Q parameter transformation to quote values:
--up "tunnel-up.sh ${*@Q}"
In prior versions you could use printf '%q' to achieve the same effect:
--up "tunnel-up.sh $((($#)) && printf '%q ' "$#")"
(The (($#)) check makes sure there are parameters to print before calling printf.)
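As a quick sanity check, here is a hedged sketch of what each form expands to under bash (expected output shown in comments):
set -- 'a b' c    # simulate the user-provided CMD line
echo "tunnel-up.sh ${*@Q}"
# tunnel-up.sh 'a b' 'c'
echo "tunnel-up.sh $( (($#)) && printf '%q ' "$@" )"
# tunnel-up.sh a\ b c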
I'm writing a Bourne shell deployment script, which runs some commands as root and some as the current user. I don't want to run everything as root, and I want to check upfront whether the commands I'll need are available to root (to prevent aborted, half-done deployments).
In order to do this, I want to make a function that checks if a command can be run as root. My idea was to do this:
sudo_command() {
    sudo sh -c 'type "$1"'
}
And then to use it like so:
required_sudo_commands="cp rm apt"
for command in $required_sudo_commands; do
    sudo_command "$command" || (
        echo "missing required command: $command";
        exit 1;
    )
done
As you might guess from my question here: it doesn't work. Do any of you see what I'm doing wrong?
I tried running the command inside sudo_command by itself, and that (miraculously, to me) did work. But when I put the command into a separate file, it didn't work.
There are two immediate problems:
The $1 not expanding in single quotes.
You can semi-fix this by expanding it in double quotes instead: sudo sh -c "type '$1'"
Your command not exiting. That's easily fixed by replacing your || (..) with || {..}.
(..) creates a subshell that limits the scope of everything inside it, including exit. To group commands, use {..} instead.
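Putting both fixes together, a hedged sketch (this variant passes the command name to the inner shell as a positional parameter, which avoids splicing user input into the quoted command string at all):
sudo_command() {
    # the trailing sh "$1" makes $1 a positional parameter of the inner shell
    sudo sh -c 'type "$1"' sh "$1"
}

for command in $required_sudo_commands; do
    sudo_command "$command" || {
        echo "missing required command: $command"
        exit 1
    }
done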
However, there is also the fundamental problem of trying to use sh -c 'type "$1"' to do anything.
One of the major points of sudo is the ability to limit what a user can and can't do. You're assuming that a user has complete, unrestricted access to run arbitrary commands as root, and that any problems are due to root not having these commands available.
That may be a valid assumption for you, but you may want to instead run e.g. sudo apt --version to get a better (but still incomplete) picture of whether you're allowed and able to run apt with sudo without requiring complete and unrestricted access.
I have two services A and B.
A sets a value in etcd as it's being started, say the public IP address, which it gets from an environment file:
ExecStartPost=/usr/bin/etcdctl set /A_ADDR $COREOS_PUBLIC_IPV4
B needs that value as it starts up, as well as its own IP address. So something like this would be nice:
ExecStart=/usr/bin/docker run -e MY_ADDR=$COREOS_PUBLIC_IPV4 -e A_ADDR=$ETCD_A_ADDR mikedewar/B
but that's obviously not possible, as etcd values aren't presented as systemd environment variables like that. Instead I can do some sort of /usr/bin/bash -c 'run stuff' in my ExecStart, but it's awkward, especially as I need systemd to expand $COREOS_PUBLIC_IPV4 and my new bash shell to expand $(etcdctl get /A_ADDR). It also reeks of code smell and makes me think I'm missing something important.
Can someone tell me the "right" way of getting values from etcd into my ExecStart declaration?
-- update
So I'm up and running with
ExecStart=/usr/bin/bash -c 'source /etc/environment && /usr/bin/docker run -e A_ADDR=$(/usr/bin/etcdctl get /A_ADDR) -e MY_ADDR=$COREOS_PUBLIC_IPV4 mikedewar/B'
but it's pretty ugly. Still can't believe I'm not missing something...
I was struggling with the same thing until recently. After reading much of the CoreOS and systemd documentation, here is a slightly 'cleaner' version of what you're doing:
[Service]
EnvironmentFile=/etc/environment
ExecStart=/bin/sh -c '/usr/bin/docker run -e A_ADDR=$(/usr/bin/etcdctl get /A_ADDR) -e MY_ADDR=$COREOS_PUBLIC_IPV4 mikedewar/B'
Additionally, I have adopted a pattern where my services depend on a systemd 'oneshot' service that computes some value and writes it into /etc/environment. This keeps more complex shell scripting out of the main service unit and places it in its own oneshot unit.
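For example, a hypothetical oneshot unit following that pattern might look like this (the unit and key names are invented for illustration):
# a-addr.service: compute a value once and publish it to /etc/environment
[Unit]
Description=Fetch A_ADDR from etcd

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c 'echo A_ADDR=$(/usr/bin/etcdctl get /A_ADDR) >> /etc/environment'
Dependent services then declare Requires= and After= on this unit and pick the value up via EnvironmentFile=/etc/environment.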
Here are the docs for EnvironmentFile: http://www.freedesktop.org/software/systemd/man/systemd.exec.html#EnvironmentFile=
Finally, a quick gotcha: you must use a shell invocation if you rely on shell features in your ExecStart/Stop commands. systemd does not run your command through a shell, so command substitutions like $(etcdctl get ...) will never be expanded; plain $VAR references are expanded by systemd itself, but only from the unit's own environment (e.g. an EnvironmentFile), not from etcd.
I am currently using the following workaround:
I've created a script which extracts data from a particular etcd directory:
#!/bin/bash
for entry in $(etcdctl ls /my_dir --recursive); do
    echo ' -e '$(grep -o '[^/]*$' <<< "${entry}")=$(etcdctl get "${entry}")
done
its output looks like the following:
-e DATABASE_URL=postgres://m:m@mi.cf.us-.rds.amazonaws.com:5432/m
-e WEB_CONCURRENCY=4
So then, eventually, I can place that in my init file in the following way:
/bin/sh -c '/usr/bin/docker run -p 9000:9000 $(/home/core/envs.sh) me/myapp -D FOREGROUND'
It's not the most elegant way, and I'd love to know how to improve it, but placing that for loop as a one-liner requires lots of escaping.
Can your container read directly from etcd as it starts, over the docker0 bridge IP, instead of having the values passed in? This would also let you apply more complex logic to the response, parse JSON if that's what you store as the etcd value, etc.
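For instance, a hedged sketch (the bridge address and client port depend on your setup; 172.17.0.1 and 2379 are common defaults):
# inside the container: read the key over etcd's HTTP API
curl -s http://172.17.0.1:2379/v2/keys/A_ADDR
The response is JSON, so the container can parse out .node.value and apply whatever logic it needs.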
Forgive me if this is something I've just completely missed; however, I have a remote server (a NAS) that I'd like to start a command running on while I do some work locally. Now, I believe I could probably do this with a command like:
ssh foo@bar 'cp -Rl /foo/bar /bar/foo'
However, I need a return value from part of the command in my main script, so I need ssh to return but leave the cp command running. For example:
foo=$(ssh foo@bar <<- REMOTE_COMMANDS
cp -Rl /foo/bar /bar/foo &
echo "foobar"
REMOTE_COMMANDS
)
However, I don't believe this returns until the cp command has completed, but if I use exit I think the cp is interrupted?
Is there another way to leave cp running, or will I need to run two ssh commands (one for the cp, one to get the return value I need)?
You can use one of the following choices:
tmux
nohup
screen
tmux & screen are some complete environments that can be attached and detached for 1 to N users.
If you need something straightforward, look nohup first.
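For this question specifically, a hedged sketch with nohup (the output redirection matters: ssh will not return while the remote command still holds stdout/stderr open):
# start cp detached on the NAS, then return immediately with a value
foo=$(ssh foo@bar 'nohup cp -Rl /foo/bar /bar/foo >/dev/null 2>&1 & echo "foobar"')
echo "$foo"    # prints: foobar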
You can use the screen command.
Simply create a new screen session using screen -R screen_name.
Run your command or code, then detach from that screen by pressing Ctrl-a followed by d.
If you want to switch back to the session, enter screen -r screen_name.
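To do the same thing non-interactively from the local machine, one option is to start the command in a detached session in one shot (a sketch; the session name is arbitrary):
# run the copy inside a detached screen session on the remote host
ssh foo@bar screen -dmS copyjob cp -Rl /foo/bar /bar/foo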
Hope it helps.