Environment variables apparently not being passed to a systemd service invocation - go

Here is the case:
I am writing a go program.
At some point, the program calls the terragrunt CLI via the os/exec package.
The program is run on a machine having systemd version 232.
Up till now, I have been invoking terragrunt with some env vars set (required by terragrunt, as we will see below).
These env vars are passed to the login process by /etc/profile.d/terragruntvars as in
export TF_VAR_remote_state_bucket=my-bucket-name
So when I run, say, terragrunt plan in my terminal, then via the appropriate interpolation in my tf/hcl files I get something like this (debug-level output showing the actual terraform invocation terragrunt ends up performing):
terraform init -backend-config=my-bucket-name ...(more flags following)
My Go program (which invokes the terragrunt CLI via os/exec) runs perfectly via go run main.go.
I decide to make this a systemd service as in
[Service]
ExecStart=/bin/sh -c myprogram
EnvironmentFile=/etc/myprogram/config
User=someuser
Group=somegroup
[Install]
WantedBy=multi-user.target
The program started failing miserably. Searching for the root cause, I found that the TF_VAR_* variables were never passed to the service, so the terraform command ended up like
terraform init -backend-config=(this is empty, nothing here)
I thought that explicitly invoking the program via a shell, i.e. by making ExecStart=/bin/sh -c myprogram, would address the problem.
Here come the weird(est) parts.
Adding these vars to EnvironmentFile=/etc/myprogram/config did not have any effect on the terragrunt execution. When I say no effect, I mean the variables did become available to the service, but the command is still broken, i.e.
terraform init -backend-config=(this is empty, nothing here)
However, the TF_VAR_* variables ARE there. I executed env from within my program (again via os/exec) and it did print them.
This has been driving me nuts so any hint about what might be causing this would be highly appreciated.

Just like a shell will not pass its (unexported) variables on to child processes:
$ X=abc
$ bash -c 'echo $X' # prints nothing
unless you export the environment variable:
$ export X
$ bash -c 'echo $X' # abc
similarly, with systemd and EnvironmentFile: to pass environment variables from the service manager through to the service, use PassEnvironment, e.g.
PassEnvironment=VAR1 VAR2 VAR3
From the docs:
PassEnvironment=
Pass environment variables set for the system service manager to executed processes.
Takes a space-separated list of variable names...
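Applied to the unit from the question, that might look like the following sketch (the variable name is assumed; check which of the two mechanisms matches where your variables actually live):

```ini
[Service]
# Variables defined in this file are set in the service's environment:
EnvironmentFile=/etc/myprogram/config
# Variables from the service manager's own environment must be
# passed through explicitly, by name:
PassEnvironment=TF_VAR_remote_state_bucket
ExecStart=/bin/sh -c myprogram
User=someuser
Group=somegroup

[Install]
WantedBy=multi-user.target
```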

Related

Get environment variables from pseudo-terminal

I have a program that uses https://github.com/creack/pty to create pseudo terminals. Given a command, it creates a file object that you can read from and write to, and that serves as the command's stdin and stdout.
I use a WebSocket to read & write commands to the program. I have configured it to run from the home directory of root user as well as current user based on my selection.
Initialise
cmd := exec.Command("/bin/sh", "env")
Set Command Execution path
cmd.Dir = "/var/root" // for system user
// (or)
cmd.Dir = "/Users/user_name" // for current user
Start the command with a pty.
ptmx, err := pty.Start(cmd) // ptmx is of type *os.File
This works fine, but when I try to print environment variables it does not show all the environment variables for that particular user or the root user.
Is there any way to get environment variables from pseudo-terminal for root user/current user?
The reason you're not getting the expected output is that env is not a shell script. At an interactive prompt, try running the command your Go program is running. Here is what I see:
> /bin/sh env
/usr/bin/env: /usr/bin/env: cannot execute binary file
Try running /bin/sh -c env instead. However, since your command does not contain any shell syntax and does nothing more than execute the env binary you don't need /bin/sh at all. Just exec env.
Also, you seem to be under some misconceptions. There is no such thing as "user environment variables". Similarly, a pty does not have env vars. Environment variables are private to each process (and a pty is not a process).
It is true that interactive shells may set env vars by automatically reading various shell config files such as ~/.bashrc before showing the first prompt but I don't think that's what you're referring to since your /bin/sh -c env won't start an interactive shell; even though its stdin and stdout are attached to a pty.

Can /bin/bash -c export variable to parent shell

I'm trying to run arbitrary bash commands in the shell, but I can only access the shell by running /bin/bash -c
Is there anyway of being able to run something like:
/bin/bash -c "export FOO=bar"
and then see FOO set in the original shell?
No.
This isn't shell-specific -- no UNIX process can change a parent process's environment variables without that parent process honoring an interface (for instance, reading new variables/values from stdout), or using unreliable and unsupportable hackery (like attaching to the parent process with a debugger and calling setenv() directly).
Consider ssh-agent as an example:
$ ssh-agent
SSH_AUTH_SOCK=/var/folders/t2/t58p1nwx1g38tkhykqfhvmm80000gn/T//ssh-0HSNi1V5h9wf/agent.17313; export SSH_AUTH_SOCK;
SSH_AGENT_PID=17314; export SSH_AGENT_PID;
echo Agent pid 17314;
...thus, documented for use with a pattern akin to:
$ eval "$(ssh-agent)"
In this case, that interface is eval-able shell code; however, as this is trivially used to execute arbitrary commands, supporting this interface is a security risk.
Since your goal is to use the result of shell commands to modify the environment of a program that isn't written in a shell language at all, and thus doesn't support eval or source, you can use a safer stream format, such as a NUL-delimited stream. For instance, if your shell program writes key=val\0 pairs, with literal NUL characters for \0, you can do something akin to the following in Python:
import os

for env_val in s.split('\0'):
    if '=' not in env_val:
        continue
    k, v = env_val.split('=', 1)
    os.environ[k] = v
...ported to your language of choice. To write in this format from shell:
printf '%s=%s\0' "$key" "$val"
...will suffice.
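Since the rest of this thread is Go, a port of that Python loop might look like the following sketch (it assumes the NUL-delimited stream has already been read into a string; applying the result via os.Setenv is left to the caller):

```go
package main

import (
	"fmt"
	"strings"
)

// parseNULEnv splits a NUL-delimited stream of key=val pairs into a
// map, skipping any entry that has no '=' or an empty key.
func parseNULEnv(s string) map[string]string {
	env := make(map[string]string)
	for _, pair := range strings.Split(s, "\x00") {
		k, v, ok := strings.Cut(pair, "=")
		if !ok || k == "" {
			continue
		}
		env[k] = v
	}
	return env
}

func main() {
	fmt.Println(parseNULEnv("FOO=bar\x00WEB_CONCURRENCY=4\x00junk"))
}
```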

How do I get etcd values into my systemd service on CoreOS?

I have two services A and B.
A sets a value in etcd as it's being started, say the public IP address which it gets from an environment file:
ExecStartPost=/usr/bin/etcdctl set /A_ADDR $COREOS_PUBLIC_IPV4
B needs that value as it starts up, as well as its own IP address. So something like this would be nice:
ExecStart=/usr/bin/docker run -e MY_ADDR=$COREOS_PUBLIC_IPV4 -e A_ADDR=$ETCD_A_ADDR mikedewar/B
but that's obviously not possible, as etcd values don't present themselves as systemd environment variables like that. Instead I can do some sort of /usr/bin/bash -c 'run stuff' in my ExecStart, but it's awkward, especially as I need systemd to expand $COREOS_PUBLIC_IPV4 and my new bash shell to expand $(etcdctl get /A_ADDR). It also reeks of code smell and makes me think I'm missing something important.
Can someone tell me the "right" way of getting values from etcd into my ExecStart declaration?
-- update
So I'm up and running with
ExecStart=/usr/bin/bash -c 'source /etc/environment && /usr/bin/docker run -e A_ADDR=$(/usr/bin/etcdctl get /A_ADDR) -e MY_ADDR=$COREOS_PUBLIC_IPV4 mikedewar/B'
but it's pretty ugly. Still can't believe I'm not missing something..
I was struggling with the same thing until recently. After reading much of the CoreOS and systemd documentation, here is a slightly 'cleaner' version of what you're doing:
[Service]
EnvironmentFile=/etc/environment
ExecStart=/bin/sh -c '/usr/bin/docker run -e A_ADDR=$(/usr/bin/etcdctl get /A_ADDR) -e MY_ADDR=$COREOS_PUBLIC_IPV4 mikedewar/B'
Additionally, I have adopted a pattern where my services depend on a systemd 'oneshot' service that computes some value and writes it to /etc/environment. This allows you to keep more complex shell scripting out of the main service unit and place it in its own oneshot service unit.
Here are the docs for EnvironmentFile: http://www.freedesktop.org/software/systemd/man/systemd.exec.html#EnvironmentFile=
Finally, a quick gotcha: you must use a shell invocation if you use any variable in your ExecStart/Stop commands. systemd does no shell invocation when executing the command you provide, so variables will not be expanded.
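A sketch of the oneshot pattern mentioned above (unit names, the output file, and the etcd key are made up for illustration):

```ini
# compute-env.service -- runs once and writes the computed values
[Unit]
Description=Compute environment for myapp

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo A_ADDR=$(/usr/bin/etcdctl get /A_ADDR) > /run/myapp.env'

# myapp.service -- consumes them
[Unit]
Requires=compute-env.service
After=compute-env.service

[Service]
EnvironmentFile=/run/myapp.env
ExecStart=/usr/bin/docker run -e A_ADDR=${A_ADDR} mikedewar/B
```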
I am currently using such a workaround:
I've created scripts which extracts data from particular etcd directory
#!/bin/sh
for entry in `etcdctl ls /my_dir --recursive` ; do
    echo ' -e '`grep -o '[^/]*$' <<< ${entry}`=`etcdctl get ${entry}`
done
its output looks like the following:
-e DATABASE_URL=postgres://m:m#mi.cf.us-.rds.amazonaws.com:5432/m
-e WEB_CONCURRENCY=4
So then in my unit file I can use it like this:
/bin/sh -c '/usr/bin/docker run -p 9000:9000 $(/home/core/envs.sh) me/myapp -D FOREGROUND'
It's not the most elegant way, and I'd love to know how to improve it, but placing that for loop as a one-liner requires lots of escaping.
Can your container read directly from etcd as it starts, over the docker0 bridge IP, instead of having the values passed in? This would also allow you to do more complex logic on the response, parse JSON if you are storing it as the etcd value, etc.

Want to export environment variable from startup script to other shells

I'm working on an embedded system using Busybox as the shell. My startup script rcS exports a number of variables:
UBOOT_ENV="gatewayip netmask netdev ipaddr ethaddr eth1addr hostname nfsaddr"
for i in $UBOOT_ENV; do
    if [ -n "$i" ] ; then
        export `fw_printenv $i`
    fi
done
which are then available to scripts called from this script, as I'd expect. What I want, however, is for these environment variables to be set in the environment in which some web server scripts are called. This is currently not the case. How do I make an environment variable available to any shell script that is called?
TY,
Fred
ps : my busybox is BusyBox v1.11.2 (2012-02-26 12:08:09 PST) built-in shell (msh)
Environment variables are only inherited by child processes of your script (and their child processes); you can't push them up to a parent process.
What you can do is write the variables to a file (as a shell script) which you can then include from wherever you like. Put "source filename" in /etc/.profile and it will probably do what you want.

Why does using set -e cause my script to fail when called in crontab?

I have a bash script that performs several file operations. When any user runs this script, it executes successfully and outputs a few lines of text, but when I try to cron it there are problems. It seems to run (I see an entry in the cron log showing it was kicked off) but nothing happens: it doesn't output anything and doesn't do any of its file operations. It also doesn't appear anywhere in the running processes, so it seems to be exiting immediately.
After some troubleshooting I found that removing "set -e" resolved the issue; it now runs from the system cron without a problem. So it works, but I'd rather have set -e enabled so the script exits if there is an error. Does anyone know why "set -e" is causing my script to exit?
Thanks for the help,
Ryan
With set -e, the script will stop at the first command which gives a non-zero exit status. This does not necessarily mean that you will see an error message.
Here is an example, using the false command which does nothing but exit with an error status.
Without set -e:
$ cat test.sh
#!/bin/sh
false
echo Hello
$ ./test.sh
Hello
$
But the same script with set -e exits without printing anything:
$ cat test2.sh
#!/bin/sh
set -e
false
echo Hello
$ ./test2.sh
$
Based on your observations, it sounds like your script is failing for some reason (presumably related to the different environment, as Jim Lewis suggested) before it generates any output.
To debug, add set -x to the top of the script (as well as set -e) to show commands as they are executed.
When your script runs under cron, the environment variables and path may be set differently than when the script is run directly by a user. Perhaps that's why it behaves differently?
To test this: create a new script that does nothing but printenv and echo $PATH.
Run this script manually, saving the output, then run it as a cron job, saving that output.
Compare the two environments. I am sure you will find differences... an interactive login shell will have had its environment set up by sourcing ".login", ".bash_profile", or a similar script (depending on the user's shell). This generally will not happen in a cron job, which is usually the reason for a cron job behaving differently from running the same script in a login shell.
To fix this: at the top of the script, either explicitly set the environment variables and PATH to match the interactive environment, or source the user's ".bash_profile", ".login", or other setup script, depending on which shell they're using.
