I have a program that uses https://github.com/creack/pty to create pseudo-terminals. Given a command, it returns a file object that you can read from and write to, and which serves as the command's stdin and stdout.
I use a WebSocket to read and write commands to the program. I have configured it to run from the home directory of either the root user or the current user, based on my selection.
Initialise
cmd := exec.Command("/bin/sh", "env")
Set Command Execution path
cmd.Dir = "/var/root" // for system user
// (or)
cmd.Dir = "/Users/user_name" // for current user
Start the command with a pty.
ptmx, err := pty.Start(cmd) // ptmx is of type *os.File
This works fine, but when I try to print environment variables, it does not show all of the environment variables I expect for that particular user or for the root user.
Is there any way to get the environment variables of the root user/current user from the pseudo-terminal?
The reason you're not getting the expected output is that env is not a shell script. At an interactive prompt, try running the command your Go program is running. Here is what I see:
> /bin/sh env
/usr/bin/env: /usr/bin/env: cannot execute binary file
Try running /bin/sh -c env instead. However, since your command does not contain any shell syntax and does nothing more than execute the env binary, you don't need /bin/sh at all. Just exec env.
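The difference is easy to see at a prompt (a quick sketch; the exact error text and the variables printed depend on your system):
$ /bin/sh env        # env is passed as a script for sh to interpret, but it is a binary
/usr/bin/env: /usr/bin/env: cannot execute binary file
$ /bin/sh -c env     # -c runs its argument as a command line; env executes and prints
...
$ env                # simplest: run env directly, no shell needed
...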
Also, you seem to be under some misconceptions. There is no such thing as "user environment variables". Similarly, a pty does not have env vars. Environment variables are private to each process (and a pty is not a process).
It is true that interactive shells may set env vars by automatically reading various shell config files such as ~/.bashrc before showing the first prompt, but I don't think that's what you're referring to, since your /bin/sh -c env won't start an interactive shell, even though its stdin and stdout are attached to a pty.
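If those config-file variables are what you're after, ask for a login shell explicitly. A quick sketch (MY_VAR is a made-up name standing for anything exported from, say, ~/.bash_profile; in the Go program this means passing -lc to bash instead of running env directly):
$ bash -c 'echo $MY_VAR'     # non-login, non-interactive: profile files are not read

$ bash -lc 'echo $MY_VAR'    # -l makes it a login shell, so the profile files are read
some_value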
Here is the case:
I am writing a go program.
At some point, the program calls terragrunt cli, via os.Exec().
The program is run on a machine having systemd version 232.
Up till now, I have been invoking terragrunt with some env vars exposed (required by terragrunt, as we will see below).
These env vars are passed to the login process by /etc/profile.d/terragruntvars as in
export TF_VAR_remote_state_bucket=my-bucket-name
So when I run, say, terragrunt plan in my terminal, then thanks to the appropriate interpolation in my tf/hcl files I get something like this (debug-level output showing the actual terraform invocation terragrunt ends up performing):
terraform init -backend-config=my-bucket-name ...(more flags following)
My Go program (invoking the terragrunt cli via os.Exec()) runs perfectly via go run main.go.
I decide to make this a systemd service as in
[Service]
ExecStart=/bin/sh -c myprogram
EnvironmentFile=/etc/myprogram/config
User=someuser
Group=somegroup
[Install]
WantedBy=multi-user.target
The program started failing miserably. By searching for the root cause I found out that the TF_VAR_* variables were never passed to the service when running, so the terraform command ended up being like
terraform init -backend-config=(this is empty, nothing here)
I thought that explicitly invoking the service via a shell, i.e. making ExecStart=/bin/sh -c myprogram, would address the problem.
Here come the weird(est) parts.
Adding these vars to EnvironmentFile=/etc/myprogram/config did not have any effect on the terragrunt execution. When I say no effect, I mean the variables did become available to the service; however, the command is still broken, i.e.
terraform init -backend-config=(this is empty, nothing here)
However, the TF_VAR_* variables ARE there. I added an os.Exec("env") in my program and it did print them.
This has been driving me nuts so any hint about what might be causing this would be highly appreciated.
Just like a shell will not pass its variables on to child processes:
$ X=abc
$ bash -c 'echo $X' # prints nothing
unless you export the environment variable:
$ export X
$ bash -c 'echo $X' # abc
similarly with systemd, when using EnvironmentFile: to pass environment variables on, use PassEnvironment, e.g.
PassEnvironment=VAR1 VAR2 VAR3
From the docs:
PassEnvironment=
Pass environment variables set for the system service manager to executed processes.
Takes a space-separated list of variable names...
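A sketch of how the pieces fit together (the unit name myprogram.service is assumed from the question; PassEnvironment forwards variables from the service manager's own environment, which you can populate with systemctl set-environment):
# In the [Service] section of the unit:
#   PassEnvironment=TF_VAR_remote_state_bucket
$ sudo systemctl set-environment TF_VAR_remote_state_bucket=my-bucket-name
$ sudo systemctl restart myprogram.service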
I'm trying to create a system-wide environment variable TEST_ENV_ONE.
I want to use it right after running the makefile, without logging out, and also after rebooting. So I'm trying to reproduce the manual steps: export the variable and write it to /etc/environment.
I wrote a makefile like this, but it doesn't work:
var_value := some_string
TEST_ENV_ONE := $(var_value)
vars:
$(shell export TEST_ENV_ONE=$(var_value))
grep 'TEST_ENV_ONE=' /etc/environment || "TEST_ENV_ONE=\"$(var_value)\"" | sudo tee -a /etc/environment > /dev/null
What you want to do is basically impossible on a POSIX system as you've stated it. The environment of a process is inherited from its parent (the process that started it) and once a process is running, its environment cannot ever be changed externally. That includes by its children, or by modifying some other file.
You can, by modifying /etc/environment, change the environment for new logins but this will not change the environment of any existing shell or its child.
That being said, your makefile also has a number of problems:
$(shell export TEST_ENV_ONE=$(var_value))
This is doubly wrong. First, it's an anti-pattern to use the make $(shell ...) function inside a recipe script. Recipes are already shell scripts, so it's useless (and can lead to unexpected behavior) to use $(shell ...) in them.
Second, this is a no-op: what this does is start a shell, tell the shell to set an environment variable and export it, then the shell exits. When the shell exits, all the changes to its environment are lost (obviously, because it exited!) So this does nothing.
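You can see the same no-op at a prompt; the export dies with the short-lived shell that made it:
$ sh -c 'export TEST_ENV_ONE=some_string'   # the variable exists only inside this child shell
$ echo "$TEST_ENV_ONE"                      # back in the parent: prints an empty line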
Next:
grep 'TEST_ENV_ONE=' /etc/environment || "TEST_ENV_ONE=\"$(var_value)\"" | sudo tee -a /etc/environment > /dev/null
This does nothing because the statement "TEST_ENV_ONE=\"$(var_value)\"" generates no output, so there's no input to the sudo tee command and nothing happens. I expect you forgot an echo command here:
grep 'TEST_ENV_ONE=' /etc/environment || echo TEST_ENV_ONE=\"$(var_value)\" | sudo tee -a /etc/environment > /dev/null
However, as I mentioned above, modifying /etc/environment only takes effect for new logins to the system; it won't modify any existing login or shell.
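To check the effect without rebooting, start a fresh login. A sketch (it assumes su's PAM stack includes pam_env, which is what reads /etc/environment on most Linux systems):
$ su - "$USER" -c 'echo $TEST_ENV_ONE'
some_string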
We have a script that is executed by httpd as the default ec2-user. However, when executed, the script does not see any of the environment variables for that user.
The variable is set under user ec2-user:
myUseVarHome=/home/ec2-user
myScript.sh
#!/bin/bash
myFolder="${myUseVarHome}/test/www"
USER=$(whoami)
echo "Content-type: text/html"
echo ""
echo "hello $USER"
echo "myFolder=$myFolder"
Executing the script as ec2-user outputs:
hello ec2-user
myFolder=/home/ec2-user/test/www
We then set in the httpd 2.4 conf:
<IfModule unixd_module>
User ec2-user
Group ec2-user
</IfModule>
Now call the script with
wget 127.0.0.1/myScript.sh
outputs
hello ec2-user
myFolder=/test/www
The output confirms that the httpd user is ec2-user, the same as when executing the script manually; however, the env variable ${myUseVarHome} is blank or does not exist.
Is this expected behaviour or do we need to call the env variable another way when executed as httpd user?
bash acts differently depending on whether it plays the role of a shell or of a normal programming language (like perl or python).
By design, the settings in ~/.bash_profile, ~/.bashrc, etc. are for users to set things up when bash plays the role of a shell (login shell, interactive shell). Think of the environment you have in an xterm (interactive shell), in ssh sessions (login shell), or on a console (login shell).
On the other hand, bash is also a powerful programming language -- think of the many scripts managing services in systemd -- which requires a different style of working. For example, when developers write a system script or a bash program, they will not want it to source the user-defined ~/.bash_profile automatically. It is a normal program, not a shell. A normal program (including a bash program) naturally inherits the settings of the current working environment (shell), but does not set them.
If we write a program for cron in bash -- it just happens to be written in bash; in fact, we could write it in python or perl or any other programming language -- then we have the option to source bash's ~/.bash_profile (read: the settings of the user's shell, which happens to be the same language our program is written in):
[ -f /home/user/.bash_profile ] && . /home/user/.bash_profile
However, what if that particular user does not use bash as his/her shell? He/she may use zsh, ksh, fish, etc. So that practice does not really work when writing programs for public use.
So, you can source ~/.bash_profile if you think it works for your case. But the point here is not whether we are able to source a file; it is about how things should work in the system: the design concept. In short, we should view bash as something having two roles: a shell and a programming language. Then everything becomes clear and easier to understand.
See: How to change cron shell sh to bash
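Applied to the script in the question, that sourcing option looks like this (a sketch; it assumes myUseVarHome is set in /home/ec2-user/.bash_profile and accepts whatever other side effects sourcing that file has):
#!/bin/bash
# httpd started us as a plain program, so no profile was sourced for us;
# load the user's profile explicitly before relying on its variables.
[ -f /home/ec2-user/.bash_profile ] && . /home/ec2-user/.bash_profile
echo "Content-type: text/html"
echo ""
echo "myFolder=${myUseVarHome}/test/www"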
I'm using this bash script:
for a in `sort -u $HADOOP_HOME/conf/slaves`; do
rsync -e ssh -a "${HADOOP_HOME}/conf" ${a}:"${HADOOP_HOME}"
done
for a in `sort -u $HBASE_HOME/conf/regionservers`; do
rsync -e ssh -a "${HBASE_HOME}/conf" ${a}:"${HBASE_HOME}"
done
When I call this script directly from shell, there are no problems and it works fine. But when I call this script from another script, although the script does its job, I get this message at the end:
sort: open failed: /conf/slaves: No such file or directory
sort: open failed: /conf/regionservers: No such file or directory
I have set $HADOOP_HOME and $HBASE_HOME in /etc/profile, and the script does its job right. But I don't understand why it prints this message at the end.
Are you sure it's doing it right? When you call this script from the shell it is acting as an interactive shell, which reads and sources /etc/profile and ~/.bash_profile if they exist. When you call it from another script it runs as non-interactive and won't source those files. If you want a non-interactive shell to source a file, you can do so by setting the BASH_ENV environment variable.
#!/bin/bash
# Non-interactive bash shells read the file named by BASH_ENV on startup,
# so the child script below will source /etc/profile before running.
export BASH_ENV=/etc/profile
./call/to/your/HADOOP/script.sh
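Note that BASH_ENV is only consulted by bash, so this assumes script.sh itself runs under bash (e.g. starts with #!/bin/bash).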
Everything points to those variables not being defined when your script runs.
You should ensure that they are set for your script. Before the first loop, place the line:
echo "[${HADOOP_HOME}] [${HBASE_HOME}]"
and make sure that doesn't output "[] []" (or even one "[]").
Additionally, put a set -x line at the top of the script - this will print each line before executing it, so you can see what's being done.
Keep in mind that some shells don't pass on environment variables to subshells unless you explicitly export them (setting them is not enough).
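For example (the path and script name are made up):
$ HADOOP_HOME=/opt/hadoop      # a plain shell variable, not in the environment
$ ./sync_conf.sh               # the script sees an empty $HADOOP_HOME
sort: open failed: /conf/slaves: No such file or directory
$ export HADOOP_HOME           # now it is part of the environment
$ ./sync_conf.sh               # the script sees /opt/hadoop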
Is it possible to source a .bashrc file from .cshrc in a non-interactive session?
I'm asking because tcsh is our default shell at work and the .cshrc has to be used to set up the environment initially.
However, I am not really familiar with the tcsh and I have my own set-up in bash, so right now I have the following lines at the end of my .cshrc file:
if ( $?prompt && -x /bin/bash) then
exec /bin/bash
endif
This works fine, loading my environment from .bashrc and giving me a bash prompt for interactive sessions but now I also need the same set-up for non-interactive sessions, e.g. to run a command remotely via SSH with all the correct PATHs etc.
I can't use 'exec' in that case but I can't figure out how to switch to bash and load the bash config files "non-interactively".
All our machines share the same home directory, so any changes to my local *rc files will affect the remote machines as well.
Any ideas welcome - thank you for your help!
After some more research I'm now quite sure that this won't work, but of course feel free to prove me wrong!
To load the environment in bash I have to switch to a bash shell. Even if that is possible "in the background", i.e. without getting a prompt, it would still break any tcsh commands which would then be attempted to execute under bash.
Hmmmm, back to the drawing board...
If $command is set, there were arguments to csh, so it is a remote shell command. This works for me in .cshrc:
if ($?command) then
echo Executing non-interactive command in bash: $command $*
exec /bin/bash -c "${command} $*"
endif
echo Interactive bash shell
exec bash -l
Test:
$ ssh remotehost set | grep BASH
BASH=/bin/bash
...
proves that it ran in Bash.