I'm trying to create a shell script to automate my local dev environment. I need it to start some processes (Redis, MongoDB, etc.), set the environment variables, and then start the local web server. I'm working on OS X El Capitan.
Everything is working so far, except the environment variables. Here is the script:
#!/bin/bash
# Starting the Redis Server
if pgrep "redis-server" > /dev/null
then
printf "Redis is already running.\n"
else
brew services start redis
fi
# Starting the Mongo Service
if pgrep "mongod" > /dev/null
then
printf "MongoDB is already running.\n"
else
brew services start mongodb
fi
# Starting the API Server
printf "\nStarting API Server...\n"
source path-to-file.env
pm2 start path-to-server.js --name="api" --watch --silent
# Starting the Auth Server
printf "\nStarting Auth Server...\n"
source path-to-file.env
pm2 start path-to-server.js --name="auth" --watch --silent
# Starting the Client Server
printf "\nStarting Local Client...\n"
source path-to-file.env
pm2 start path-to-server.js --name="client" --watch --silent
The .env file uses the format export VARIABLE="value".
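For reference, a hypothetical path-to-file.env in that format might look like this (the variable names below are placeholders, not my real ones):
export NODE_ENV="development"
export MONGO_URL="mongodb://localhost:27017/mydb"
export REDIS_URL="redis://localhost:6379"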
The environment variables are just not being set at all. However, if I run the exact command source path-to-file.env before running the script, it works. I'm wondering why the command works on its own but not inside the shell script.
Any help would be appreciated.
When you execute a script, it executes in a subshell, and its environment settings are lost when the subshell exits. If you want to configure your interactive shell from a script, you must source the script in your interactive shell.
$ source start-local.sh
Now the environment should appear in your interactive shell. If you want that environment to be inherited by subshells, you must also export any variables that will be required. So, for instance, in path-to-file.env, you'd want lines like:
export MY_IMPORTANT_PATH_VAR="/example/blah"
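For example, after sourcing, the variable should be visible in your interactive shell and inherited by anything you start from it (a quick check using the placeholder variable above):
$ source start-local.sh
$ echo "$MY_IMPORTANT_PATH_VAR"
/example/blah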
Related
I want to export the Docker container hostname as an environment variable which I can later use in my app. In my Dockerfile I call my script "run" as the last command:
CMD run
The run file is executable and works fine with the rest of the commands I perform, but before them I want to export the container hostname to an environment variable, as follows.
"run" File Try 1
#!/bin/bash
export DOCKER_MACHINE_IP=$(hostname -i)
my_other_commands
exec tail -f /dev/null
But when I enter the docker container and check, the variable is not set. If I use
echo $DOCKER_MACHINE_IP
in the run file after exporting, it shows the IP on the console when I run
docker logs
I also tried sourcing another script from the "run" file, as follows.
"run" File Try 2
#!/bin/bash
source ./bin/script
my_other_commands
exec tail -f /dev/null
and the script again contains the export command. But this also does not set the environment variable. What am I doing wrong?
When you execute a script, any environment variable set by that script will be lost when the script exits.
In both of the cases you've posted above, the environment variable should be accessible to the commands in your scripts; but when you enter the docker container (e.g. via docker exec or a new docker run) you get a new shell, which does not contain your variable.
tl;dr: Your exported environment variable will only be available to subshells of the shell that set it. If you need it when you log in, you should source the ./bin/script file.
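One rough way to arrange that, as a sketch (not from the original post, and assuming login shells in the container source /etc/profile.d/*.sh): have the run file persist the value somewhere a later shell will pick it up.
#!/bin/bash
export DOCKER_MACHINE_IP=$(hostname -i)
# persist it for shells opened later, e.g. via "docker exec -it <container> bash -l"
echo "export DOCKER_MACHINE_IP=${DOCKER_MACHINE_IP}" > /etc/profile.d/docker_machine_ip.sh
my_other_commands
exec tail -f /dev/null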
I'm trying to run a simple shell script to automate changing docker-machine environments. The problem is this: when I run the following commands directly in the Mac terminal, the following is output:
eval $(docker-machine env default)
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * digitalocean Running tcp://***.**.***.***:**** v1.12.0
So basically what you would expect. However, when I run the following .sh script:
#!/usr/bin/env bash
eval $(docker-machine env default)
The output is:
./run.sh
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default digitalocean Running tcp://***.**.***.***:**** v1.12.0
So basically, it is not setting the machine as active and I cannot access it.
Has anyone run into this issue before and figured out how to solve it? It seems really strange to me; I have pretty much everything else running and automated apart from this part.
Cheers, Aaron
I think you need to source your shell script:
source ./myscript.sh
The exports performed by the eval apply only to the process that was started to run the script, and they are disposed of when that process exits. They need to go to the parent, e.g. your login shell.
Consider a.sh
#!/bin/bash
eval $(echo 'export a=123')
export b=234
when run in two ways
$ ./a.sh
$ echo $a
$ echo $b
$ source a.sh
$ echo $a
123
$ echo $b
234
$
So I have a script that looks like this:
#!/bin/bash
if [ -n "$1" ]; then
  docker-machine start "$1"
  docker-machine env "$1"
  eval "$(docker-machine env "$1")"
  docker ps -a
fi
Once it has run, though, the scope of these commands seems to be over. For instance, I don't have a connection to the docker machine once the script has finished, but I'd like to script this part so I can have access to it.
For example, after running this script ("./script.sh") I still can't run "docker ps -a".
Why does this happen, and how could I stay connected after executing this script?
A script (or any other process) cannot modify the environment of its parent process. That is precisely why docker-machine env emits shell code that needs to be evaluated with eval.
If you want these variables accessible outside of your script, you would need to arrange to run eval $(docker-machine env <whatever>) in your current shell.
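One way to avoid typing the eval each time, as a sketch (the function name docker_use is mine, not from the question): put a shell function in your ~/.bashrc or ~/.bash_profile. A function runs in the current shell, so the exported variables survive after it returns.
docker_use() {
    local machine="${1:-default}"
    docker-machine start "$machine"
    eval "$(docker-machine env "$machine")"
    docker ps -a
}
Then docker_use default configures your current shell, and docker ps -a keeps working afterwards.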
With regard to these posts:
Envs with supervisor, gunicorn & django,
How to guarantee availability of $BASH_ENV,
I'm trying to figure out why supervisor won't read my $BASH_ENV settings.
I am currently using a setup in which I load a gunicorn startup script through supervisor's config file.
I've set the supervisor's settings like this:
# /etc/supervisor/conf.d/test_project.conf
[program:test_project]
command=/home/konos5/gunicorn_start.sh
user=konos5
...
and the gunicorn script is this:
#!/bin/bash
# /home/konos5/gunicorn_start.sh
...
echo $TEST
...
So far so good. Both gunicorn and supervisor run fine. The problem is that $TEST comes out empty. Since supervisor loads the gunicorn script from a non-login, non-interactive shell, it should source the file specified in $BASH_ENV.
Therefore I do this:
$~ echo 'export TEST="HELLO WORLD"' > ~/my_custom_var
$~ export BASH_ENV=~/my_custom_var
Then I reread and update supervisor and restart my project. However, TEST still comes out empty. How is this possible, since $BASH_ENV was supposed to be sourced?
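(As a point of reference, $BASH_ENV is normally honoured by any non-interactive, non-login bash; a minimal way to check that mechanism by hand, outside supervisor, would be something like:
$ echo 'export TEST="HELLO WORLD"' > ~/my_custom_var
$ BASH_ENV=~/my_custom_var bash -c 'echo "$TEST"'
HELLO WORLD
so the question is why the same thing doesn't happen for the supervisor-launched script.)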
Thank you in advance.
I'm trying to execute a bootup script (to start a Thin server) on a server using an interface called Virtualmin. I'm able to execute the commands with no problem using bash via PuTTY. I have to use Virtualmin, though, in order to have the commands execute on bootup, and I was having problems that I think were the result of Virtualmin not having my environment variables available to it. Virtualmin uses the Bourne shell, and I'm trying to set GEM_HOME, and it's not working.
The error I'm getting is as follows:
/sbin/sh: GEM_HOME=/users/home/dquirk/gems: not found
Here are the commands I'm attempting to send. I think there's something wrong with the notation I'm using to set GEM_HOME:
GEM_HOME=/users/home/dquirk/gems
export GEM_HOME
/users/home/dquirk/gems/bin/thin start -c /users/home/dquirk/domains/quirkeweb.net/rails/clee -p 10671 -d -e production -a 127.0.0.1 -P /users/home/dquirk/var/run/thin-10671.pid
Figured it out: this is on a shared server, and the Virtualmin interface for adding bootup action commands runs in the context of a Bourne shell where the user cannot change any environment variables. I created a bash script that sets the needed variables and runs the Thin start command, then set a bash command that loads that script as the bootup action, and that worked.
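For anyone hitting the same thing, a rough sketch of what that wrapper script looks like (start_thin.sh is an arbitrary name; the paths are the ones from above):
#!/bin/bash
# start_thin.sh: set the environment, then start Thin
export GEM_HOME=/users/home/dquirk/gems
/users/home/dquirk/gems/bin/thin start \
  -c /users/home/dquirk/domains/quirkeweb.net/rails/clee \
  -p 10671 -d -e production -a 127.0.0.1 \
  -P /users/home/dquirk/var/run/thin-10671.pid
The Virtualmin bootup action then just becomes something like bash /users/home/dquirk/start_thin.sh.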