Heroku PS:Exec with ENV Vars

I see this in the Heroku docs:
The SSH session created by Heroku Exec will not have the config vars set as environment variables (i.e., env in a session will not list config vars set by heroku config:set).
I need to be able to SSH into our sidekiq container specifically and run a console session there. To do this, I need access to the ENV vars. I cannot do this in a one off bash container, because the config is different for sidekiq container, and I need to confirm that values are getting set properly (via the console).
Something like this:
heroku ps:exec -a [our-app] -d [sidekiq.1] --with-env-vars
How can I use heroku ps:exec (or a similar command) to ssh into an existing dyno WITH config vars present?

Not the most ideal solution, but there is an option that works for me.
Identify the command call
This step identifies the process that carries the app's environment variables. Run:
ps auxfww
which will give you a result similar to:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
nobody 1 0.0 0.0 6092 3328 ? Ss 18:19 0:00 ps-run
u19585 4 0.5 0.6 553984 410984 ? Sl 18:19 0:19 puma 4.3.12 (tcp://0.0.0.0:36152) [app]
u19585 26 0.0 0.0 9836 2248 ? S 18:19 0:00 \_ bash --login -c bundle exec puma -p 36152 -C ./config/puma.rb
In this case, bash --login -c bundle exec puma is the command line we will use to pick the right process.
Load your ENV variables
Then run the following source command to load the env vars each time you connect through ps:exec:
source <(cat /proc/$(pgrep -f "bash --login -c bundle exec puma")/environ | strings)
source <(<DATA>): loads the variables into your current shell
pgrep -f "<IDENTIFIED_COMMAND>": picks the PID of the identified process
cat /proc/<PID>/environ: contains the app's environment variables
strings: converts the NUL-delimited binary data into one line per variable
After that, your main ENV variables will be available in your console.
Finally, keep the source call somewhere handy so you can run it whenever you need it.
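To avoid retyping all of that, you could wrap it in a small shell function for the session. This is a minimal sketch; the name load_app_env and the puma pattern are placeholders for your own process, and values containing spaces may not survive this kind of unquoted sourcing:
load_app_env() {
  # find the PID of the process whose environment we want (first match wins)
  local pid
  pid=$(pgrep -f "bash --login -c bundle exec puma" | head -n 1)
  # convert the NUL-delimited environ file to lines and source it into this shell
  source <(strings "/proc/${pid}/environ")
}
Then call load_app_env once at the start of each ps:exec session.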

Related

Environment variables not showing up on mac

Environment variables set outside the shell, through ~/.bash_profile or ~/.bashrc, do not appear to docker or env, despite being accessible in the shell.
.bash_profile contains the line TEST_ENV_VAR=123. After restarting the terminal, the variable can be accessed through $TEST_ENV_VAR, but docker and env cannot access it.
Henrys-MacBook-Pro:~ henry$ echo $TEST_ENV_VAR
123
Henrys-MacBook-Pro:~ henry$ docker run -it -e TEST_ENV_VAR ubuntu env | grep TEST_ENV_VAR
Henrys-MacBook-Pro:~ henry$ env | grep TEST_ENV_VAR
And yet the terminal can access it, and even pass it into docker:
Henrys-MacBook-Pro:~ henry$ docker run -it -e TEST_ENV_VAR=${TEST_ENV_VAR} ubuntu env | grep TEST_ENV_VAR
TEST_ENV_VAR=123
And the issue isn't with environment variables in general, as variables exported in the terminal work as expected:
Henrys-MacBook-Pro:~ henry$ export TEST_ENV_VAR=1234
Henrys-MacBook-Pro:~ henry$ docker run -it -e TEST_ENV_VAR ubuntu env | grep TEST_ENV_VAR
TEST_ENV_VAR=1234
I'm running macOS Mojave 10.14.5, the stock Terminal, and Docker 19.03.4. The relevant line from the output of ls -al ~:
-rw-r--r-- 1 henry staff 455 Nov 12 11:50 .bash_profile
docker doesn't actually start a container. Instead, it sends a message to the Docker engine (running in a separate environment, unrelated to the environment in which docker is executed) requesting that it start a container for you. As such, the new container won't inherit any of the variables in your current shell environment.
With your TEST_ENV_VAR=${TEST_ENV_VAR} attempt, you are explicitly telling docker to create an environment variable named TEST_ENV_VAR in the new container, with the value produced by expanding ${TEST_ENV_VAR} locally right now, rather than trying to inherit the variable from the appropriate environment.
Even ignoring that, you aren't actually creating an environment variable with TEST_ENV_VAR=123; you've only created an ordinary shell variable. For env to see it, you need to first export it.
$ TEST_ENV_VAR=123
$ env | grep TEST_ENV_VAR
$ export TEST_ENV_VAR
$ env | grep TEST_ENV_VAR
TEST_ENV_VAR=123
The .bash_profile is for your user. Docker is running in its own environment (as opposed to running in a subshell spawned by your user), and export only sends information down into child shells. You need to make the variable available at a higher level, for example in:
/etc/environment
Be careful in there.
Alternatively, you may look into asking Docker to make these changes itself.
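Putting the answers together, the likely fix for the original question is simply to export the variable in ~/.bash_profile so that the docker CLI (and env) running in your shell can see it; a minimal sketch, assuming docker run -e VAR with no value reads it from the client's environment as demonstrated above:
# in ~/.bash_profile
export TEST_ENV_VAR=123
# then, in a new terminal session
docker run -it -e TEST_ENV_VAR ubuntu env | grep TEST_ENV_VAR
# expected: TEST_ENV_VAR=123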

How to get environment variables in live Heroku dyno

There is a heroku config command, but that apparently just shows me what the current setting is. I want to confirm, in a dyno, what my application is actually seeing in the running environment.
I tried heroku ps:exec -a <app> -d <dyno_instance> --ssh env and this has some generic output (like SHELL, PATH, etc.), but it doesn't show any env vars that I've configured (like my db strings, for example). I've also tried logging in directly (using bash instead of the env command) and poked around, but couldn't find anything.
Try heroku run env instead.
According to the documentation:
"The SSH session created by Heroku Exec will not have the config vars set as environment variables (i.e., env in a session will not list config vars set by heroku config:set)."
heroku run bash does something similar to heroku ps:exec, but has the config vars available.
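If you only need to spot-check a specific variable, a quoted one-off command should also work (the app name and grep pattern are placeholders):
heroku run "env | grep DATABASE_URL" --app your-app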
The accepted answer is okay in most cases. However, heroku run starts a new one-off dyno, so it won't be enough if you need to check the actual environment of a running dyno (let's say, purely hypothetically, that Heroku has an outage and can't start new dynos).
Here's one way to check the environment of a running dyno:
Connect to the dyno: heroku ps:exec --dyno <dyno name> --app <app name>
For example: heroku ps:exec --dyno web.1 --app my-app
Get the pid of your server process (check your Procfile if you don't know). Let's say you're using puma:
ps aux | grep puma
The output might look something like this:
u35949 4 2.9 0.3 673980 225384 ? Sl 18:20 0:24 puma 3.12.6 (tcp://0.0.0.0:29326) [app]
u35949 31 0.0 0.0 21476 2280 ? S 18:20 0:00 bash --login -c bundle exec puma -C config/puma.rb
u35949 126 0.1 0.3 1628536 229908 ? Sl 18:23 0:00 puma: cluster worker 0: 4 [app]
u35949 131 0.3 0.3 1628536 244664 ? Sl 18:23 0:02 puma: cluster worker 1: 4 [app]
u35949 196 0.0 0.0 14432 1044 pts/0 S+ 18:34 0:00 grep puma
Pick the first one (4, the first number in the second column, in this example)
Now, you can get the environment of that process. Replace <PID> by the process id you just got, for example 4:
cat /proc/<PID>/environ | tr '\0' '\n'
HEROKU_APP_NAME=my-app
DYNO=web.1
PWD=/app
RACK_ENV=production
DATABASE_URL=postgres://...
...
The tr is there to make it easier to read, since the contents of /proc/<PID>/environ are NUL-delimited.
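Once you know what to grep for, the PID lookup and the environment dump can be collapsed into a single command (the puma pattern is just an example; adjust it to match your server process):
cat /proc/$(pgrep -f puma | head -n 1)/environ | tr '\0' '\n'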
If your Heroku stack supports Node.js, then you can run a Node.js process on your Heroku app and print all environment variables (not just the ones you configured).
Commands:
heroku run node --app your-heroku-app-name
console.log(process.env)
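The same idea works non-interactively if you pass the snippet to node with -e (the app name is a placeholder):
heroku run "node -e 'console.log(process.env)'" --app your-heroku-app-name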

Script done, file is typescript in bash while creating a service

I have a script file as below.
#!/bin/bash
set -x
set -e
#VBoxManage startvm "cuckoo-window" --type gui
python ~/Downloads/cuckoo-modified-master/utils/api.py --host 0.0.0.0 --port 8090
#cd ~/Downloads/cuckoo-modified-master/web/
# python manage.py runserver 0.0.0.0:8008
# python ~/Downloads/cuckoo-modified-master/cuckoo.py
My service script /etc/init/miscservices.conf
start on runlevel
script
cd ~/Downloads/cuckoo-modified-master
./miscservices.sh
end script
I have also created a symlink in /etc/init.d/miscservices and added it to startup:
sudo update-rc.d miscservices defaults
sudo service miscservices start
miscservices stop/waiting
No script started. When I start it as below, it drops me into a root shell, but still no service starts. But when I exit, it starts two instances of the service. Please explain this behavior.
sudo /etc/init.d/miscservices start
start: Unknown job: on
Script started, file is typescript
root@abc:~# sudo netstat -ntlp | grep 8090
root@abc:~# ps -aux | grep misc
root 2929 0.0 0.0 81976 2260 pts/6 S+ 13:42 0:00 sudo /etc/init.d/miscservices start
root 2930 0.0 0.0 4440 652 pts/6 S+ 13:42 0:00 sh /etc/init.d/miscservices start
root 2962 0.0 0.0 16192 936 pts/15 S+ 13:43 0:00 grep --color=auto misc
root@abc:~#
root@abc:~#
root@abc:~# exit
exit
Script done, file is typescript
+ set -e
+ python /home/aserg/Downloads/cuckoo-modified-master/utils/api.py --host 0.0.0.0 --port 8090
Bottle v0.12.0 server starting up (using WSGIRefServer())...
Listening on http://0.0.0.0:8090/
Hit Ctrl-C to quit.
I think the problem may be in the ~/Downloads/cuckoo-modified-master/miscservices.sh script (and by the way, using ~ in an upstart job is not the best idea, because the job will probably execute as root and there can be problems determining what ~ points to). By default upstart doesn't respawn processes. That means that if you have a configuration like this:
start on runlevel
script
cd ~/Downloads/cuckoo-modified-master
./miscservices.sh
end script
upstart will only start your process once and then do nothing. And if it started once and there was some error in the script, it just stopped without giving you any information. You can check the log file; by default it should be in /var/log/upstart/miscservices.log (for Ubuntu 14.04 LTS). If it is not there, check where your OS keeps upstart logs by default, or log manually by echoing some information to a known place. For example:
env logf="/home/someuser/miscservices.log"
script
echo "Script is starting..." >> $logf
/home/someuser/Downloads/cuckoo-modified-master/miscservices.sh >> $logf 2>&1
echo "Script has ended." >> $logf
end script
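Then, after triggering the job (for example with sudo service miscservices start), you can inspect that log to see how far the script got:
tail -n 50 /home/someuser/miscservices.log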
And if you want upstart to respawn the script's process when it dies, you may add the following to the beginning of the job file:
respawn
respawn limit unlimited
or just put an infinite loop in the script file.
P.S. You can make your upstart job /etc/init/miscservices.conf clearer by changing start on runlevel to:
start on filesystem
stop on shutdown
This means the job will start once the filesystem is available and will stop on shutdown.
P.P.S. You don't need the symlink in /etc/init.d/miscservices. If you use upstart, then just use it; you don't need anything else. Just put your .conf file in /etc/init and it will start automatically. It can look something like this:
start on filesystem
stop on shutdown
env logf="/home/someuser/miscservices.log"
script
echo "Script it starting..." >> $logf
/home/someuser/Downloads/cuckoo-modified-master/miscservices.conf >> $logf
echo "Script is ended." >> $logf
end scipt
Hope this helps!

Automatically enter only running docker container

In the cloud, I have multiple instances, each running a container with a different random name, e.g.:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5dc97950d924 aws_beanstalk/my-app:latest "/bin/sh -c 'python 3 hours ago Up 3 hours 80/tcp, 5000/tcp, 8080/tcp jolly_galileo
To enter them, I type:
sudo docker exec -it jolly_galileo /bin/bash
Is there a command or can you write a bash script to automatically execute the exec to enter the correct container?
"the correct container"?
To determine what is the "correct" container, your bash script would still need either the id or the name of that container.
For example, I have a function in my .bashrc:
deb() { docker exec -u git -it $1 bash; }
That way, I would type:
deb jolly_galileo
(it uses the account git, but you don't have to)
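If you would rather match on part of a container name instead of typing it exactly, a small variation could filter docker ps by name (the function name debn is just a suggestion):
debn() { docker exec -it "$(docker ps -q -f name=$1 | head -n 1)" bash; }
Since the name filter does a substring match, debn jolly would be enough to reach jolly_galileo.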
Here's my final solution. It edits the instance's .bashrc if it hasn't been edited yet: it defines the dock function, prints docker ps, and enters the container. A user can then type "exit" if they want to access the raw instance, and "exit" again to quit ssh.
commands:
  bashrc:
    command: if ! grep -Fxq "sudo docker ps" /home/ec2-user/.bashrc; then echo -e "dock() { sudo docker exec -it \$(sudo docker ps -lq) bash; } \nsudo docker ps\ndock" >> /home/ec2-user/.bashrc; fi
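For reference, the lines that command appends to /home/ec2-user/.bashrc would look roughly like this, so every ssh login prints the container list and drops you straight into the most recently started container:
dock() { sudo docker exec -it $(sudo docker ps -lq) bash; }
sudo docker ps
dock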
As VonC indicated, usually you have to do some shell scripting of your own if you find yourself doing something repetitive. I made a tool myself here which works if you have Bash 4+.
Install
wget -qO- https://raw.githubusercontent.com/Pithikos/dockerint/master/docker_autoenter >> ~/.bashrc
Then you can enter a container by simply typing the first few characters of its container ID.
$> docker ps
CONTAINER ID IMAGE ..
807b1e7eab7e ubuntu ..
18e953015fa9 ubuntu ..
19bd96389d54 ubuntu ..
$> 18
root@18e953015fa9:/#
This works by taking advantage of the function command_not_found_handle introduced in Bash 4. If a command is not found, the script will try and see if what you typed is a container and if it is, it will run docker exec <container> bash.
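The core idea is a handler along these lines (a minimal sketch, not the actual tool; it assumes Bash 4+ and matches only on container ID prefixes):
command_not_found_handle() {
  # if the unknown "command" is the prefix of a running container ID, exec into it
  local id
  id=$(docker ps -q --no-trunc | grep "^$1" | head -n 1)
  if [ -n "$id" ]; then
    docker exec -it "$id" bash
    return $?
  fi
  echo "bash: $1: command not found" >&2
  return 127
}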

How to use sudo in build script for gitlab ci?

When I want to do something that requires sudo privileges, the build process gets stuck, and when I run ps aux for that command, it is hanging in the process list but doing nothing.
E.g.:
in the buildscript:
# stop nginx
echo "INFO: stopping nginx. pid [$(cat /opt/nginx/logs/nginx.pid)]"
sudo kill $(cat /opt/nginx/logs/nginx.pid)
in the gitlab ci output console:
INFO: stopping nginx. pid [2741]
kill $(cat /opt/nginx/logs/nginx.pid) # with a spinning wheel
in the bash:
> ps aux | grep nginx
root 6698 0.0 0.1 37628 1264 ? Ss 19:25 0:00 nginx: master process /opt/nginx/sbin/nginx
nobody 6700 0.3 0.3 41776 3832 ? S 19:25 0:00 nginx: worker process
kai 7015 0.0 0.0 4176 580 pts/0 S+ 19:27 0:00 sh -c sudo kill $(cat /opt/nginx/logs/nginx.pid)
kai 7039 0.0 0.0 7828 844 pts/2 S+ 19:27 0:00 grep nginx
So:
it is not sudo kill $(cat /opt/nginx/logs/nginx.pid) that gets executed, but sh -c sudo kill $(cat /opt/nginx/logs/nginx.pid)
it hangs without a response (sounds to me like sudo is asking for a password interactively)
There are a couple of ways to resolve this.
Grant sudo permissions
You can grant sudo permissions to the gitlab-runner user, as this is the user executing the build script.
$ sudo usermod -a -G sudo gitlab-runner
You now have to remove the password restriction for sudo for the gitlab-runner user.
Start the sudoers editor with
$ sudo visudo
Now add the following to the bottom of the file
gitlab-runner ALL=(ALL) NOPASSWD: ALL
Do not do this for gitlab runners that can be executed by untrusted users.
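If blanket passwordless sudo feels too broad, you could instead whitelist only the commands the build actually needs; the paths here are illustrative:
gitlab-runner ALL=(ALL) NOPASSWD: /bin/kill, /usr/sbin/service nginx *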
SSH Runner
You can configure the gitlab-ci-runner to connect to a remote host using SSH. You configure this to use a user remotely that has sudo permissions, and perform the build using that user. The remote host can be the same machine that the gitlab runner is executing on, or it can be another host.
This build user account will still need passwordless sudo permissions. Follow the instructions above, except replace gitlab-runner with the build user.
It worked for me as written by Reactgular, but with one little clarification: you must include a % sign before the entry, i.e.
%gitlab-runner ALL=(ALL) NOPASSWD: ALL
(the % makes the rule apply to the gitlab-runner group rather than the user). I could not understand for a long time why it wasn't helping me; then I added the percent sign and it worked.
