How to get environment variables in live Heroku dyno - bash

There is a heroku config command but that apparently just shows me what the current setting is. I want to confirm in a dyno what my application is actually seeing in the running environment.
I tried heroku ps:exec -a <app> -d <dyno_instance> --ssh env and this has some generic output (like SHELL, PATH, etc.) but it doesn't show any env vars that I've configured (like my db strings, for example). I've also tried directly logging in (using bash instead of the env command) and poked around but couldn't find anything.

Try heroku run env instead.
According to the documentation:
"The SSH session created by Heroku Exec will not have the config vars set as environment variables (i.e., env in a session will not list config vars set by heroku config:set)."

heroku run bash does something similar to heroku ps:exec but has the config vars available.
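Or interactively, a minimal sketch (app name assumed):
heroku run bash -a your-app-name
# then, inside the one-off dyno:
echo "$DATABASE_URL"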

The accepted answer is fine in most cases. However, heroku run starts a new dyno, so it won't be enough if you need to check the actual environment of a running dyno (let's say, purely hypothetically, that Heroku has an outage and can't start new dynos).
Here's one way to check the environment of a running dyno:
Connect to the dyno: heroku ps:exec --dyno <dyno name> --app <app name>
For example: heroku ps:exec --dyno web.1 --app my-app
Get the pid of your server process (check your Procfile if you don't know). Let's say you're using puma:
ps aux | grep puma
The output might look something like this:
u35949 4 2.9 0.3 673980 225384 ? Sl 18:20 0:24 puma 3.12.6 (tcp://0.0.0.0:29326) [app]
u35949 31 0.0 0.0 21476 2280 ? S 18:20 0:00 bash --login -c bundle exec puma -C config/puma.rb
u35949 126 0.1 0.3 1628536 229908 ? Sl 18:23 0:00 puma: cluster worker 0: 4 [app]
u35949 131 0.3 0.3 1628536 244664 ? Sl 18:23 0:02 puma: cluster worker 1: 4 [app]
u35949 196 0.0 0.0 14432 1044 pts/0 S+ 18:34 0:00 grep puma
Pick the first one (4, the first number in the second column, in this example)
Now, you can get the environment of that process. Replace <PID> by the process id you just got, for example 4:
cat /proc/<PID>/environ | tr '\0' '\n'
HEROKU_APP_NAME=my-app
DYNO=web.1
PWD=/app
RACK_ENV=production
DATABASE_URL=postgres://...
...
The tr is there to make the output easier to read, since the contents of /proc/<pid>/environ are null-delimited.
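If you only care about one variable, you can filter the same output (using PID 4 from the example above):
cat /proc/4/environ | tr '\0' '\n' | grep DATABASE_URL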

If your Heroku stack supports Node.js, you can run a Node.js process on your Heroku app and print all environment variables (not just the ones you configured).
Commands:
heroku run node --app your-heroku-app-name
console.log(process.env)
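The console.log(process.env) line is typed inside the Node.js REPL that heroku run node opens. If you'd rather do it non-interactively, something like this should also work (app name assumed):
heroku run -a your-heroku-app-name -- node -e "console.log(process.env)"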

Related

Heroku PS:Exec with ENV Vars

I see this in the Heroku docs:
The SSH session created by Heroku Exec will not have the config vars set as environment variables (i.e., env in a session will not list config vars set by heroku config:set).
I need to be able to SSH into our sidekiq container specifically and run a console session there. To do this, I need access to the ENV vars. I cannot do this in a one off bash container, because the config is different for sidekiq container, and I need to confirm that values are getting set properly (via the console).
Something like this:
heroku ps:exec -a [our-app] -d [sidekiq.1] --with-env-vars
How can I use heroku ps:exec (or a similar command) to ssh into an existing dyno WITH config vars present?
Not the most ideal solution, but there is an option that works for me.
Identify the command call
This is to identify the potential process that will contain the environment variables.
Run ps auxfww, which will give you a result similar to:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
nobody 1 0.0 0.0 6092 3328 ? Ss 18:19 0:00 ps-run
u19585 4 0.5 0.6 553984 410984 ? Sl 18:19 0:19 puma 4.3.12 (tcp://0.0.0.0:36152) [app]
u19585 26 0.0 0.0 9836 2248 ? S 18:19 0:00 \_ bash --login -c bundle exec puma -p 36152 -C ./config/puma.rb
In this case, bash --login -c bundle exec puma is the command we'll use to pick the process whose environment we want.
Load your ENV variables
Then run the following source command to load the env vars each time you connect through ps:exec:
source <(cat /proc/$(pgrep -f "bash --login -c bundle exec puma")/environ | strings)
source <(<DATA>): loads the variables into your current shell
pgrep -f "<IDENTIFIED_COMMAND>": picks the PID
cat /proc/<PID>/environ: contains the app env variables
strings: converts the null-delimited data into one variable per line
After that, you'll have your main ENV variables available in your console.
Finally, keep that source command somewhere handy so you can run it whenever you need it.
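A slightly more robust variant of the same idea, if you also want the variables exported to child processes (just a sketch, assuming the same puma command line and values without spaces or newlines):
set -a   # auto-export every variable assigned while this is on
source <(strings /proc/$(pgrep -f "bash --login -c bundle exec puma")/environ)
set +a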

How to print environment variables on Heroku?

I'd like to print all environment variables set on my Heroku server. How can I do that with command line only?
Ok, I found the way:
heroku config
The heroku run command runs a one-off process inside a Heroku dyno. The unix command that prints environment variables is printenv (manual page). Thus
heroku run -a app-name printenv
is the command you are looking for.
Step 1: list your apps
heroku apps
Copy the name of your app
Step 2: view the config variables of this app
heroku config -a acme-web
Append --json to get the output as JSON.
heroku config -a acme-web --json
Append -s to get the output in shell format, to paste directly to a .env file, for example.
heroku config -a your-app -s
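For example, to dump the config straight into a local .env file:
heroku config -a your-app -s > .env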

Sidekiq Broken Pipe Error

I am attempting to migrate from Heroku to AWS, but my Sidekiq jobs keep failing with the following error:
Errno::EPIPE: Broken pipe # io_write - <STDOUT>
I can successfully run jobs from the console using perform_now, and everything works just fine in Heroku, so I am presuming the issue lies somewhere in my AWS setup. I have seen references to improper daemonization around Stack Overflow and GitHub, but I'm not sure how to solve the problem.
Right now I am launching my processes with the following command:
foreman start -f Procfile -p 3000 -e $VAR_FILES &
and I have tried the command both with and without the & at the end.
My Procfile looks like this:
web: bundle exec puma -t 1:2 -p ${PORT:-3000} -e ${RACK_ENV:-production}
worker: bundle exec sidekiq -C config/sidekiq.yml
log: tail -f log/production.log
and I have also tried it like this, following the instructions here (https://github.com/mperham/sidekiq/wiki/Logging#syslog):
worker: bundle exec sidekiq -C config/sidekiq.yml 2>&1 | logger -t sidekiq
My sidekiq.yml has logfile set to ./log/sidekiq.log, which I believe is supposed to redirect logs away from STDOUT anyway.
I have seen the discussion here (https://github.com/mperham/sidekiq/issues/3188) and can verify that the rails12factor gem is not in my Gemfile.
But still the error persists... Can anyone lend a hand?
UPDATE: I can finally get a stack trace and see it is coming from a puts statement inside of the Neo4j.rb gem:
2017-04-07T15:46:53.553Z 697 TID-12a6r4 WARN: Errno::EPIPE: Broken pipe # io_write - <STDOUT>
2017-04-07T15:46:53.553Z 697 TID-12a6r4 WARN: /var/lib/gems/2.3.0/bundler/gems/neo4j-c804cb33bef8/lib/neo4j/session_manager.rb:60:in `write'
/var/lib/gems/2.3.0/bundler/gems/neo4j-c804cb33bef8/lib/neo4j/session_manager.rb:60:in `puts'
/var/lib/gems/2.3.0/bundler/gems/neo4j-c804cb33bef8/lib/neo4j/session_manager.rb:60:in `puts'
But still not sure how I can mitigate the issue. I have tried with RAILS_LOG_TO_STDOUT=enabled both set and unset.
I spoke to the gem maintainers and they removed the puts statements in v 8.0.13. It fixed the problem for me!

Kubernetes - kubectl exec bash - session drop and line width

I have a k8s cluster with 3 minions, a master, and haproxy in front. When I use
kubectl exec -p $POD -i -t -- bash -il
to access bash in the pod (a single container in this case), I get in, and after something like 5 minutes I get dropped out of the terminal. If I re-enter the container I can see my old bash process still running, with a new one started for my new connection. Is there a way to prevent this from happening? When I'm using docker exec it works fine and doesn't drop me, so I guess it comes from Kubernetes.
As a bonus question - is there a way to increase the characters per line when using kubectl exec? I get truncated output that is different from docker exec.
Thanks in advance!
It is a known issue -
https://github.com/kubernetes/kubernetes/issues/9180
The kubelet webserver times out.
I resolved it by adding env COLUMNS=$COLUMNS LINES=$LINES before bash:
kubectl exec -ti busybox env COLUMNS=$COLUMNS LINES=$LINES bash
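If you do this often, a small shell alias (name and pod are just examples) keeps it short; the single quotes make $COLUMNS and $LINES expand when the alias is used, not when it is defined:
alias kbash='kubectl exec -ti busybox -- env COLUMNS=$COLUMNS LINES=$LINES bash'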

Postgres is failing with 'could not open relation mapping file "global/pg_filenode.map" '

I'm having an issue with my install of postgres in my development environment and I need some help diagnosing it. I haven't yet had any luck in tracking down a solution.
I have postgres 9.0.4 installed with homebrew
I am running on OS X 10.6.8 (Snow Leopard)
I can start and stop the server
$ pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
server starting
If I try to stop though
$ pg_ctl -D /usr/local/var/postgres stop -s -m fast
pg_ctl: PID file "/usr/local/var/postgres/postmaster.pid" does not exist
Is server running?
Ok this is missing
$ ls -l /usr/local/var/postgres/ | grep postmaster
$
But it is definitely running
$ ps aux | grep postgres
pschmitz 303 0.9 0.0 2445860 1428 ?? Ss 3:12PM 0:02.46 postgres: autovacuum launcher process
pschmitz 304 0.9 0.0 2441760 428 ?? Ss 3:12PM 0:02.57 postgres: stats collector process
pschmitz 302 0.0 0.0 2445728 508 ?? Ss 3:12PM 0:00.56 postgres: wal writer process
pschmitz 301 0.0 0.0 2445728 560 ?? Ss 3:12PM 0:00.78 postgres: writer process
pschmitz 227 0.0 0.1 2445728 2432 ?? S 3:11PM 0:00.42 /usr/local/Cellar/postgresql/9.0.3/bin/postgres -D /usr/local/var/postgres -r /usr/local/var/postgres/server.log
And if I try to access or use it I get this.
$ psql
psql: FATAL: could not open relation mapping file "global/pg_filenode.map": No such file or directory
But global/pg_filenode.map definitely exists in
$ ls -l /usr/local/var/postgres/
...
-rw------- 1 pschmitz staff 8192 Sep 16 15:48 pg_control
-rw------- 1 pschmitz staff 512 Sep 16 15:48 pg_filenode.map
-rw------- 1 pschmitz staff 12092 Sep 16 15:48 pg_internal.init
I have attempted to uninstall and reinstall to no effect. Any ideas on how I can solve this?
It has pretty much prevented me from getting anything done today.
I am not sure what the source of my original problem with 9.0.3 was, because I was getting this error:
psql: FATAL: could not open relation mapping file "global/pg_filenode.map": No such file or directory
However, as stated above, it turns out that the running process was from my previous postgres install (9.0.3).
I believe I had an old version of org.postgresql.postgres.plist in ~/Library/LaunchAgents/
I had to:
Remove and re-add the launch agent (see the sketch after this list)
Kill the processes for 9.0.3
Initialize the db: initdb /usr/local/var/postgres
Restart my computer
and now I have it up and working.
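A hedged sketch of the "remove and re-add the launch agent" step, using the plist name mentioned above (adjust the filename to whatever actually sits in your ~/Library/LaunchAgents/):
launchctl unload ~/Library/LaunchAgents/org.postgresql.postgres.plist
launchctl load ~/Library/LaunchAgents/org.postgresql.postgres.plist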
Encountered this problem using the mdillon/postgis:9.6 Docker image. A simple sudo docker restart <container id> solved the problem.
That may be a permissions issue; check the owner and group of the configuration files in /var/lib/pgsql/9.3/data/:
chown -R postgres:postgres /var/lib/pgsql/9.3/data/
I just encountered this problem. Solved it by setting the owner of the postgres data directory to the unprivileged postgres user.
ps aux | grep postgres revealed I had another instance of postgres running on a temp data directory from a previous test run. Killing this process fixed the problem.
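In other words, something along these lines (12345 is a placeholder for whatever PID ps actually shows):
ps aux | grep [p]ostgres   # the [p] keeps grep itself out of the results
kill 12345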
My step-by-step solution in Fedora:
/bin/systemctl stop postgresql.service (Stop the service)
rm -rf /var/lib/pgsql/data (Remove the "data" directory)
postgresql-setup initdb (Recreate the "data" directory)
/bin/systemctl start postgresql.service (Start the service)
It is also useful to check the permissions of the "data" directory:
chown -R postgres:postgres <path_to_data_dir>
(Kudos to #LuizFernandodaSilva & #user4640867)
I had an old value of PGDATA confusing things.
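If you suspect the same thing, a quick check (sketch only):
echo "$PGDATA"   # is this the data directory you expect?
unset PGDATA     # or point it at the right directory before starting postgres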
This (https://gist.github.com/olivierlacan/e1bf5c34bc9f82e06bc0) solved my problem! I first had to:
Delete Postgres.app from my Applications
Delete /usr/local/var/postgres directory
initdb /usr/local/var/postgres/
Then I was able to start/stop Postgres with these 2 commands:
Start:
pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
Stop:
pg_ctl -D /usr/local/var/postgres stop -s -m fast
My solution to this problem:
I am running postgresql-9.3
My plist file is in the following location: /Library/LaunchDaemons/com.edb.launchd.postgresql-9.3.plist
Step 1 will stop postgres:
$ sudo launchctl stop com.edb.launchd.postgresql-9.3
Step 2: start postgres using the following command (you can find this location using $ brew info postgres):
$ postgres -D /usr/local/var/postgres
I agree with all of the above solutions. I was running Postgres on a server, and the problem was that I was using a port number that was already used by some other, older version of Postgres.
I only needed to change the port.
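To see which process is actually holding the port (5432 assumed as the default Postgres port):
sudo lsof -i :5432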
I had the same error psql: FATAL: could not open relation mapping file "global/pg_filenode.map": No such file or directory.
Thanks for note #2 above: 'Kill the processes for 9.0.3'
I previously configured and compiled PostgreSQL. I then decided to reconfigure, gmake, gmake install with different file paths. The newly compiled program wasn't finding 'pg_filenode.map' in the expected filepath. Killing the running postgres process, emptying pgsql/data, and doing initdb again allowed creation of a new database.
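Roughly, those steps look like this (paths are assumed for a default source build; adjust to your configured prefix):
pg_ctl -D /usr/local/pgsql/data stop -m fast   # or kill the PID shown by ps aux | grep postgres
rm -rf /usr/local/pgsql/data/*
initdb -D /usr/local/pgsql/data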
Make sure to turn off your antivirus.
In my case, when I turned off the antivirus (Kaspersky), it worked fine.
Ref : https://github.com/PostgresApp/PostgresApp/issues/610
