I use Heroku and want to see logs in the JST timezone, not in UTC.
So I set the TZ config var via the Heroku CLI and confirmed that JST works by running the date command in a Heroku bash session, but heroku logs still shows timestamps in UTC.
Is there any way to fix this?
I assume that by changing the TZ environment variable you mean setting TZ on the dyno. This won't affect your logs, because log lines can come from many sources (Postgres, Redis, etc.), each with its own clock. Generally, it's best to keep all of your services running in UTC, which makes debugging across systems easier. If you need to view your logs in JST for analysis, I recommend setting up a logging add-on such as Papertrail: it lets you view logs in your preferred timezone while the main log stream stays in UTC.
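For quick local analysis you can also render a UTC log timestamp in JST without touching the dyno at all. A minimal sketch using GNU date (the timestamp below is a made-up example, not from a real log):

```shell
# A sample Heroku log timestamp; the log stream always emits UTC
ts="2016-05-04T01:07:19+00:00"

# Render the same instant in JST (UTC+9) with GNU date;
# on macOS, install coreutils and use gdate instead
TZ=Asia/Tokyo date -d "$ts" "+%Y-%m-%dT%H:%M:%S%z"
# → 2016-05-04T10:07:19+0900
```

The same TZ-prefix trick works for piping a whole log dump through a small conversion script, while the stored stream stays in UTC.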
Related
If I update config vars, the changes are not propagated to the website; I need to redeploy the application to see them.
The interesting part is that it worked without redeploying before, and we have not changed anything since then.
Here's an example of updating from the command line.
Are there any options that control this behavior?
Config var changes always trigger a dyno restart.
The only thing I can think of is preboot, which can introduce a few minutes of delay between the config change and the change actually becoming visible in the web application.
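For reference, preboot is toggled per app with the features commands (the app name below is a placeholder):

```shell
# List feature flags for the app (placeholder app name)
heroku features -a my-app

# With preboot enabled, new dynos boot and take traffic before the old
# ones are stopped, so a config change can take a few minutes to show up
heroku features:enable preboot -a my-app

# Disable it to make restarts (and config changes) visible immediately
heroku features:disable preboot -a my-app
```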
What is the difference between heroku ps:exec and heroku run bash? I am just trying to understand the concept. Both seem to establish an SSH tunnel to a remote container/dyno. So why does heroku ps:exec require a dyno restart on first use? It seems this command is more generic (since it uses a default shell), so what needs to be configured/installed for it?
heroku run bash creates a standalone one-off dyno (i.e. not associated with any particular process in your formation) that has your application code available and gives you a bash session. This is helpful for running one-off tasks like a database migration; it can also be helpful for debugging issues where you need to look at the filesystem.
heroku ps:exec tunnels to a dyno that is already running as part of your formation. For instance, if you had 5 web dynos, you could tunnel directly to web.3. This is useful when a particular dyno is exhibiting issues (memory pressure or high load, for example); being able to connect to the problematic dyno is very useful for debugging.
You should also note that your config vars (i.e. the environment variables set on the Heroku settings tab) are not set in a heroku ps:exec session.
I can't say for certain why a restart is required, but I imagine some configuration needs to change to enable connections to a dyno that is already running in the fleet.
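To make the contrast concrete, here are the two commands side by side (the app and dyno names are placeholders):

```shell
# One-off dyno: a fresh container with your app code and config vars,
# not part of the web/worker formation
heroku run bash -a my-app

# Tunnel into an already-running dyno from the formation, e.g. web.3
# (note: config vars are not set in this session)
heroku ps:exec --dyno=web.3 -a my-app
```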
Is there a way to change the settings within Logentries on Heroku so logs are sorted by the embedded log timestamp rather than the timestamp Heroku displays?
We are currently having issues with our logs being completely out of order when we let them sort by the Heroku timestamp. There seems to be a pretty big lack of options from what I can see in the portal.
We have Jelastic, and even with a VDS, date changes are not applied to my environments.
How can I change the date/time in Jelastic?
Thanks.
The VDS is a standalone system. The other servers in your environment are separate.
If you want to change the timezone for your other servers, you should ask your hosting provider to do it. However, in most cases you can also specify a timezone at the application level (e.g. "How to set a JVM TimeZone properly", "Managing timezones", etc.), so system time would only be relevant for log files and cron.
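As an illustration of setting the timezone at the application level rather than the system level (the jar and binary names are hypothetical):

```shell
# JVM: user.timezone overrides the system default for this process only
java -Duser.timezone=Europe/Paris -jar app.jar

# Most other runtimes honor the TZ environment variable at launch
TZ=Europe/Paris ./my-app
```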
With Jelastic, you can use the "TimeZone Change" add-on from the Marketplace.
Note: on some Docker containers it will not work.
See the official documentation: https://docs.jelastic.com/timezone-management
What is the recommended way to upgrade a Heroku Postgres production database to 9.2 with minimal downtime? Is it possible to use a follower, or should we take the pgbackups/snapshots route?
Until logical followers arrive in 9.4, you'll have to dump and restore (for the reasons Craig describes). You can simplify this with pgbackups:transfer. The direct transfer is faster than a dump and restore, but be aware that you won't have a snapshot to keep.
The script below is basically Heroku's "Using PG Backups to Upgrade Heroku Postgres Databases", modified to use pgbackups:transfer. (If you have multiple instances, say a staging server, add "-a" or "--remote" to each heroku command to specify which app.)
# get the pgbackups plugin
heroku plugins:install git://github.com/heroku/heroku-pg-extras.git
# provision new db
heroku addons:add heroku-postgresql:crane --version=9.2
# wait for it to come online, make note of new color
heroku pg:wait
# prevent new data from arriving during dump
heroku ps:scale worker=0 web=0
heroku maintenance:on
# copy over the DB. could take a while.
heroku pgbackups:transfer OLDCOLOR NEWCOLOR
# promote new database as default for DATABASE_URL
heroku pg:promote NEWCOLOR
# start everything back up and test
heroku ps:scale worker=N web=N
heroku maintenance:off
heroku open
# remove old database
heroku addons:remove HEROKU_POSTGRESQL_OLDCOLOR
Note that if you compare data sizes, the new database may be much smaller because of efficiencies in 9.2. (My 9.2 database was about 70% of the size of the 9.1 one.)
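If you want to check the sizes yourself, pg:info reports them for every attached database (the app name is a placeholder):

```shell
# Shows plan, Postgres version, and Data Size for each attached database
heroku pg:info -a my-app
```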
Heroku followers are, AFAIK, just PostgreSQL streaming replica servers. This means you can't use them across versions; the databases must be binary-compatible.
The same techniques should apply as for ordinary PostgreSQL, except that you may not be able to use pg_upgrade on Heroku. It requires shell (ssh, etc.) access as the postgres user on the system that hosts the database, so I doubt it's possible on Heroku unless they've provided a tool to run pg_upgrade for you. I can't find much information on this.
You will probably have to look at using Slony-I, Bucardo, or another trigger-based replication solution to do the upgrade unless you can find a way to run pg_upgrade on a Heroku database instance. The general idea is that you set up a new 9.2 instance, use Slony to clone data from the 9.1 instance into it, then once they're fully in sync you stop the 9.1 instance, remove the Slony triggers, and switch clients over to the 9.2 instance.
Search for more information on "postgresql low downtime upgrade slony" etc. and see how you go.