Heroku DB deployment

I'm attempting my first Heroku deployment and am having trouble restoring my database backup. When I run
heroku pg:backups:restore "https://s3.us-east-2.amazonaws.com/myusername/POSTGRESQL.dump" DATABASE_URL --app MyAppName
I receive the error:
! An error occurred and the backup did not finish.
!
! pg_restore: [archiver] did not find magic string in file header
! pg_restore finished with errors
! waiting for download to complete
! download finished with errors
! please check the source URL and ensure it is publicly accessible
!
! Run heroku pg:backups:info r006 for more details.
And sometimes the error is:
Starting restore of https://s3.us-east-2.amazonaws.com/talXXXXXXXX to postgresql-XXXXXXXXX... done
Use Ctrl-C at any time to stop monitoring progress; the ba
Use heroku pg:backups to check progress.
Stop a running restore with heroku pg:backups:cancel.
Restoring... !
! An error occurred and the backup did not finish.
!
! waiting for restore to complete
! pg_restore finished with errors
! waiting for download to complete
! download finished with errors
! please check the source URL and ensure it is publicly accessible
!
! Run heroku pg:backups:info r015 for more details.
I have confirmed from various browsers that the URL is publicly accessible and I can download the file. I'm using double quotes around the URL, as recommended for Windows. What am I doing wrong?

I was facing the same problem. It turned out to be a problem with my dump file: I wasn't compressing it properly.
--format=c selects custom as the output format (the same as -Fc). It compresses the file by default, but that wasn't enough, so I also used the --compress flag.
This flag sets the compression level; it goes from 0 (no compression) to 9 (maximum compression).
I used 9, just in case, and my command ended up like this:
pg_dump --format=c --compress=9 --no-acl --no-owner -h THE_HOST -U YOUR_USER THE_DATABASE > YOUR_FILE.dump
And it worked.
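In case it helps someone, here is a rough sketch of how the dump can then be uploaded and restored, assuming the AWS CLI is configured and using placeholder bucket, file, and app names:
aws s3 cp YOUR_FILE.dump s3://your-bucket/YOUR_FILE.dump
aws s3 presign s3://your-bucket/YOUR_FILE.dump --expires-in 3600
heroku pg:backups:restore "<signed URL printed by the previous command>" DATABASE_URL --app YOUR_APP
Using a pre-signed URL also avoids having to make the object publicly readable.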

In the end I couldn't work out how to import to Heroku, even though they advertise it can be done at https://devcenter.heroku.com/articles/heroku-postgres-import-export#import
So I used a DB client like DBeaver, converted my DB dump into an SQL script, and ran the script manually to import the data.
:(
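For anyone taking the same route: one way to do that conversion locally, assuming the dump is in pg_restore's custom format (file names are placeholders), is to have pg_restore emit a plain SQL script instead of restoring directly:
pg_restore --no-owner --no-acl -f script.sql YOUR_FILE.dump
The resulting script.sql can then be run from a client like DBeaver or with psql.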

I was able to get this working by creating the backup through pgAdmin 4, rather than through the console dump command from Heroku's official tutorial.

I also could only get this working through pgAdmin. Here's the command line that pgAdmin's backup job used. This was on Windows 10, but the pg_dump options should work on any OS.
C:\Program Files\PostgreSQL\13\bin\pg_dump.exe --file "mydump.dump" --host "localhost" --port "5432" --username "postgres" --no-password --verbose --format=c --blobs --compress "8" --no-owner --section=pre-data --section=data --section=post-data --no-privileges --no-tablespaces --no-unlogged-table-data --no-comments "mydatabase"
I then uploaded that to a public Azure blob file and restored it like this:
heroku pg:backups:restore "https://redacted.blob.core.windows.net/piis2/mydump.dump" DATABASE_URL
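If you want a quick local sanity check before uploading, assuming pg_restore is on your PATH, ask it to list the archive's table of contents; a file that isn't a valid custom-format archive will error out here instead of listing its contents:
pg_restore --list mydump.dump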

Related

PostgreSQL "Database cluster initialisation failed"

I got the error in question on Windows 11 with PostgreSQL 12.
I followed the steps below, suggested by ASL:
initdb -D "D:\PostgreSql\12\data" -U postgres
pg_ctl start -D "D:\PostgreSql\12\data"
Now, after the last step, I am getting an error message.
It says:
waiting for server to start....
postgres: could not find the database system
Expected to find it in the directory "D:/postgresqldata", but could not open file "D:/postgresqldata/global/pg_control": No such file or directory
stopped waiting
pg_ctl: could not start server.
Can someone please help me to resolve this issue?
Please check the status of the postgresql service in Windows Services and ensure it is running.
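A sketch of how to check and start it from an elevated command prompt; the exact service name depends on the installer and version, so postgresql-x64-12 below is only a guess (services.msc shows the real name):
sc query postgresql-x64-12
net start postgresql-x64-12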

certificate has expired error whilst running heroku commands

I'm trying to do the following with my hosted app on Heroku:
create a database backup
download the database backup
Restore the backup in a local postgres database
However, I get stuck at the very first step. Running the command below throws the following error:
heroku pg:backups:capture -a app-name
CERT_HAS_EXPIRED: certificate has expired
I even tried running the following command; however, that did not help either:
heroku run:detached pg:backups capture -a app-name
Running pg:backups:capture on ⬢ app-name... done, run.1879 (Free)
Run heroku logs --app app-name --dyno run.1879 to view the output.
(env) E:\new_website\>heroku logs --app app-name --dyno run.1879
2021-10-29T05:18:36.001962+00:00 heroku[run.1879]: Starting process with command pg:backups:capture
2021-10-29T05:18:36.564987+00:00 heroku[run.1879]: State changed from starting to up
2021-10-29T05:18:36.953570+00:00 app[run.1879]: bash: pg:backups:capture: command not found
2021-10-29T05:18:37.068925+00:00 heroku[run.1879]: Process exited with status 127
2021-10-29T05:18:37.103435+00:00 heroku[run.1879]: State changed from up to complete
Finally, I also tried using the HEROKU_DEBUG environment variable to see what the real error was:
(env) E:\new_website\>SET HEROKU_DEBUG=1
(env) E:\new_website\>heroku pg:backups:capture --app app-name
Adding the following trusted certificate authorities
E:\ap01\Ruby\cacert.pem
--> POST /actions/addon-attachments/resolve
--> {"app":"neevista-web","addon_attachment":"DATABASE_URL","addon_service":"heroku-postgresql"}
<-- 200 OK
<-- [{"addon":{"id":"9ccc7a6d-8001-473a-8ef1-e24614ad26c0","name":"postgresql-curly-92807","app":{"id":"587ec79f-3989-40eb-bceb-17220824a275","name":"app-name"},"plan":{"id":"062a1cc7-f79f-404c-9f91-135f70175577","name":"heroku-postgresql:hobby-dev"}},"app":{"id":"587ec79f-3989-40eb-bceb-17220824a275","name":"app-name"},"id":"5a2ba589-22e6-4c42-a3bd-d634b4581eb5","name":"DATABASE","namespace":null,"created_at":"2021-06-21T15:05:48Z","updated_at":"2021-06-21T15:05:48Z","web_url":"https://addons-sso.heroku.com/apps/587ec79f-3989-40eb-bceb-17220824a275/addons/9ccc7a6d-8001-473a-8ef1-e24614ad26c0","log_input_url":null,"config_vars":["DATABASE_URL"]}]
Adding the following trusted certificate authorities
E:\ap01\Ruby\cacert.pem
--> GET /client/v11/databases/9ccc7a6d-8001-473a-8ef1-e24614ad26c0
! CERT_HAS_EXPIRED: certificate has expired
Error: certificate has expired
at TLSSocket.onConnectSecure (_tls_wrap.js:1502:34)
at TLSSocket.emit (events.js:314:20)
at TLSSocket._finishInit (_tls_wrap.js:937:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:711:12)
I've tried updating the Heroku CLI and restarting the Heroku app, but nothing has helped. Not sure if this helps, but I have two versions of PostgreSQL (9 and 13) on this Windows machine, and my app is using version 13.
Could you please advise what I'm doing wrong here or if there is an ongoing issue at your end?
I finally figured it out myself, after Heroku support failed to assist because they classified the issue as a local problem.
For some reason, I had previously set up the SSL_CERT_FILE environment variable. It was pointing to a certificate PEM file: E:\ap01\Ruby\cacert.pem
After deleting this environment variable, heroku commands started to work again.
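For reference, a sketch of one way to check for and remove the variable on Windows (the reg delete covers user-level variables only; you can also do this through System Properties > Environment Variables):
rem show the current value in this cmd session
echo %SSL_CERT_FILE%
rem clear it for the current session only
set SSL_CERT_FILE=
rem remove the persistent user-level variable
reg delete "HKCU\Environment" /f /v SSL_CERT_FILE
Open a new terminal afterwards so the change takes effect.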

Error with starting PostgreSQL database on macOS

I am running the latest version of macOS Sierra and I installed PostgreSQL via brew. Then I ran the command:
pg_ctl -D /Users/tmo/PSQL-data -l logfile start
but received for output:
waiting for server to start..../bin/sh: logfile: Permission denied
stopped waiting
pg_ctl: could not start server
Examine the log output.
EDIT: After restarting my operating system and rerunning the command... I'm now receiving a slightly modified output... but the modification is significant.
waiting for server to start.... stopped waiting
pg_ctl: could not start server
Examine the log output.
Where is the "log output" stored?
How do I make this command work?
The problem could be one of two things, as far as I can see:
A typo in your database path:
/Users/tmo/PSQL-data --> /Users/tmp/PSQL-data
If the above was just a transcription error, I would guess that your postgres user doesn't have write access to the directory where you are putting the logfile. The argument following the -l switch tells PG where to save the logfile. When you give -l just a filename rather than a path, it will use the same directory you specified for the database cluster (with the -D flag). So in this case, PG is trying to write to /Users/tmp/PSQL-data/logfile and getting a permission error.
To fix this, I would try:
If the directory /Users/tmp/PSQL-data/ doesn't exist:
sudo mkdir /Users/tmp/PSQL-data
Then create the logfile manually:
sudo touch /Users/tmp/PSQL-data/logfile
Then make the postgres user own the file (I'm assuming the user is postgres here):
sudo chown postgres /Users/tmp/PSQL-data/logfile
Try again, and hopefully you can launch the server.
Caveat: I'm not a macOS user, so I'm not sure how the /tmp folder behaves. If it is periodically cleared, you may want to specify a different logfile location, so that you don't need to create and chown the file each time you need to launch the cluster.
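An alternative sketch that sidesteps the permission issue: point -l at an absolute path in a directory the invoking user already owns (paths here are placeholders):
mkdir -p ~/pg-logs
pg_ctl -D /Users/tmo/PSQL-data -l ~/pg-logs/postgres.log start
That way nothing needs to be created or chown'd ahead of time.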

Heroku ps:exec won't connect

I'm quite a newbie on Heroku, so sorry for the silly question.
My problem is that I don't know what the error is, but one thing is for sure: I can't connect through SSH to my Heroku server. If possible, I would like to know how to get a more detailed error log or something like that. I've tried adding the -v or --verbose switch to the heroku command, but no luck.
Below are the exact steps I took; basically, I followed what is written on Heroku's site.
Here's how to connect through SSH according to Heroku's tutorial:
xxx#yyy:~/xwprog/heroku-sample-gradle1
$ heroku ps:exec
Establishing credentials... done
Connecting to web.1 on ⬢ guarded-fjord-42322...
▸ There was an error connecting to the dyno!
process currently running
xxx#yyy:~/xwprog/heroku-sample-gradle1
$ heroku ps
Free dyno hours quota remaining this month: 550h 0m (100%)
For more information on dyno sleeping and how to upgrade, see:
https://devcenter.heroku.com/articles/dyno-sleeping
=== web (Free): build/install/gradle-getting-started/bin/gradle-getting-started (1)
web.1: up 2018/03/10 16:56:04 +0700 (~ 16m ago)
I tried running bash instead, and that actually works. The downside is that each time I disconnect, the state gets reset, and the session is prone to disconnecting with the error message ECONNRESET: read ECONNRESET.
xxx#yyy:~/xwprog/heroku-sample-gradle1
$ heroku run bash
Running bash on ⬢ guarded-fjord-42322... up, run.3587 (Free)
~ $
Thank you.

How can I push .Net Framework API to PCF?

I am trying to push my .Net Native API to Pivotal Cloud Foundry. Below is the command I am using to push my API.
cf push API-Name -s windows2012R2 -b binary_buildpack -c "start" -m 1G -p C:/Path
While running, it says "No start command detected", but when I ran -c ? it showed me that start was a valid command. Then when I look at the log file, it shows me:
ERR Could not determine a start command. Use the -c flag to 'cf push' to specify a custom start command.
and at the end it says:
ERR Failed to create container
"reason"=>"CRASHED", "exit_description"=>"failed to initialize container"
Am I running the command wrong or is there something I need to do to my API to make it compatible?
I figured out that I had to turn the health check off, and my app and all instances are started now.
cf set-health-check NAME none
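For context, a sketch of the full sequence using cf CLI v6-era syntax to match the commands above; the app name, stack, and path are the placeholders already used in the question:
cf push API-Name -s windows2012R2 -b binary_buildpack -c "start" -m 1G -p C:/Path --no-start
cf set-health-check API-Name none
cf start API-Name
Pushing with --no-start lets you change the health check type before the first start attempt, so the app isn't flagged as crashed while it boots.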
