heroku pg:pull not fetching tables from heroku database - heroku

I'm trying to pull a Heroku database to my local Windows computer using the Heroku CLI command
heroku pg:pull HEROKU_POSTGRESQL_COLOR mydatabase --app appname
When I run the above command I get the following error:
'env' is not recognized as an internal or external command, operable program or batch file.
The local database 'mydatabase' is created, but without any tables. My Heroku app's database has a table in it, but it is not getting pulled to my local database.
Help me to solve it.

A couple of things:
1. An error such as "'env' is not recognized as an internal or external command, operable program or batch file" means that the system is trying to execute a command named env. It has nothing at all to do with setting up your environment variables.
env is a Unix command, not a Windows one. I understand that you have a Windows machine, though. What you can do is run "Git Bash" (you can install it on its own, but it also comes with Heroku's CLI).
This gives you a Unix-like environment where the env command is supported, and there you can run the actual heroku pg:pull command.
2. If that still doesn't work, there is a workaround that works without installing anything extra. It is based on a ticket I submitted to Heroku, so I'm just going to quote their response:
"The pg:push command is just a wrapper around the pg_dump and pg_restore commands. Due to the bug you encountered, it sounds like we should go ahead and do things manually. Run these using cmd.exe (the Command Prompt application from which you first reported the bug). First grab the connection string from your Heroku application's config vars.
heroku config:get DATABASE_URL
Then you want to pick out the username / hostname / databasename parts from the connection string, i.e. postgres:// username : password @ hostname : port / databasename. Use those variables in the following command and paste in the password when prompted. This will dump the contents of your Heroku database to a local file.
pg_dump --verbose -F c -Z 0 -U username -h hostname -p port databasename > heroku.dump
Next you will load this file into your local database. One thing the CLI does before running this command is check that the target database is empty, because running this against a database with real data is something you want to avoid, so be careful with pg_restore. When running it manually you lose that check and risk mangling your data, so you may want to verify by hand that the target database is empty first.
pg_restore --verbose --no-acl --no-owner -h localhost -p 5432 -d mydb2 < heroku.dump
I am sorry this is not a better experience; I hope this helps you make progress. We are in the process of rewriting our pg commands so that they work better on all platforms including Windows, but there is no solid timeline for when this will be completed."
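The manual steps in that reply can be strung together in one script. This is only a sketch, assuming a POSIX shell (e.g. Git Bash on Windows), that the Heroku CLI and Postgres client tools are on PATH, and that the app and database names are the placeholders from the question; the naive URL parsing also assumes the password contains no ':' or '@'.

```shell
#!/bin/sh
# Manual equivalent of `heroku pg:pull` -- placeholders, not real names.
APP=appname
LOCAL_DB=mydatabase

# Connection string looks like: postgres://user:password@host:5432/dbname
DATABASE_URL=$(heroku config:get DATABASE_URL --app "$APP")

# Pick the parts out of the URL with plain shell string surgery.
rest=${DATABASE_URL#postgres://}   # user:password@host:port/dbname
userpass=${rest%%@*}               # user:password
hostpart=${rest#*@}                # host:port/dbname
PGUSER=${userpass%%:*}
PGPASS=${userpass#*:}
PGHOST=${hostpart%%:*}
portdb=${hostpart#*:}
PGPORT=${portdb%%/*}
PGDATABASE=${portdb#*/}

# Dump the remote database, then restore into the (empty!) local one.
PGPASSWORD="$PGPASS" pg_dump --verbose -F c -Z 0 \
  -U "$PGUSER" -h "$PGHOST" -p "$PGPORT" "$PGDATABASE" > heroku.dump
pg_restore --verbose --no-acl --no-owner \
  -h localhost -p 5432 -d "$LOCAL_DB" < heroku.dump
```

As the support reply warns, check that the local target database is empty before running the pg_restore step.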

To take a backup (a dump file) on Heroku, you first need the backups add-on:
$ heroku addons:add pgbackups
Then the commands below will capture a backup and download the dump file under the name latest.dump:
$ heroku pgbackups:capture
$ curl -o latest.dump `heroku pgbackups:url`
or
wget "`heroku pgbackups:url --app app-name`" -O backup.dump
Edited (after chatting with the user):
Problem: 'env' is not recognized as an internal or external command, operable program or batch file.
I suspect that the PATH entry pointing to a particular program is messed up. You can check it against the contents of the WINDOWS\system32 folder.
How to edit it:
My Computer > Advanced > Environment Variables
Then choose PATH and click the Edit button.

Related

Shell Script Issue Running Command Remotely using SSH

I have a deploy script in which I want to clear the cache of my CDN. When I am on the server and run the script everything is fine, but when I SSH in and run only that file (i.e. without actually getting into the server, cd-ing into the directory and running it) it fails and states that my doctl command cannot be found. This seems to be an issue only with this program over SSH; running systemctl --help works fine.
Please note that I have installed Digital Ocean's doctl using sudo snap install doctl and it is there.
Here is the .sh file (minus comments):
#!/bin/sh
doctl compute cdn flush [MYID] --files [*] # static cache
So I am not sure what the issue is. Anybody have an idea?
Again, if I get into the server and run the file all works, but here is the SSH command I use that returns the error:
ssh root@123.45.678.999 "/deploy/clear_digital_ocean_cache.sh"
And here is the error.
/deploy/clear_digital_ocean_cache.sh: 10: doctl: not found
Well, one solution was to change the command to an absolute path inside my .sh file, like so:
#!/bin/sh
/snap/bin/doctl compute cdn flush [MYID] --files [*] # static cache
I realized that I could run my user commands over SSH (like systemctl), so it was either change where doctl was located (i.e. put it in the user bin) or ensure that the command was called with an absolute path by adding /snap/bin/ in front of it.

Using heroku pg:backups:restore to Import to Heroku Postgres

I am trying to copy a local PostgreSQL database to Heroku per this article.
Here is what I have done:
1. Make a dump file
pg_dump -Fc --no-acl --no-owner -h localhost -U postgres mydb > mydb.dump
2. Upload the dump file to the aws my-bucket-name/db-backup folder.
aws s3 cp mydb.dump s3://my-bucket-name/db-backup/mydb.dump
3. Generate a signed URL:
aws s3 presign s3://my-bucket-name/db-backup/mydb.dump --region us-east-2
4. Verify that the signed URL is accessible.
Navigate to the presigned URL in an incognito tab of a browser. It works.
5. Back up to Heroku using the generated signed URL
I am using double quotes around GENERATED_URL because I'm on Windows:
heroku pg:backups:restore --app my-app-name --confirm my-app-name "GENERATED_URL"
For example:
heroku pg:backups:restore --app my-app-name --confirm my-app-name "https://s3.us-east-2.amazonaws.com/s3.console.aws.amazon.com/s3/buckets/my-bucket-name/db-backup/mydb.dump?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIABCDVKE2GXCY3YXL7V%2F20200934%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Date=20200924T164718Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=fb2f51c0d7fbe1234e3740cf23c37f003575d968a1e4961684a47ac627fbae2e"
THE RESULT
I get the following errors:
Restoring... !
! An error occurred and the backup did not finish.
!
! Could not initialize transfer
!
! Run heroku pg:backups:info r021 for more details.
'X-Amz-Credential' is not recognized as an internal or external command,
operable program or batch file.
'X-Amz-Date' is not recognized as an internal or external command,
operable program or batch file.
'X-Amz-Expires' is not recognized as an internal or external command,
operable program or batch file.
'X-Amz-SignedHeaders' is not recognized as an internal or external command,
operable program or batch file.
'X-Amz-Signature' is not recognized as an internal or external command,
operable program or batch file.
I've found others with similar problems, but no solutions. Thanks in advance to anyone who can help.
This is resolved. There were two issues:
1. PowerShell wasn't properly escaping characters (note the 'X-Amz-...' errors above: the & characters in the URL were being treated as command separators), so I switched to CMD.
2. The dump file was invalid.
This line of code produced an invalid dump file:
pg_dump -Fc --no-acl --no-owner -h localhost -U postgres mydb > mydb.dump
Instead, I needed to use the following syntax:
pg_dump -Fc --no-acl --no-owner -h localhost -U postgres -d mydb -f mydb.dump
After making that change, all worked smoothly.
For what it's worth, I had the same issue and my solution was to copy the S3 URL which is formatted as https://s3.amazonaws.com/<bucket_name>/<dump_file>.dump. For some reason the pre-signed URL approach did not work but the public URL did.

Is there a way to run one bash file which executes commands in local and server terminal also

I am running bash files to make a Mongo dump on a daily basis. In a local directory I run one bash file which connects to the server terminal, and on the server I run another file which makes the Mongo dump.
Is it possible to make one file which connects to the MongoDB server and runs the commands on the server?
I tried many commands, but I could not run the server-side commands from one bash file: once the server terminal opens up, the remaining commands do not execute.
So: is it possible to have one bash file that also executes the commands on the server?
Connect to your DB remotely using this command :
mongo --username username --password secretstuff --host YOURSERVERIP --port 28015
You can then automate this by putting the relevant commands (including the one above) in a bash script that you can run from anywhere.
To solve the above problem, the answer from Matias Barrios seems correct to me: don't use a script on the server, but use tools on your local machine that connect to the server's services and manage them.
Nevertheless, to execute a script on a distant server you can use ssh. This is not the right solution in your case, but it answers the question in your title.
ssh myuser@MongoServer ./script.sh param1
This can be used in a local script to execute script.sh on the server MongoServer (with param1) under the system privileges of the user myuser.
Beforehand, don't forget to avoid the password prompt with
ssh-copy-id myuser@MongoServer
This copies your ssh public key into the myuser account on MongoServer.
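Combining both answers, a single local script can run the whole dump remotely over one SSH connection and optionally fetch the result. A sketch; the host alias, user and paths are placeholders, and it assumes mongodump is installed on the server and key-based SSH auth is set up:

```shell
#!/bin/sh
# One local script: the single-quoted block runs entirely on the server.
REMOTE=myuser@MongoServer
STAMP=$(date +%Y-%m-%d)

ssh "$REMOTE" "mongodump --host localhost --port 27017 --out /backups/mongo-$STAMP"

# Optionally pull the dump back to the local machine afterwards.
scp -r "$REMOTE:/backups/mongo-$STAMP" ./backups/
```

Because the commands are passed as arguments to ssh rather than typed into an interactive session, nothing is left waiting after "the server terminal opens up" -- ssh runs the command, returns, and the script continues.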

Unable to run psql command from within a BASH script

I have run into a problem with the psql command in my BASH script: I am trying to log in to my local postgres database and submit a query. I am using the command in the following way:
psql -U postgres -d rebasoft_appauditor -c "SELECT * FROM katan_scripts"
However, I get the following error message.
psql: FATAL: Ident authentication failed for user "postgres"
This runs perfectly fine from the command line after I appended the following lines to /var/lib/pgsql/data/pg_hba.conf:
local all all trust
host all all 127.0.0.1/32 trust
Also, could these lines please be verified for correctness?
I find it rather strange that database authentication works fine on the command line but fails in a script. Could anyone please help with this?
Note: I am using Mac OS X.
It might depend on your bash script.
Watch out for the asterisk (*) being replaced by the shell with the file names in your current directory. Also, a semicolon or \g might help to actually send the SQL statement to the database server.
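A defensive way to write that invocation in a script, as a sketch: single quotes guarantee the shell never expands the '*', and -h 127.0.0.1 forces a TCP connection so the 'host all all 127.0.0.1/32 trust' line from pg_hba.conf applies, rather than whatever rule matches Unix-socket logins (which is where "Ident authentication failed" typically comes from).

```shell
#!/bin/sh
# Force TCP (-h) so the 'host ... trust' rule matches, and single-quote
# the query so '*' and ';' reach psql untouched by the shell.
psql -U postgres -h 127.0.0.1 -d rebasoft_appauditor \
     -c 'SELECT * FROM katan_scripts;'
```

Note that pg_hba.conf changes only take effect after the server reloads its configuration (e.g. pg_ctl reload), which can also explain "works here, fails there" symptoms.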

How can I download a file from Heroku bash?

I ran a ruby script from Heroku bash that generates a CSV file on the server that I want to download. I tried moving it to the public folder to download, but that didn't work. I figured out that after every Heroku bash session the files are deleted. Is there a command to download directly from the Heroku bash console?
If you manage to create the file from heroku run bash, you could use transfer.sh.
You can even encrypt the file before you transfer it.
cat <file_name> | gpg -ac -o- | curl -X PUT -T "-" https://transfer.sh/<file_name>.gpg
And then download and decrypt it on the target machine
curl https://transfer.sh/<hash>/<file_name>.gpg | gpg -o- > <file_name>
There is heroku ps:copy:
#$ heroku help ps:copy
Copy a file from a dyno to the local filesystem
USAGE
$ heroku ps:copy FILE
OPTIONS
-a, --app=app (required) app to run command against
-d, --dyno=dyno specify the dyno to connect to
-o, --output=output the name of the output file
-r, --remote=remote git remote of app to use
DESCRIPTION
Example:
$ heroku ps:copy FILENAME --app murmuring-headland-14719
Example run:
#$ heroku ps:copy app.json --app=app-example-prod --output=app.json.from-heroku
Copying app.json to app.json.from-heroku
Establishing credentials... done
Connecting to web.1 on ⬢ app-example-prod...
Downloading... ████████████████████████▏ 100% 00:00
Caveat
This seems not to run with dynos that are run via heroku run.
Example
#$ heroku ps:copy tmp/some.log --app app-example-prod --dyno run.6039 --output=tmp/some.heroku.log
Copying tmp/some.log to tmp/some.heroku.log
Establishing credentials... error
▸ Could not connect to dyno!
▸ Check if the dyno is running with `heroku ps'
It is! Prove:
#$ heroku ps --app app-example-prod
=== run: one-off processes (1)
run.6039 (Standard-1X): up 2019/08/29 12:09:13 +0200 (~ 16m ago): bash
=== web (Standard-2X): elixir --sname dyno -S mix phx.server --no-compile (2)
web.1: up 2019/08/29 10:41:35 +0200 (~ 1h ago)
web.2: up 2019/08/29 10:41:39 +0200 (~ 1h ago)
I could connect to web.1 though:
#$ heroku ps:copy tmp/some.log --app app-example-prod --dyno web.1 --output=tmp/some.heroku.log
Copying tmp/some.log to tmp/some.heroku.log
Establishing credentials... done
Connecting to web.1 on ⬢ app-example-prod...
▸ ERROR: Could not transfer the file!
▸ Make sure the filename is correct.
So I fell back to using SCP (scp -P PORT tmp/some.log user@host:/path/some.heroku.log) from the run.6039 dyno command line.
Now that https://transfer.sh is defunct, https://file.io is an alternative. To upload myfile.csv:
$ curl -F "file=@myfile.csv" https://file.io
The response will include a link you can access the file at:
{"success":true,"key":"2ojE41","link":"https://file.io/2ojE41","expiry":"14 days"}
I can't vouch for the security of file.io, so using encryption as described in other answers could be a good idea.
Heroku dyno filesystems are ephemeral, non-persistent and not shared between dynos. So when you do heroku run bash, you actually get a new dyno with a fresh deployment of your app, without any of the changes made to the ephemeral filesystems of other dynos.
If you want to do something like this, you should probably either do it all in a heroku run bash session or all in a request to a web app running on Heroku that responds with the CSV file you want.
I did it as follows:
First I entered Heroku bash with this command:
heroku run 'sh'
Then I made a directory and moved the file into it
Made a git repository and committed the file
Finally I pushed this repository to GitHub
Before committing, git will ask you for your name and email. Give it something fake!
If you have files bigger than 100 MB, push to GitLab.
If there is an easier way, please let me know!
Sorry for my bad English.
Another way of doing this (that doesn't involve any third server) is to use Patrick's method, but first encode the file into a format that only uses printable ASCII characters. That makes it work for any file, regardless of whitespace characters or unusual encodings. I'd recommend base64 for this.
Here's how I've done it:
Log onto your heroku instance using heroku run bash
Use base64 to print the contents of your file: base64 <your-file>
Select the base64 text in your terminal and copy it
On your local machine, decode this text using base64 straight into a new file (on a Mac I'd do pbpaste | base64 --decode -o <your-file>)
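The copy/paste in those steps can also be avoided by streaming the base64 text out of the dyno and decoding it locally in one pipeline. A sketch, assuming a Heroku CLI recent enough to support the --no-tty flag on heroku run (worth checking with heroku run --help), and that any CLI status lines go to stderr rather than stdout:

```shell
# Encode on the dyno, decode locally; app and file names are placeholders.
heroku run --no-tty --app my-app-name "base64 path/to/file.csv" \
  | base64 --decode > file.csv
```

If stray status lines do end up on stdout, fall back to the manual select-and-paste steps above, which are immune to that problem.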
I agree that most probably your need calls for a change in your application architecture, something like a worker dyno.
But by executing the following steps you can transfer the file, since a Heroku one-off dyno can run scp:
create vm in a cloud provider, e.g. digital ocean;
run heroku one-off dyno and create your file;
scp file from heroku one-off dyno to that vm server;
scp file from vm server to your local machine;
delete cloud vm and stop heroku one-off dyno.
I see that these answers are much older, so I'm assuming this is a new feature. For all those like me who are looking for an easier solution than the excellent answers already here: Heroku can now copy files quite easily with the following command: heroku ps:copy <filename>
Note that this works with relative paths, as you'd expect. (Tested on a heroku-18 stack, downloading files at "path/to/file.ext".)
For reference: Heroku docs
Heroku dynos come with sftp pre-installed. I tried git, but it involved too many steps (I had to generate a new ssh cert and add it to GitHub every time), so now I am using sftp and it works great.
You'll need another host (like DreamHost, HostGator, GoDaddy, etc.), but if you have one you can:
sftp username@ftp.yourhostname.com
Accept the server fingerprint/hash, then enter your password.
Once on the server, navigate to the folder you want to upload to (using cd and ls commands).
Then use the command put filename.csv and it will upload it to your web host.
To retrieve your file: Use an ftp client like filezilla or hit the url if you uploaded to a folder in the www or website folder path.
This is great because it also works with multiple files and binaries as well as text files.
For small/quick transfers that fit comfortably in the clipboard:
Open a terminal on your local device
Run heroku run bash
(Inside your remote connection, on the dyno) Run cat filename
Select the lines in your local terminal and copy them to your clipboard.
Check to ensure proper newlines when pasting them.
Now I have created a shell script to upload some files to a git backup repo (for example, my app.db sqlite file is gitignored and every deploy kills it):
## upload dyno files to git via SSH session
## https://devcenter.heroku.com/changelog-items/1112
# heroku ps:exec
git config --global user.email 'dmitry.cheva@gmail.com'
git config --global user.name 'Dmitry Cheva'
rm -rf ./.gitignore
git init
## add each file separately (-f to add git ignored files)
git add app.db -f
git commit -m "backup on `date +'%Y-%m-%d %H:%M:%S'`"
git remote add origin https://bitbucket.org/cheva/appbackup.git
git push -u origin master -f
The dyno environment resets after each deploy and does not keep this configuration, so you need to perform the first three commands again each time.
Then add the files (-f for the gitignored ones) and force-push to the repo (-f, because git would otherwise require a pull).
