Restarting an interrupted heroku db:pull

I have a decently large DB that I'm trying to pull down locally from heroku via db:pull.
I can never stick around my machine long enough to keep it from going to sleep, which effectively kills the connection and terminates the process. GOTO 1.
I know I could change my system settings to stop my computer from sleeping, which would keep the connection alive, but is there a way to continue a previous pull?
Or maybe the solution is just not to use db:pull for a large db.

heroku db:pull supports resuming. When you start a pull it will create a .dat file in your project (and get rid of it when it's completed). You can do:
heroku db:pull --resume FILE # resume transfer described by a .dat file
to start the pull from the previous location.
Heroku pgbackups may be a better option for grabbing a large DB - http://devcenter.heroku.com/articles/pgbackups.
Although I'd be more inclined to prevent your computer from sleeping - just disable sleep while the download runs, via System Preferences or the Control Panel depending on your OS.
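For reference, the pgbackups flow from that article looks roughly like this (the local database name and user are placeholders for your own setup):
heroku pgbackups:capture --expire                 # take a fresh backup on Heroku's side
curl -o latest.dump "$(heroku pgbackups:url)"     # download it locally (a dropped curl can be resumed with -C -)
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U myuser -d mydb latest.dump
Because the download is just a plain HTTP transfer of a dump file, a flaky or sleepy laptop is much less painful than with db:pull.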

Related

Retry a transaction on Candy Machine

I am just finishing an upload of 8000 assets to candy machine (via the upload command). Everything seemed to be working well while it was creating the bundles and saving them to the cache, but once it started to write the indices I started seeing these two errors on and off:
1)
Waiting 5 seconds to check Bundlr balance.
Requesting a withdrawal of 0.638239951 SOL from Bundlr...
Successfully withdrew 0.638244951 SOL.
Writing all indices in 719 transactions...
Progress: [█░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░] 2% | 18/719
Transaction simulation failed: Blockhash not found
Failed writing indices 3682-3691: Transaction was not confirmed in 60.01 seconds. It is unknown if it succeeded or failed.
I have been searching the internet and from what I can tell these errors are out of my control... is this correct? Or what can I do to get these indices to write successfully? It's at 50% progress right now, but I assume the upload is not going to be successful when it finishes. If this is the case, do I need to run the candy machine upload command all over again, or is there a way for me to just run the transaction portion (where it started to fail) again? I've seen some notes on retry but it wasn't completely clear to me.
The upload process took about 2.5 hours so would like to avoid that if at all possible.
Help is very much appreciated.
Both errors are common, so you don't have to worry about them. You should use a custom RPC (via --rpc-url on the upload command) and wait until the upload command ends. When it finishes, run the verify_upload command to see whether everything went well; if verify_upload shows an error, run upload again and repeat until verify_upload shows the "ready to deploy" message. Re-running upload works from the local cache, so it should only retry the missing or failed items rather than redoing the whole 2.5-hour transfer.
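As a rough sketch of what that looks like with the candy machine v2 CLI (the entry-point path, keypair, environment, config file, RPC URL, and assets directory below are placeholders; adjust them to your setup and CLI version):
ts-node ~/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload -e mainnet-beta -k ~/.config/solana/id.json -cp config.json --rpc-url https://your-private-rpc.example.com ./assets
ts-node ~/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts verify_upload -e mainnet-beta -k ~/.config/solana/id.json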

writing file in heroku filesystem and reading it with web app

I have a worker process in my app running a script every hour. This script writes data to the file system, which the web app uses to update its contents. I've noticed that although the worker runs the process successfully, the data is not being updated. Is this at all related to the fact that Heroku's file system is read-only? If so, how can I write this file without having to get into databases?
Assuming your web and worker processes are declared on separate lines in your Procfile, they are running on separate Heroku dynos, each with its own file system.
So, your web process does not have access to your worker process's file system, or vice versa.
Furthermore, even if you ran both processes on the same dyno (which is possible, but NOT recommended), you still could not use the local dyno file system to reliably transfer information from one process to another: Heroku dynos can and will be recycled without prior warning at least once every 24 hours or so, and when that happens any files written to the local file system disappear.
Bottom line: you cannot, and should not try to, use the local Heroku file system for what you are trying to do. Instead, use some kind of stateful backing service, such as Heroku Redis.
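A minimal sketch of the backing-service route, assuming you go with Heroku Redis (the plan and app names are placeholders; check the plans currently on offer):
heroku addons:create heroku-redis:mini -a your-app    # one Redis instance, reachable from every dyno
heroku config:get REDIS_URL -a your-app               # both web and worker read the connection URL from this config var
The worker then writes its hourly results into Redis instead of onto the dyno file system, and the web process reads them back from Redis on each request.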

VB6 application keeps lock on Access (.mdb) database after creation, causing an error 3028

Our application builds an Access database (.mdb) and then uses the Shell command to start a different application, which needs read/write access to that same database. The problem is that on some systems our application erratically seems to retain an exclusive lock on the database, preventing the other application from accessing it. Only after closing the first application can the other application proceed.
The specific error raised is error 3028, which seems to be specific to DAO 3.51 (Access '97), which we indeed use. I cannot understand why some systems are affected (and even then not consistently) while others never are. I thought it might be a timing issue and built in a Sleep period between building the database and launching the other application, but that does not help.
What is going on?
EDIT:
I have now created a workaround: the database is created in a separate file and then copied. Now the second program should always be able to access it, and any remaining lock problems will surface in the first program, which I maintain. I will follow up later once our users have been able to test this.
Are you closing the connection to the DB before passing control to another EXE?
I previously had an issue which wasn't quite the same, but from what you have described this is the approach I would try.
Before launching the secondary application with the Shell command, alongside the sleep period you have already employed, you will also need to close the original program that generated the .mdb file.
I achieved this by shelling a windows batch file, and then immediately exiting the original program.
Batch file makeup as follows:
ping -n 5 localhost >NUL
start MSAccess.exe "C:\DB.mdb"
exit
This allows 5 seconds for the .mdb file to be freed up before launching; you could replace my MS Access call with your secondary program.

Why are there open connections on my Heroku app's PostgreSQL database? How do I close them?

My Heroku app is www.inflationtrends.com.
Usually, when I run "pg:info" in Git Bash to see how many connections there are, that number is zero.
Recently, I've seen a spike in traffic -- not much, only a little over 1,000 in the past 48 hours -- and when I ran "pg:info" this morning (around 11 a.m. Eastern time), the result showed 4 or 5 open connections.
My app is run using the Ruby gem Sinatra. In the Sinatra file, I have the following code:
after do
  DB.disconnect
end
The "after do" loop disconnects from the PostgreSQL database after a page is loaded.
The variable "DB" has the connection info for my PostgreSQL database (username, password, host, port number, SSL mode requirement):
DB = Sequel.postgres(
  db_name,
  :user => user,
  :password => password,
  :host => host,
  :port => port,
  :sslmode => sslmode
)
Is there some reason that there are open connections? Are there ways to close these connections? Are there more efficient ways to handle this situation?
An alternate way to check the number of open connections on Heroku is to type this into your console/terminal and replace "myapp" with your app's name:
heroku pg:info -a myapp
Have you considered that perhaps your site is simply getting traffic? When people visit your site and use your application, connections will be opened.
Try adding some tracking code (such as Google Analytics) to your web pages, then check whether the number of recorded visitors matches the number of open connections.
It is also possible that the database has connections opened by various maintenance tasks, such as backing up.
I grabbed the following Toolbelt plugin, which worked perfectly.
https://github.com/heroku/heroku-pg-extras#usage
heroku pg:killall --app xyz
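If you want to see what those connections are actually doing before killing them, you can open a psql session against the Heroku database and query pg_stat_activity (a sketch; replace xyz with your app name, and note that PostgreSQL versions before 9.2 name the columns procpid and current_query instead):
heroku pg:psql -a xyz
select pid, usename, state, query, backend_start from pg_stat_activity;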

MySQL database backup: performance issues

Folks,
I'm trying to set up a regular backup of a rather large production database (half a gig) that has both InnoDB and MyISAM tables. I've been using mysqldump so far, but it's taking increasingly long, and the server is completely unresponsive while mysqldump is running.
I wanted to ask for your advice: how do I either
Make mysqldump backup non-blocking - assign low priority to the process or something like that, OR
Find another backup mechanism that will be better/faster/non-blocking.
I know of the existence of MySQL Enterprise Backup product (http://www.mysql.com/products/enterprise/backup.html) - it's expensive and this is not an option for this project.
I've read about setting up a second server as a "replication slave", but that's not an option for me either (this requires hardware, which costs $$).
Thank you!
UPDATE: more info on my environment: Ubuntu, latest LAMPP, Amazon EC2.
If replication to a slave isn't an option, you could leverage the filesystem, depending on the OS you're using:
Consistent backup with Linux Logical Volume Manager (LVM) snapshots.
MySQL backups using ZFS snapshots.
The joys of backing up MySQL with ZFS...
I've used ZFS snapshots on a quite large MySQL database (30GB+) as a backup method and it completes very quickly (never more than a few minutes) and doesn't block. You can then mount the snapshot somewhere else and back it up to tape, etc.
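A rough sketch of the LVM variant, assuming the MySQL data directory lives on a logical volume (the volume group, LV names, and paths are placeholders); the read lock only needs to be held for the moment it takes to create the snapshot:
# inside a mysql client session, so the lock is still held while the snapshot is created:
FLUSH TABLES WITH READ LOCK;
system lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql-data
UNLOCK TABLES;
# back in the shell: mount the snapshot, archive it at leisure, then drop it
mount /dev/vg0/mysql-snap /mnt/mysql-snap
tar czf /backup/mysql-$(date +%F).tar.gz -C /mnt/mysql-snap .
umount /mnt/mysql-snap && lvremove -f /dev/vg0/mysql-snap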
Edit: (my previous answer suggested a slave DB to back up from, but then I noticed Alex ruled that out in his question.)
There's no reason your replication slave can't run on the same hardware, assuming the hardware can keep up. Grab a source tarball, ./configure --prefix=/dbslave; make; make install; and you'll have a second mysql server living completely under /dbslave.
EDIT2: Replication has a bunch of other benefits as well. For instance, with replication running, you may be able to recover the binlog and replay it on top of your last backup to recover the extra data after certain kinds of catastrophes.
EDIT3: You mention you're running on EC2. Another, somewhat contrived idea to keep costs down is to set up another instance with an EBS volume. Then use the AWS API to spin that instance up just long enough for it to catch up with writes from the binary log, dump/compress/send the snapshot, and then spin it down again. Not free, and labor-intensive to set up, but considerably cheaper than running the instance 24x7.
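The spin-up/spin-down piece might look roughly like this with today's AWS CLI (the instance ID is a placeholder; the same thing can be done with the older EC2 API tools):
aws ec2 start-instances --instance-ids i-0123456789abcdef0
# ...let the slave catch up on the binlog, take and compress the dump, ship it somewhere durable...
aws ec2 stop-instances --instance-ids i-0123456789abcdef0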
Try the mk-parallel-dump utility from Maatkit (http://www.maatkit.org/).
Something you might consider is using binary logs here, through a method called 'log shipping'. Just before every backup, issue a command to flush the binary logs, and then you can copy everything except the current binary log out via your regular file system operations.
The advantage of this method is that you're not locking up the database at all: when MySQL opens the next binary log in sequence, it releases the file locks on the prior logs, so processing shouldn't be affected. Tar them, zip them in place, do as you please, then copy them out as one file to your backup system.
Another advantage of using binary logs is that you can restore up to any point in time for which the logs are available. E.g. you have last year's full backup and every log from then to now, but you want to see what the database looked like on Jan 1st, 2011. You can issue a restore 'until 2011-01-01', and when it stops, you're at Jan 1st, 2011 as far as the database is concerned.
I've had to use this once to reverse the damage a hacker caused.
It is definitely worth checking out.
Please note... binary logs are USUALLY used for replication. Nothing says you HAVE to.
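A rough sketch of that workflow (the data directory, log base name, backup paths, and cut-off date are placeholders):
mysqladmin -u root -p flush-logs              # close the current binary log and start a new one
# copy every binlog except the newest one, which MySQL is still writing to:
ls /var/lib/mysql/mysql-bin.[0-9]* | head -n -1 | xargs -I{} cp {} /backup/binlogs/
# later, point-in-time recovery on top of a restored full backup:
mysqlbinlog --stop-datetime="2011-01-01 00:00:00" /backup/binlogs/mysql-bin.* | mysql -u root -p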
Adding to what Rich Adams and timdev have already suggested, write a cron job that is triggered during a low-usage period to perform the backup task as suggested, to avoid high CPU utilization during peak hours.
Check mk-parallel-dump also.
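For the cron part, a minimal sketch of a crontab entry (the 3 a.m. slot and script path are placeholders; use whatever your low-traffic window and backup script actually are):
# m h dom mon dow  command
0 3 * * * /usr/local/bin/mysql-backup.sh >> /var/log/mysql-backup.log 2>&1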
