I have a site hosted on Heroku. It is an e-commerce site that uploads users' designs (JPG, PNG, PSD, AI files, PDFs, etc.), but these are stored on Amazon, not Heroku.
Right now I'm trying to get my costs down, so I'm checking the resources and I see this:
As you can see, I have a Heroku Postgres add-on on the Standard 0 plan that costs $50 monthly.
I'm trying to bring the Heroku Postgres add-on cost down.
According to this Heroku page, there are Postgres plans at $9.
However, that plan has a row limit and lists no storage capacity.
How could this affect my app?
How do I change to this plan?
heroku pg:info
=== DATABASE_URL, HEROKU_POSTGRESQL_GOLD_URL
Plan: Standard 0
Status: Available
Data Size: 18.3 MB
Tables: 26
PG Version: 10.7
Connections: 8/120
Connection Pooling: Available
Credentials: 1
Fork/Follow: Available
Rollback: earliest from 2019-04-30 19:23 UTC
Created: 2019-03-05 16:08 UTC
Region: us
Data Encryption: In Use
Continuous Protection: On
Maintenance: not required
Maintenance window: Fridays 18:30 to 22:30 UTC
Add-on: postgresql-polished-46417
=== HEROKU_POSTGRESQL_BROWN_URL
Plan: Hobby-dev
Status: Available
Connections: 0/20
PG Version: 10.6
Created: 2019-02-14 17:19 UTC
Data Size: 7.6 MB
Tables: 0
Rows: 0/10000 (In compliance)
Fork/Follow: Unsupported
Rollback: Unsupported
Continuous Protection: Off
Add-on: postgresql-opaque-83191
Billing:
In my billing account I'm seeing that for March and April I'm being charged more than $50 USD.
How can I limit this to, let's say, $20 USD monthly?
**We're only expecting 100 visits, tops!**
However, according to this Medium post, visits are not a metric we're being charged for. I just mentioned the 100 visits as a reference.
The free plan is limited to 10k rows, correct, and once you've reached that limit your inserts will start to fail. Been there.
It's really just as written: 10k rows, no storage limit. You could have a text column with a huge JSON document or file in it and it wouldn't matter storage-wise, as long as you stay under the row count limit.
You would only be affected when you have to upgrade to a bigger plan, say the $9 one, which gives you 10M rows; it's not an in-place upgrade of your database but a migration to a new database, which you'd have to perform.
So in order to migrate, you would have to put your app in maintenance mode, add the $9 database, make it 'follow' your free database, wait a couple of minutes while the $9 database catches up with the free database's data, then make the $9 database stop following the free database, and lastly switch your app to the new $9 database.
This last step, if you're using the DATABASE_URL environment variable, is transparent to the app. Just detach the free database and attach the $9 database, take your app out of maintenance mode, and you're done.
If you can afford an hour or two of downtime, it's worth the savings. You can script this migration, as it only uses Heroku commands.
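A rough sketch of such a script for moving the Standard 0 database shown above to the $9 hobby-basic plan, assuming the app is called my-app; hobby plans don't support followers, so this uses pg:copy instead of the follow/unfollow steps, and HEROKU_POSTGRESQL_PINK_URL is a placeholder for whatever color name Heroku assigns to the new database:

heroku maintenance:on -a my-app
heroku addons:create heroku-postgresql:hobby-basic -a my-app    # the $9 plan
heroku pg:wait -a my-app                                        # wait until the new database is provisioned
heroku pg:copy DATABASE_URL HEROKU_POSTGRESQL_PINK_URL -a my-app --confirm my-app
heroku pg:promote HEROKU_POSTGRESQL_PINK_URL -a my-app          # repoints DATABASE_URL at the new database
heroku maintenance:off -a my-app
# after verifying the app works, remove the old Standard 0 add-on
heroku addons:destroy postgresql-polished-46417 -a my-app --confirm my-app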
Another easy way to have a free and reliable database is to use AWS. Since Heroku runs inside AWS, you just have to set up an RDS instance in the same region as your app and switch the connection. You would then have a free 30 GB database for a year.
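Pointing the app at RDS is then just a config change; the hostname, credentials and database name below are placeholders for whatever RDS gives you, and if DATABASE_URL is still managed by the Heroku Postgres add-on, you'd detach or destroy that add-on first:

heroku config:set DATABASE_URL="postgres://dbuser:dbpass@mydb-instance.abc123.us-east-1.rds.amazonaws.com:5432/mydb" -a my-app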
Related
I would like to plan and test my database recovery at another site (another instance on another server at the disaster recovery site).
I take an RMAN level 0 image copy every month and daily incremental level 1 backups.
The database is running in NOARCHIVELOG mode. The online redo logs are multiplexed to a disk at the disaster recovery site. We also have a recovery catalog on another server.
I want to test restoring the most recent (yesterday's) backup to the database at the disaster recovery site and then recover by applying only the online redo log files. How can I achieve that?
Side question: is it sufficient to recover if we only have yesterday's backup and the online redo logs contain all of today's transactions, none of which have been overwritten, given that the database is in NOARCHIVELOG mode?
What is the use of ARCHIVELOG mode if we have a daily backup and the redo logs are not overwritten during the day until the backup is taken?
What is the use of backing up archive logs?
You are working with a dangerous setup, since you seem to be betting on redo log files that never fill up between your backups. If your data has no value, go ahead; otherwise, switch to ARCHIVELOG mode.
Archives are created when a redo log group fills up. So, in your case you need to copy the online redo log files manually to the remote site for recovery.
How sure are you about the redo log files not being overwritten?
Be sensible: if this is production, switch to ARCHIVELOG mode. Otherwise, promise not to make promises about being able to do point-in-time recoveries.
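For reference, switching to ARCHIVELOG mode is only a short outage; a minimal sketch, run as SYSDBA after taking a backup:

sqlplus / as sysdba <<'EOF'
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
REM confirm the new log mode and archive destination
ARCHIVE LOG LIST
EOF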
Another point: if your online redo log files are damaged, your database has a big problem, and in your case you might lose a day's worth of work. Is that OK? If not, reduce the size of the redo log files to a point where a log switch happens every now and then. I am sure your company has an idea of how much transaction loss it can accept. Many companies allow less than one hour of transaction loss.
I am using OpenShift 3 Pro to run an Elasticsearch server (not the full ELK stack).
To do this I am using this image:
https://github.com/lbischof/openshift3-elk
I'm only using the Elasticsearch part.
After installing, I am using elasticdump to add data from another server.
The process is very long and crashes multiple times. During the dump, the pod is always using all of its 512Mi memory quota.
How can I allow 1024 or 2048 Mi for my Elasticsearch pod?
You can change the resource limits by going to the deployment config in the web console and, from the drop-down menu on the right side, selecting 'Edit Resource Limits'. You will first need to ensure your Pro account has enough memory associated with it.
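The same change can be made from the CLI; a sketch, assuming the deployment config is named elasticsearch (adjust the name and sizes to your setup):

oc set resources dc/elasticsearch --requests=memory=1Gi --limits=memory=2Gi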
So, I have a Java app on Heroku that uses the RedisCloud add-on.
The add-on clearly states that the free version comes with a maximum of 30 connections:
The problem is that I'm getting this error:
ERR max number of clients reached
So the first thing I did, obviously, was check the RedisCloud monitor, and to my surprise it shows a limit of 10 connections:
The question:
Why are we getting a connection limit of 10 on RedisCloud when the limit on the Heroku addon says it should be 30?
It appears that your add-on is using an old version of the plan from before we launched our Bigger and Improved XXXL Free plan earlier this year.
The easiest way to resolve that is to use the Heroku Toolbelt and run the command:
heroku addons:upgrade rediscloud:30 -a <your app's name>
I am getting errors on my website and my website's inode count is over the limit. The hosting inode limit is 200,000, but my website's inode count is 909,496 and I can't even open phpMyAdmin. The hosting support asked me to remove unused files. How can I decrease the inode count, and which files are unused on a Magento-based website?
Usually this is an indicator that you need a more capable hosting provider.
The major places that Magento creates files during operation are in the var/ folder and your product image cache.
If you've never checked before, the following areas can accumulate a phenomenal amount of detritus. Using an FTP client, check these areas in your var/ folder (shell equivalents are sketched after the list):
Check that you don't have a bazillion session files in var/session; remove anything older than the current date.
Check that there isn't an excessive number of files in var/report; you might want to find out why Magento is generating them and fix the issue. Delete them all.
Logging will generate several huge files in var/log over time; delete them, then look at the new ones to find out what errors are being generated.
Imports and other operations can cause temporary files to accumulate in var/tmp; delete them. Also check var/import for old imports that can be deleted.
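A rough shell sketch of the same cleanup, assuming it is run from the Magento root; review each directory before deleting anything you might still need:

find var/session -name 'sess_*' -mtime +1 -delete    # sessions older than a day
rm -f var/report/*                                   # error reports
rm -f var/log/*.log                                  # log files
rm -rf var/tmp/*                                     # temporary files
ls var/import                                        # review old imports before removing them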
Stored database backups are kept in var/backup; manage them in the admin backend under System > Tools > Backups:
Download the latest database backups to a local workstation and delete all backups.
Magento uses a lot of caching to store information. The biggest will be the image cache if you have a large catalog; it will contain cached images from the beginning of time, plus lots of useless ones if you've deleted products over time. Using the admin backend, go into System > Cache Management (shell equivalents are sketched after these two actions):
Clear the Magento Cache.
Flush Catalog Images Cache.
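If the admin backend is too slow or unreachable because of the inode limit, roughly the same effect can be had from the shell, assuming a standard Magento 1 layout:

rm -rf var/cache/*                       # Magento cache storage
rm -rf media/catalog/product/cache/*     # catalog image cache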
Magento does not delete product images when you delete a product. In fact, Magento would be a prime candidate for appearing on one of those hoarder programs that were prevalent on TV for a while.
After you get the site working, consider installing ImageClean.
Hopefully this will have reduced your inode count enough to carry out the following operations. Before proceeding, make a couple of database backups and store them off the server!
The next step is to ask your hosting provider whether they include your database in that inode count. If they do, you are kind of stuck, as Magento uses InnoDB and they have likely, on the cheap, not set up MySQL with file-per-table, which is what would let you shrink the InnoDB files by optimizing each table. Ask them if they use file-per-table when they set up MySQL; if they don't know what it is, develop that sinking feeling in the pit of your stomach.
Some tables get excessively huge, especially if you haven't properly set up the Magento master cron job trigger in your cPanel and checked that log table cleaning is enabled in System > Configuration > Advanced > System > Log Cleaning. These tables are as follows:
'dataflow_batch_export',
'dataflow_batch_import',
'log_customer',
'log_quote',
'log_summary',
'log_summary_type',
'log_url',
'log_url_info',
'log_visitor',
'log_visitor_info',
'log_visitor_online',
'index_event',
'report_event',
'report_viewed_product_index',
'report_compared_product_index',
'catalog_compare_item',
'catalogindex_aggregation',
'catalogindex_aggregation_tag',
'catalogindex_aggregation_to_tag'
Magento has a built-in script to clean the logs. If running it crashes with a memory error because you've never set up the cron job and there's too much bloat to clean out, Crucial Web Host has a script that can be run to manually delete all log table contents, including the dataflow tables, which won't be cleaned out by the Magento log cleaning process. If you use dataflow import/export a lot, Nexcess has a script that can check the size of the dataflow tables and clear them as well.
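For reference, the built-in cleaner lives in shell/log.php, and the worst offenders can also be truncated directly in MySQL; a sketch, with dbuser and magento as placeholder credentials and database name, and remember these tables hold only logging and dataflow data, not catalog or order data:

php -f shell/log.php -- clean --days 1    # Magento's built-in log cleaner, run from the Magento root
mysql -u dbuser -p magento -e "TRUNCATE TABLE log_customer; TRUNCATE TABLE log_visitor; TRUNCATE TABLE log_visitor_info; TRUNCATE TABLE log_url; TRUNCATE TABLE log_url_info; TRUNCATE TABLE dataflow_batch_export; TRUNCATE TABLE dataflow_batch_import;"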
After cleaning the database, you will need to use phpMyAdmin to optimize each table in your Magento database. If the hosting provider hasn't set up file-per-table in MySQL, this will do squat for reducing your inode count.
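If you can't get into phpMyAdmin, mysqlcheck can optimize every table in one go from the shell; dbuser and magento are placeholders again:

mysqlcheck --optimize -u dbuser -p magento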
After all that, don't bother messing with deleting application files or anything else Magento uses. It doesn't really accumulate much outside the var/ folders and the image cache, and you will likely end up with a dead website.
At this point, you're at the mercy of a shared hosting plan that has decided to be fair to everyone by limiting what can be done in each account, and that doesn't allow enough resources to run Magento. Start looking for a hosting provider that supports Magento; often they don't bother limiting your inode count (a cheap trick to let too many people share a hard drive) and they offer plenty of disk space for you to run your e-commerce website.
Folks,
I'm trying to set up a regular backup of a rather large production database (half a gig) that has both InnoDB and MyISAM tables. I've been using mysqldump so far, but it's taking increasingly long, and the server is completely unresponsive while mysqldump is running.
I wanted to ask for your advice: how do I either
Make the mysqldump backup non-blocking - assign a low priority to the process or something like that, OR
Find another backup mechanism that will be better/faster/non-blocking.
I know of the existence of the MySQL Enterprise Backup product (http://www.mysql.com/products/enterprise/backup.html) - it's expensive and not an option for this project.
I've read about setting up a second server as a "replication slave", but that's not an option for me either (it requires hardware, which costs $$).
Thank you!
UPDATE: more info on my environment: Ubuntu, latest LAMPP, Amazon EC2.
If replication to a slave isn't an option, you could leverage the filesystem, depending on the OS you're using:
Consistent backup with Linux Logical Volume Manager (LVM) snapshots.
MySQL backups using ZFS snapshots.
The joys of backing up MySQL with ZFS...
I've used ZFS snapshots on a fairly large MySQL database (30GB+) as a backup method; they complete very quickly (never more than a few minutes) and don't block. You can then mount the snapshot somewhere else and back it up to tape, etc.
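For the LVM route, here is a minimal sketch, assuming the MySQL datadir lives on an LVM volume at /dev/vg0/mysql-data and the commands run as root; the key point is that the read lock must be held in the same client session while the snapshot is created, which the mysql client's system command makes possible:

mysql -u root -p <<'SQL'
FLUSH TABLES WITH READ LOCK;
system lvcreate --size 1G --snapshot --name mysql-snap /dev/vg0/mysql-data
UNLOCK TABLES;
SQL

mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap                  # mount the frozen copy
tar czf /backup/mysql-$(date +%F).tar.gz -C /mnt/mysql-snap .    # copy it off
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap                                  # drop the snapshot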
Edit: (my previous answer suggested a slave DB to back up from, then I noticed Alex ruled that out in his question.)
There's no reason your replication slave can't run on the same hardware, assuming the hardware can keep up. Grab a source tarball, run ./configure --prefix=/dbslave; make; make install, and you'll have a second MySQL server living completely under /dbslave.
EDIT2: Replication has a bunch of other benefits as well. For instance, with replication running, you may be able to recover the binlog and replay it on top of your last backup to recover the extra data after certain kinds of catastrophes.
EDIT3: You mention you're running on EC2. Another, somewhat contrived, idea to keep costs down is to set up another instance with an EBS volume. Then use the AWS API to spin that instance up just long enough for it to catch up with writes from the binary log, dump/compress/send the snapshot, and then spin it down. Not free, and labor-intensive to set up, but considerably cheaper than running the instance 24x7.
Try the mk-parallel-dump utility from Maatkit (http://www.maatkit.org/).
Something you might consider is using binary logs here, through a method called 'log shipping'. Just before every backup, issue a command to flush the binary logs, and then you can copy all except the current binary log out via your regular file system operations.
The advantage of this method is that you're not locking up the database at all; when MySQL opens the next binary log in sequence, it releases all the file locks on the prior logs, so processing shouldn't be affected. Tar them, zip them in place, do as you please, then copy them out as one file to your backup system.
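A small shell sketch of that flush-and-copy step, assuming the binary logs are named mysql-bin.* under /var/lib/mysql and a backup host reachable over SSH; all of these names are placeholders:

mysqladmin -u root -p flush-logs                   # close the current binary log and start a new one
CURRENT=$(mysql -u root -p -N -e "SHOW MASTER STATUS" | awk '{print $1}')
for f in /var/lib/mysql/mysql-bin.[0-9]*; do       # ship everything except the log now being written
    [ "$(basename "$f")" = "$CURRENT" ] || rsync -a "$f" backuphost:/backups/binlogs/
done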
Another advantage of using binary logs is that you can restore up to point X in time if the logs are available. For example, you have last year's full backup and every log from then to now, but you want to see what the database looked like on Jan 1st, 2011. You can issue a restore 'until 2011-01-01' and when it stops, you're at Jan 1st, 2011 as far as the database is concerned.
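The replay itself is done with mysqlbinlog after restoring the full backup; a sketch, with the paths and date as placeholders:

mysqlbinlog --stop-datetime="2011-01-01 00:00:00" /backups/binlogs/mysql-bin.[0-9]* | mysql -u root -p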
I've had to use this once to reverse the damage a hacker caused.
It is definitely worth checking out.
Please note... binary logs are USUALLY used for replication. Nothing says you HAVE to.
Adding to what Rich Adams and timdev have already suggested, write a cron job that is triggered during a low-usage period to perform the slave backup task as suggested, to avoid high CPU utilization.
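A crontab sketch, assuming the backup steps live in a script at /usr/local/bin/mysql_backup.sh (a placeholder) and that 03:00 is a quiet time for the site:

# m h dom mon dow  command
0 3 * * * /usr/local/bin/mysql_backup.sh >> /var/log/mysql_backup.log 2>&1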
Check out mk-parallel-dump as well.