How to temporarily stop activity on a Heroku server to prevent getting billed - heroku

I have a Heroku server with about $250.00 worth of monthly add-ons (due to upgraded Heroku Postgres and Heroku Redis plans). I won't be using the server for the foreseeable future, but I would like to be able to boot it back up at a later date with the same configuration.
Is there a way to temporarily halt all server functionality to prevent myself from getting billed, with the possibility of rebooting the server at a later date?

Well, you can step the dynos down to the hobby-dev tier if you have fewer than two process types, or you can simply shut them down. Just go to https://dashboard.heroku.com/, click on your app, and then go to the 'Resources' tab to control the dynos.
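If you would rather script it, the same scale-to-zero can be done through the Heroku Platform API's formation endpoint. A minimal sketch in Python (the app name and token are placeholders; you can get a token with `heroku auth:token`):

```python
import requests

APP = "my-app"  # placeholder app name
TOKEN = "..."   # placeholder API token (see `heroku auth:token`)

headers = {
    "Accept": "application/vnd.heroku+json; version=3",
    "Authorization": f"Bearer {TOKEN}",
}

# List the app's process types, then scale each one to zero dynos.
formation = requests.get(
    f"https://api.heroku.com/apps/{APP}/formation", headers=headers
).json()

for proc in formation:
    requests.patch(
        f"https://api.heroku.com/apps/{APP}/formation/{proc['type']}",
        headers=headers,
        json={"quantity": 0},
    )
```

Scaling back up later is the same PATCH call with the old quantity.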
Stepping heroku-redis down should be easy too; it's temporary storage anyway, which you can restart or scale back up later.
The only sticking point might be your Postgres DB. If it has more than 10,000 rows, you'll have to pay at least $9 per month, and if you have more than 1 million rows in the DB, you'll have to pay at least $50 per month. DBs often accumulate a lot of log data, so consider cleaning and compacting the data if that's possible. Or you can take a local database dump, decommission the DB, and upload the dump again when you decide to restart the app (this is a bit of an extreme step, though, so be doubly sure that you have everything backed up).
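For the dump-and-decommission route, here is a minimal sketch driving pg_dump/pg_restore from Python (it assumes the Postgres client tools are installed locally; the connection URL is a placeholder - the real one comes from `heroku config:get DATABASE_URL`):

```python
import subprocess

# Placeholder URL; fetch the real one with `heroku config:get DATABASE_URL`.
DATABASE_URL = "postgres://user:password@host:5432/dbname"

# Custom-format dump so it can be restored selectively with pg_restore.
subprocess.run(
    ["pg_dump", "--format=custom", "--file=app_backup.dump", DATABASE_URL],
    check=True,
)

# Later, restore into the fresh database of the re-provisioned add-on:
# subprocess.run(
#     ["pg_restore", "--no-owner", "--dbname", NEW_DATABASE_URL,
#      "app_backup.dump"],
#     check=True,
# )
```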

Related

Polling database after every 'n' seconds vs CQN Continuous Query Notification - Oracle

My application currently polls the database every n seconds to see if there are any new records.
To reduce the network round trips and CPU cycles spent on this polling, I was thinking of replacing it with a CQN-based approach, where the database itself notifies the subscribed application whenever there is a commit to the database.
The only problem is: what if Oracle is NOT able to notify the application due to a connection issue between Oracle and the subscribed application, or if the application crashed or was killed for any reason? ... Is there a way to know if the application has missed any CQN notifications?
Is polling the database from application code itself the only way for mission-critical applications?
You didn't say whether every 'n' seconds means you're expecting data every few seconds, or whether you just need your "staleness" to be as low as that. That has an impact on the choice of CQN because, as per the docs, https://docs.oracle.com/en/database/oracle/oracle-database/12.2/adfns/cqn.html#GUID-98FB4276-0827-4A50-9506-E5C1CA0B7778
"Good candidates for CQN are applications that cache the result sets of queries on infrequently changed objects in the middle tier, to avoid network round trips to the database. These applications can use CQN to register the queries to be cached. When such an application receives a notification, it can refresh its cache by rerunning the registered queries"
However, you have control over how persistent you want the notifications to be:
"Reliable Option:
By default, a CQN registration is stored in shared memory. To store it in a persistent database queue instead—that is, to generate reliable notifications—specify QOS_RELIABLE in the QOSFLAGS attribute of the CQ_NOTIFICATION$_REG_INFO object.
The advantage of reliable notifications is that if the database fails after generating them, it can still deliver them after it restarts. In an Oracle RAC environment, a surviving database instance can deliver them.
The disadvantage of reliable notifications is that they have higher CPU and I/O costs than default notifications do."
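In client drivers this surfaces as a QoS flag on the subscription. As an illustration, here is a minimal sketch using the cx_Oracle Python driver's CQN support (the connection string and the query are placeholders); SUBSCR_QOS_RELIABLE corresponds to the reliable option quoted above:

```python
import time
import cx_Oracle

def on_change(message):
    # Invoked by the driver when the database delivers a change notification.
    print("change notification:", message.type)

# events=True is required for the connection to receive CQN notifications.
conn = cx_Oracle.connect("user/password@host/service", events=True)

# QOS_RELIABLE asks the database to persist notifications in a queue so they
# survive an instance failure; QOS_ROWIDS includes the affected rowids.
sub = conn.subscribe(
    namespace=cx_Oracle.SUBSCR_NAMESPACE_DBCHANGE,
    callback=on_change,
    qos=cx_Oracle.SUBSCR_QOS_RELIABLE | cx_Oracle.SUBSCR_QOS_ROWIDS,
)
sub.registerquery("SELECT id FROM orders")  # placeholder query to watch

while True:
    time.sleep(1)  # keep the process alive so callbacks can arrive
```

Note that even reliable notifications don't cover the case where the client itself was down, so a mission-critical consumer would typically still reconcile against the tables on startup.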

Azure Logic Apps interferes with SQL Server operations - causes time-outs in node-red which inserts messages from IoT device into SQL DB

I have a database in Azure SQL Server which stores messages from IoT devices. Those devices send periodic messages to a listener (set up, along with a lot of logic applied before the data ends up in the SQL DB, in node-red). Everything has worked well for a couple of weeks now.
Enter Logic Apps. I have a simple trigger that executes two stored procedures on a schedule: one issues a major update on a table that is also used by a trigger (on a SQL table) that fires when node-red inserts data; the second inserts data into a table feeding a PowerBI dashboard. Total runtime for the two executions is less than 20 seconds.
The moment this trigger is enabled, node-red starts experiencing time-outs when connecting to the server. And I mean that literally: for 3 days there were no issues; I enabled the Logic Apps trigger and I can see incomplete data saved in the DB. It doesn't even have to run - just being enabled causes issues.
It was disabled last week due to the same issues, though we didn't know at the time that this was the cause. We assumed neighbors were crowding us and the tier was too low on that DB. But we forked out some cash and upgraded. Yet the problem is back. And I can definitely state this is the ONLY change that was made on Azure in regard to that setup in 3 days.
I am kind of stumped - not sure what is happening, so not sure how to fix it except by disabling the trigger again. What is going on?
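One way to see what is actually happening is to look for blocking while node-red is timing out. A minimal diagnostic sketch with pyodbc (driver name, server, and credentials are placeholders), querying the DMVs for requests that are currently blocked:

```python
import pyodbc

# Placeholder connection details; point this at the same Azure SQL database.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=yourserver.database.windows.net;DATABASE=yourdb;"
    "UID=youruser;PWD=yourpassword"
)

# Requests currently blocked, who is blocking them, and what they're running.
rows = conn.execute("""
    SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time,
           t.text AS sql_text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0
""").fetchall()

for row in rows:
    print(row.session_id, "blocked by", row.blocking_session_id,
          row.wait_type, (row.sql_text or "")[:120])
```

If the stored procedures' big update shows up as the blocker, shortening its transaction (or batching the update) is usually a better fix than disabling the trigger.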

Developer copy of Oracle DB

Here is the problem - I have to use a remote DB for a few hours a day, and the VPN we use (for unknown reasons) drops the Oracle connection several times an hour, which is really annoying and time-consuming.
The sysadmin who manages both the Sonic VPN and the DB can't help.
So I am thinking of placing a DB copy locally.
What I need/don't need:
all changes on the remote DB (the master) should propagate quite easily to the copy (automatically or manually - I don't mind, as long as it's a one-button push); they are rare - once a day at most
my changes to the local DB should not be propagated to the master (but I am flexible here)
I don't have to spend more than 5 minutes a day maintaining this
it would be nice to replicate only DDL from the master (I don't need the actual data changes, only table changes)
Is there some sort of replication, or any other solution, I can use to achieve this?
Database replication isn't cheap. Your company will pay more to build a replication environment, starting with the Oracle edition and license, plus many extras.
Replication will also increase the complexity of database administration.
Finally, and most importantly, replication would have to run over your VPN environment :) (which is disconnected all the time), so it would fail all the time.
What you can do with the network team:
Review the service level agreement (SLA) of the VPN contract with the service provider to learn the expected downtime percentage and the quality of service.
Have the network administrator monitor the network to spot where the problem is - maybe the line, router, network configuration, or network card.
Take some measurements: what is the size of your transactions per minute (in bytes), so you can select the best speed from the network service provider?
Measure network bandwidth using iperf; for reference: https://blogs.oracle.com/mandalika/entry/measuring_network_bandwidth_using_iperf
Perform a network performance test.
If the changes happen only once a day, your best and easiest solution would be to do a full backup of the master DB, then zip it, FTP/email it, and unzip + restore on your end. But this won't be feasible if the DB size is too large.
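Given that you only need the DDL, a lighter alternative is to skip the data entirely and pull the table definitions from the master with DBMS_METADATA, then replay them locally. A minimal sketch with cx_Oracle (schema name and connection string are placeholders; Oracle's Data Pump can achieve something similar with expdp CONTENT=METADATA_ONLY):

```python
import cx_Oracle

SCHEMA = "SCOTT"  # placeholder schema owner

conn = cx_Oracle.connect("user/password@remote-host/service")  # placeholder
cur = conn.cursor()

# Every table in the schema...
cur.execute("SELECT table_name FROM all_tables WHERE owner = :o", o=SCHEMA)
tables = [row[0] for row in cur.fetchall()]

# ...and its CREATE TABLE statement, written out as a replayable script.
with open("schema_ddl.sql", "w") as f:
    for t in tables:
        cur.execute(
            "SELECT DBMS_METADATA.GET_DDL('TABLE', :t, :o) FROM dual",
            t=t, o=SCHEMA,
        )
        ddl = cur.fetchone()[0].read()  # GET_DDL returns a CLOB
        f.write(ddl + ";\n\n")
```

Run the resulting schema_ddl.sql against the local copy (e.g. Oracle XE) whenever the master's tables change; since that's once a day at most, it stays within your 5-minutes-a-day budget.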

Meteor 100% uptime considering sticky sessions

I've been working with Meteor for some time and I'm considering using it for multiple large-scale projects. I love Meteor and I really want to push its adoption in our company, but I have one last reservation before I do so: sticky sessions and what they mean for 100% uptime.
My requirement is 100% uptime for all of our sites. Hot code pushes obviously solve the problem of pushing new features, updates, and bug fixes. However, if a server needs to be taken down for maintenance, then all my active users are going to lose their sessions (something I can't let happen).
I was hoping someone might have some insight into the problem and what they've done to overcome it, or whether there's a possible strategy for migrating users from one server to another (session replication), thus preventing users from being kicked.
The reason I ask is that the publish cursor keeps track of whatever collections the client may have, so if the server disconnects and the client connection is directed to another server (because it's behind a load balancer), that server will not have any idea of what is out of sync on the client, which creates strange behaviour.

How does Heroku dyno caching work with the Play framework

I have a Play application hosted on Heroku with 5 dynos. It seems like my dynos have been restarted randomly at different times. For example, three of them restarted by themselves 22 hours ago, and two of them restarted 10 hours ago (not sure if that one was triggered by clearing the cache). It seems that cached data isn't shared between dynos. My problem is that when I send the same request to my Heroku application multiple times, I get different cached responses: some contain the most up-to-date data, others contain old data. I assume this is because my requests were processed by different dynos. Restarting all my dynos fixed the problem (I assume this also cleared the cache in all dynos).
So I want to know: what triggers the random dyno restarts, and why?
How do I solve the cached-data inconsistency in this case?
Thanks
I think you should use a shared (mutualised) cache in order to avoid this kind of problem when you scale horizontally.
Couchbase is a good solution for this. We use it internally at Clever Cloud (http://www.clever-cloud.com/en/), which is why we released Couchbase as a service.
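The pattern works with any store that all dynos share. As an illustration (Redis is used here as a stand-in for Couchbase, and build_report is a hypothetical expensive computation), a minimal sketch of a shared cache in Python:

```python
import json
import os
import redis

# Every dyno connects to the same external cache; the URL comes from the
# add-on's config var, so all dynos see one store instead of per-dyno memory.
cache = redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379"))

def build_report(report_id):
    # Hypothetical placeholder for the expensive work being cached.
    return {"id": report_id, "rows": []}

def get_report(report_id):
    key = f"report:{report_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # any dyno gets the same cached entry
    data = build_report(report_id)
    cache.set(key, json.dumps(data), ex=300)  # shared 5-minute TTL
    return data
```

Because the entries live outside the dynos, repeated requests return consistent data no matter which dyno handles them, and dyno restarts no longer wipe the cache.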
As for dyno restarts, did you check the documentation? Dynos are cycled at least once per day.
