Yesterday, I made some migrations to a website that I had to roll back. Luckily, I had a backup of the database and was able to restore the lead database to a "good" state using Heroku's pg:backups:restore facility.
The lead database is followed by another database. Does the follower also get "restored" when I restore the lead? Will it contain the same data as the leader?
You can't roll back an existing database. When you use the rollback functionality you're actually forking the targeted database, thereby creating an entirely new database without any followers. If you need to do this for your primary database, you'll need to put the application into maintenance mode before creating the rollback database, promote the new database to primary, and then recreate any followers.
I have set up Data Guard on two separate servers (primary and standby).
All the steps have been completed and when I make a change in the primary database and commit, it is also applied to the standby server.
Now I want the changes to be applied on the standby server without my having to commit them.
For example, if a record is inserted in the primary database table, that record will also be inserted in the standby database table and there is no need to commit.
I have not found a solution.
Let's put aside the standby for a second. If you make a change on a database and commit it, then the change is there permanently. If you do not commit, it can be considered as never having happened (i.e., you rolled it back) or as not yet having happened (the transaction is still open).
Having a standby or not does not impact this fundamental premise.
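A minimal sketch of that premise on the primary, assuming a hypothetical table T with a single NUMBER column ID:

INSERT INTO t (id) VALUES (1);
-- Uncommitted: visible only to this session; never exposed as committed data anywhere.
COMMIT;
-- Permanent: Data Guard will apply this change on the standby as well.
INSERT INTO t (id) VALUES (2);
ROLLBACK;
-- As if it never happened: neither the primary nor the standby keeps this row.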
I have a use case where I need to block access to all objects in a schema temporarily while I perform some massive changes. I plan to perform the changes as the schema owner. Once I am done I want to re-enable access. I am currently exploring two options and would like to know your thoughts as to which one works better:
Lock all accounts that go against the database objects in the target schema.
Revoke grants on the database objects, thereby preventing external users from using them.
Is there a better way? I want the process to be as smooth as possible and to ensure that no one is able to get to the target schema while the change is going on.
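Concretely (the user, role, and object names below are made up for illustration), the two options would look something like this:

-- Option 1: lock the application accounts for the duration of the change
ALTER USER app_user ACCOUNT LOCK;
-- ... perform the changes as the schema owner ...
ALTER USER app_user ACCOUNT UNLOCK;

-- Option 2: revoke the grants and re-grant afterwards
REVOKE SELECT, INSERT, UPDATE, DELETE ON target_schema.some_table FROM app_role;
-- ... perform the changes ...
GRANT SELECT, INSERT, UPDATE, DELETE ON target_schema.some_table TO app_role;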
Trigger. This trigger blocks everybody except users with the DBA role.
CREATE OR REPLACE TRIGGER logon_audit_trigger
AFTER LOGON
ON DATABASE
BEGIN
  -- Every new session hits this error and is disconnected.
  -- Users with the DBA role (more precisely, the ADMINISTER DATABASE TRIGGER
  -- privilege) are not blocked, because logon-trigger errors are ignored for them.
  raise_application_error (-20001, 'You cannot login.');
END;
If you want to know who is trying to log in and from where, you can get this information from SYS_CONTEXT.
SELECT SYS_CONTEXT ('USERENV', 'SESSION_USER')
FROM DUAL;
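Other USERENV attributes can be read in the same way to capture where the attempt is coming from, for example:

SELECT SYS_CONTEXT ('USERENV', 'SESSION_USER') AS who,
       SYS_CONTEXT ('USERENV', 'HOST')         AS client_host,
       SYS_CONTEXT ('USERENV', 'IP_ADDRESS')   AS client_ip
FROM DUAL;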
You could consider quiescing the database. The downside to locking out users or revoking permissions is that users will receive errors (you don't have access, you can't log in, etc.). Quiescing the database means that active sessions will finish their work, but then will hang until the database is un-quiesced. Then, you perform your modifications and are guaranteed that nothing can block your exclusive access to the objects you are updating. After your update (or even during your update, once you have the lock on the object in question), unquiesce the database.
Of course, the downside to this approach is that it applies to the entire database instead of just one schema. The advantage is that your users won't experience any error messages, and if you turn your DML into DDL (as described below) to greatly speed up the downtime window, the majority of your users shouldn't experience much more than a few seconds of inactivity.
There is a good write-up on quiescing the database at Oracle FAQ. You would have to get your DBAs involved both to quiesce the database and to put your changes live, as only SYSTEM or SYS can perform this operation.
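A minimal sketch of that cycle (run as SYS or SYSTEM; note that QUIESCE RESTRICTED generally requires the Resource Manager to have been active since instance startup):

ALTER SYSTEM QUIESCE RESTRICTED;
-- Active non-DBA sessions finish their current statements; new activity simply waits.
-- ... perform the schema changes as the schema owner ...
ALTER SYSTEM UNQUIESCE;
-- Waiting sessions carry on as if nothing happened; no errors are raised.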
For DML, you could consider creating a new table with the data that you want before the downtime window starts. Then when the downtime window starts, rename the old table out of the way, rename the new table to the old name, and recreate the permissions, for a much faster downtime window (since this effectively turns a DML update into DDL). Tom Kyte has a discussion of this approach here.
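A rough sketch of that swap, using a made-up ORDERS table whose replacement ORDERS_NEW was built, indexed and constrained before the window:

ALTER TABLE orders RENAME TO orders_old;
ALTER TABLE orders_new RENAME TO orders;
-- Grants stay attached to the original object, so re-grant on the new ORDERS:
GRANT SELECT, INSERT, UPDATE, DELETE ON orders TO app_role;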
Also, it goes without saying that proper testing in a testing environment of the above procedures should be done, which will iron out any gotchas in this process and give you a pretty good idea of how long the system will need to be quiesced for.
I have a JBoss 6 application running both EJB and Spring code (some legacy involved in this decision). It should communicate with Oracle and PostgreSQL databases, on demand.
JPA is the way DB operations are done, no direct JDBC is involved.
I would like to be able to "silence" database updates/deletes from my application without altering the business logic and without breaking the flow with any exceptions.
My current thoughts are:
Set the JDBC driver as read-only from the deployment descriptor - this works only with PostgreSQL (Oracle driver does not support this)
Make a read-only user on the RDBMS level - this might flood the application with errors
Make all transactions rollback instead of committing - is this possible?
Make entity manager never persist anything - set the FlushMode to MANUAL and make sure flush() never gets called - but commit() still flushes everything.
Is there any other concise approach to this?
If you want to make sure the application works as it does in production, work on a replica of the database. Use a scheduled job every night that overwrites the replica DB.
I also need to be able to activate or deactivate this behavior at runtime.
The solution I found (currently for a proof of concept) is:
create a new user and grant it rights on the default schema's tables;
as this user, create a view for each of the tables, with the same name (without the schema prefix);
create an INSTEAD OF trigger on each view that does nothing on insert, update, or delete (see the sketch after this list);
create a data source and persistence unit for this user;
inject two entity managers and, at runtime, use whichever one is needed.
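A minimal sketch of steps 2 and 3, with made-up names (APP_OWNER as the default schema, CUSTOMERS as one of its tables, and the view created in the new user's schema):

CREATE OR REPLACE VIEW customers AS SELECT * FROM app_owner.customers;

CREATE OR REPLACE TRIGGER customers_noop_trg
INSTEAD OF INSERT OR UPDATE OR DELETE ON customers
BEGIN
  NULL; -- silently swallow the DML; the application sees no exception
END;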
Thanks for your help!
I need to do a database migration from Oracle 11g to 12c. But I cannot do a direct export-and-import kind of migration, since a lot of schema changes are going to happen. I already have the column mappings
in a spreadsheet, with old columns and new columns and all details such as data type, constraints, etc.
There are new columns added to many tables, and the default values that should be populated are also known.
So what should be the best approach to do this migration?
There are several ways to do this. Start by getting a DBA involved.
To minimize production downtime, you could check whether making a logical standby database is feasible in your situation. In that case, make the target database a 12c one, which saves upgrade time. This target database is in sync with the source database at all times, which makes it very valuable. Clone the target database and use that clone to test the migration steps. If the migration fails, you can easily re-create a new clone to correct the migration process on.
Working in this way could even enable bi-directional replication, i.e., replication from the migrated database back to the source database, which could make it possible to revert to the original database in the unlikely event that, after production starts on the new database, things don't work as expected.
Start by adding a DBA to the project; a good DBA can help minimize downtime and reduce risk.
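Whichever route you take, the spreadsheet mappings eventually become plain DDL/DML scripts. For example, one of the new columns with a known default (table and column names here are hypothetical) might become:

ALTER TABLE customers ADD (loyalty_tier VARCHAR2(20) DEFAULT 'STANDARD' NOT NULL);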
Now that I have registered my sync services schema, how do I update it to my new model version?
I just found this in the docs:
"It is highly recommended that you register the schema periodically even if it does not change—for example, register the schema each time your application launches. However, if a schema changes, update it with caution because changing a schema may cause records to be deleted and cause some clients to slow sync."
We just need to re-register the schema.