Using materialised views to fix bugs and reduce code [closed] - oracle

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
The application I'm working on has a legacy problem: two tables, ADULT and CHILD, were created in an Oracle 11g DB.
This has led to a number of related tables that each carry both an ADULT field and a CHILD field, with no FK applied.
Bugs have arisen where poor development mapped relationships to the wrong field.
Our technical architect plans to merge the ADULT and CHILD tables into a new ADULT_CHILD table and create materialised views in place of the original tables. The plan is also to create a new id value and replace the id values in all associated tables, so that even if the PL/SQL/APEX code maps to the wrong field, the data mapping will still be correct.
The reasoning behind this solution is that it does not require changing any other code.
My opinion is that this is a fudge, but my background is more Java/.NET OO.
What arguments can I use to convince the architect that this is wrong and not a real solution? I'm concerned that we are creating a more complex solution and that performance will be an issue.
Thanks for any pointers.

While it may be a workable solution, it might also create new issues. If you really do need an MV that is up to date at all times, you need ON COMMIT refresh, and that in turn tends to make all updates sequential: every process writing to it waits in line for the one updating the table to commit. Note: the table, not the row.
So it is prudent to test the approach with realistic loads. Why does it have to become a single table? Could the tables not stay separate, with an FK added? If you need more control over the updates, rename the tables and put views with INSTEAD OF triggers in their place.
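The rename-plus-view alternative suggested above could be sketched roughly like this (all object and column names here are hypothetical, just to show the shape of the approach):

```sql
-- Keep the base table, rename it, and put an updatable view in front.
ALTER TABLE adult RENAME TO adult_base;

CREATE OR REPLACE VIEW adult AS
  SELECT adult_id, name FROM adult_base;

-- DML against the view is routed through an INSTEAD OF trigger,
-- which is where id remapping or validation could be added.
CREATE OR REPLACE TRIGGER adult_instead_ins
  INSTEAD OF INSERT ON adult
  FOR EACH ROW
BEGIN
  INSERT INTO adult_base (adult_id, name)
  VALUES (:NEW.adult_id, :NEW.name);
END;
/
```

Existing code that references ADULT keeps working, but the writes stay row-by-row rather than serialising on an ON COMMIT refresh.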

Related

Need simple definition for following [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I am looking for single-line definitions to understand the concepts and terms below.
I have referred to many sites, even the Oracle docs, but I can't understand these concepts or map them to real-time scenarios. Please help me understand.
Thanks in advance.
1) Normalization and its forms
2) Table-level locking and how to resolve it
3) Deadlocking and how to resolve it
4) Cube and Rollup
5) Table partitioning
1) Normalization and its forms
Normalization is a database design technique which organizes tables in
a manner that reduces redundancy and dependency of data. It divides
larger tables to smaller tables and links them using relationships.
There are several normal forms:
1NF (First Normal Form)
2NF (Second Normal Form)
3NF (Third Normal Form)
Boyce-Codd Normal Form (BCNF)
4NF (Fourth Normal Form)
5NF (Fifth Normal Form)
Refer to this document for more information on normalization.
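As a tiny illustration (table and column names are made up), normalization to 3NF moves repeated customer details out of the orders table and links them back with a key:

```sql
-- Unnormalized: customer details repeated on every order row.
CREATE TABLE orders_flat (
  order_id      NUMBER PRIMARY KEY,
  customer_name VARCHAR2(100),
  customer_city VARCHAR2(100),
  product       VARCHAR2(100)
);

-- 3NF: customer attributes stored once, referenced via a foreign key.
CREATE TABLE customer (
  customer_id NUMBER PRIMARY KEY,
  name        VARCHAR2(100),
  city        VARCHAR2(100)
);

CREATE TABLE orders (
  order_id    NUMBER PRIMARY KEY,
  customer_id NUMBER REFERENCES customer (customer_id),
  product     VARCHAR2(100)
);
```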
2) Table level locking and how to resolve it
According to the Oracle documentation, a transaction automatically acquires a table lock (TM lock) when a table is modified with the following statements: INSERT, UPDATE, DELETE, MERGE, and SELECT ... FOR UPDATE. These DML operations require table locks to reserve DML access to the table on behalf of a transaction and to prevent DDL operations that would conflict with the transaction.
Refer to this document for more information on table-level locking.
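For example (table name illustrative), both row-level DML and an explicit LOCK TABLE statement acquire a TM lock, which is released at the end of the transaction:

```sql
-- Takes row locks on the matched rows plus a TM lock on the table.
SELECT * FROM emp WHERE empno = 7369 FOR UPDATE;

-- Explicit table lock; with NOWAIT, Oracle raises ORA-00054
-- immediately instead of blocking if another session holds
-- a conflicting lock.
LOCK TABLE emp IN EXCLUSIVE MODE NOWAIT;

-- All locks are released on COMMIT or ROLLBACK.
ROLLBACK;
```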
3) Dead locking and how to resolve it
A deadlock occurs when two or more sessions are waiting for data
locked by each other, resulting in all the sessions being blocked.
Oracle automatically detects and resolves deadlocks by rolling back
the statement associated with the transaction that detects the
deadlock.
Refer to this document for more information on deadlocks.
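The classic two-session deadlock can be sketched like this (table and ids are illustrative; the statements run interleaved in two separate sessions):

```sql
-- Session 1:
UPDATE emp SET sal = 1000 WHERE empno = 7369;  -- locks row 7369

-- Session 2:
UPDATE emp SET sal = 2000 WHERE empno = 7499;  -- locks row 7499

-- Session 1:
UPDATE emp SET sal = 1000 WHERE empno = 7499;  -- blocks, waiting for session 2

-- Session 2:
UPDATE emp SET sal = 2000 WHERE empno = 7369;  -- waits for session 1: deadlock
-- Oracle detects it, raises ORA-00060 in one session, and rolls back
-- that statement. The usual prevention is to acquire locks in a
-- consistent order and keep transactions short.
```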
4) Cube and Rollup
ROLLUP :
In addition to the regular aggregation results we expect from the
GROUP BY clause, the ROLLUP extension produces group subtotals from
right to left and a grand total.
CUBE:
In addition to the subtotals generated by the ROLLUP extension, the
CUBE extension will generate subtotals for all combinations of the
dimensions specified.
Refer to this document for more information on ROLLUP and CUBE.
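The difference shows up directly in the GROUP BY clause (EMP is the usual demo table; column names illustrative):

```sql
-- ROLLUP: regular aggregates plus subtotals from right to left
-- (one per deptno) and a grand total.
SELECT deptno, job, SUM(sal) AS total_sal
FROM   emp
GROUP  BY ROLLUP (deptno, job);

-- CUBE: subtotals for every combination of the dimensions,
-- i.e. per deptno, per job, and the grand total.
SELECT deptno, job, SUM(sal) AS total_sal
FROM   emp
GROUP  BY CUBE (deptno, job);
```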
5) Table partition
Partitioning allows a table, index, or index-organized table to be
subdivided into smaller pieces, where each piece of such a database
object is called a partition. Each partition has its own name, and may
optionally have its own storage characteristics.
There are several types of partitioning:
- Range Partitioning Tables
- Hash Partitioning Tables
- Composite Partitioning Tables
Refer to this document for more information on table partitioning.
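A minimal range-partitioning example (table and partition names are made up):

```sql
-- Rows are routed to a partition based on sale_date; each partition
-- can be managed (dropped, moved, compressed) independently.
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2015 VALUES LESS THAN (DATE '2016-01-01'),
  PARTITION p2016 VALUES LESS THAN (DATE '2017-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
```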
Cheers!!

Unit testing of PL/SQL [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I would like to ask whether you write any unit tests for your database, and if so, what your experiences are.
Are the tests worth the effort? Do you test only high-level procedures, or functions as well? What are the best practices?
Testing best practices for PL/SQL, or any DB for that matter:
Software 101: the earlier you catch a bug, the less expensive it is to fix. By that adage, every piece of code going into production should be tested, and PL/SQL is no exception. Testing is always worth the effort - no ambiguity there.
Database testing should be done at two levels - for the data and about the data.
For the data - this covers metrics of the data loaded and of the process, e.g. define a sample data set and calculate the expected counts in the target tables after the test case is executed.
Secondly, performance test cases - these test the process, e.g. if you load a full production set, how long does that take? You don't want to uncover a performance issue in production.
About the data - this is more business-level testing: whether the data loaded matches the expected functionality, e.g. if you are aggregating sales reps to their parent companies, is the one-to-many relationship between company and sales rep still valid after you run the test case?
Always create a test query which results in a number, e.g. select the count of sales reps not associated with any company; if the count > 0, the test is a failure.
It's a good idea to put the test cases, their expected results, the test query, and the actual result in a table so that you can review them and slice and dice if required.
You can write a stored procedure to automate running the test queries from the table; this can be repeated very easily and even embedded in a batch job or a GUI screen.
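The table-driven approach described above could be sketched like this (all names and the schema are hypothetical; real frameworks such as utPLSQL offer much more):

```sql
-- Each row holds one test; the query must return a single count,
-- where a count > 0 means failure.
CREATE TABLE test_case (
  test_id    NUMBER PRIMARY KEY,
  test_name  VARCHAR2(200),
  test_query VARCHAR2(4000),
  last_run   DATE,
  last_count NUMBER
);

INSERT INTO test_case (test_id, test_name, test_query)
VALUES (1, 'orphan sales reps',
        'SELECT COUNT(*) FROM sales_rep WHERE company_id IS NULL');

-- Runs every registered test and records the result.
CREATE OR REPLACE PROCEDURE run_tests IS
  l_count NUMBER;
BEGIN
  FOR t IN (SELECT test_id, test_query FROM test_case) LOOP
    EXECUTE IMMEDIATE t.test_query INTO l_count;
    UPDATE test_case
       SET last_run = SYSDATE, last_count = l_count
     WHERE test_id = t.test_id;
  END LOOP;
END;
/
```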

Create and Drop table everyday [duplicate]

This question already has answers here:
Creating tables dynamically at runtime
(5 answers)
Closed 8 years ago.
My customer wants to create a new table every day, e.g. MyTable<yyyymmdd>, and then my application will read this table and do some cross-checks. After all the cross-checks are done, I need to drop the table. So a table will be created and dropped every day.
Is this a good solution? Could the database crash because of this?
Please advise.
I already offered another solution, where the customer just inserts the data into one table and I delete all the records once done. But the customer refused to use my solution.
This is a simple life-cycle issue. Use interval partitioning: easy to manage, good for performance.
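With interval partitioning (table and column names illustrative), each day's data lands in its own partition automatically, and a whole day can be dropped in one cheap DDL statement instead of dropping and recreating tables:

```sql
CREATE TABLE staging_data (
  load_date DATE,
  payload   VARCHAR2(4000)
)
PARTITION BY RANGE (load_date)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
( PARTITION p0 VALUES LESS THAN (DATE '2016-01-01') );

-- Discard one day's data without touching the rest of the table.
ALTER TABLE staging_data DROP PARTITION FOR (DATE '2016-01-05');
```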
There is no point in creating and dropping a table with the same columns every day.
If the columns in the table change every day, you can go ahead with the approach.
Your database will not crash because of this. Just make sure that no services are accessing the table when it is dropped.

Performance implications of using (DBMS_RLS) Oracle Row Level Security(RLS)? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
If we use Oracle Row Level Security (RLS) to hide some records, are there any performance implications? Will it slow down my SQL queries? The Oracle package for this is DBMS_RLS.
I plan to add IS_HISTORICAL=T/F to some tables and then use RLS to hide the records which have IS_HISTORICAL=T.
The SQL queries we use in the application are quite complex, with inner/outer joins, subqueries, correlated subqueries, etc.
Of the 200-odd tables, about 50 will have this RLS policy (hiding records with IS_HISTORICAL=T) applied to them. The remaining 150 tables are child tables of these 50, so RLS applies to them implicitly.
Any licence implications?
Thanks.
"Are there any Performance Implications - will it slow down my SQL Queries?"
As with all questions relating to performance the answer is, "it depends". RLS works by wrapping the controlled query in an outer query which applies the policy function as a WHERE clause...
select /*+ rls query */ * from (
select /*+ your query */ ... from t23
where whatever = 42 )
where rls_policy.function_t23 = 'true'
So the performance implications rest entirely on what goes in the function.
The normal way of doing these things is to use context namespaces. These are predefined areas of session memory accessed through the SYS_CONTEXT() function. As such, the cost of retrieving a stored value from a context is negligible. And as we would normally populate the namespaces once per session - say by an after-logon trigger or a similar connection hook - the overall cost per query is trivial. There are different ways of refreshing the namespace which might have performance implications, but again these are trivial in the overall scheme of things (see this other answer).
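A minimal sketch of such a policy (schema, table, and function names are hypothetical): the function returns the predicate text that Oracle appends as a WHERE clause. Here it is a constant, but it could equally compare against a value set once per session and read via SYS_CONTEXT:

```sql
-- Policy function: must accept schema and object name
-- and return the predicate as a string.
CREATE OR REPLACE FUNCTION hide_historical (
  p_schema IN VARCHAR2,
  p_object IN VARCHAR2
) RETURN VARCHAR2 IS
BEGIN
  RETURN q'[is_historical = 'F']';
END;
/

-- Attach the policy to one of the 50 tables.
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'APP',
    object_name     => 'ORDERS',
    policy_name     => 'orders_hide_hist',
    function_schema => 'APP',
    policy_function => 'hide_historical',
    statement_types => 'SELECT');
END;
/
```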
So the performance impact depends on what your function actually does. Which brings us to a consideration of your actual policy:
"this RLS Policy (to hide records by IS_HISTORICAL=T)"
The good news is the execution of such a function is unlikely to be costly in itself. The bad news is the performance may still be Teh Suck! anyway, if the ratio of live records to historical records is unfavourable. You will probably end up retrieving all the records and then filtering out the historical ones. The optimizer might push the RLS predicate into the main query but I think it's unlikely because of the way RLS works: it avoids revealing the criteria of the policy to the general gaze (which makes debugging RLS operations a real PITN).
Your users will pay the price of your poor design decision. It is much better to have journalling or history tables to store old records and keep only live data in the real tables. Retaining historical records alongside live ones is rarely a solution which scales.
"Any License implications?"
DBMS_RLS requires an Enterprise Edition license.

Why oracle does not have autoincrement feature for primary keys? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
Can someone enlighten me as to why Oracle does not support an autoincrement feature for primary keys?
I know the same feature can be achieved with the help of a sequence and a trigger, but why didn't Oracle introduce an AUTOINCREMENT keyword which would internally create the sequence and trigger? I bet the folks at Oracle have definitely thought about this, so there must be some reason for not providing this feature. Any thoughts?
It may just be terminology.
'AUTOINCREMENT' implies that record '103' will get created between records '102' and '104'. In clustered environments, that isn't necessarily the case for sequences. One node may insert '100', '101', '102' while the other node is inserting '110', '111', '112', so the records are 'out of order'. [Of course, the term 'sequence' has the same implication.]
If you choose not to follow the sequence model, then you introduce locking and serialization issues. Do you force an insert to wait for the commit/rollback of another insert before determining what the next value is, or do you accept that, if a transaction rolls back, you get gaps in the keys.
Then there's the issue of what you do if someone wants to insert a row into the table with a specific value for that field (i.e. is it allowed, or does it work like a DEFAULT) or if someone tries to update it. If someone inserts '101', does the autoincrement 'jump' to '102', or do you risk attempted duplicate values?
It can have implications for their IMP utilities and direct path writes and backwards compatibility.
I'm not saying it couldn't be done. But I suspect in the end someone has looked at it and decided that they can spend the development time better elsewhere.
Edit to add:
In Oracle 12.1, support for an IDENTITY column was added.
"The identity column will be assigned an increasing or decreasing integer value from a sequence generator for each subsequent INSERT statement. You can use the identity_options clause to configure the sequence generator."
https://docs.oracle.com/database/121/SQLRF/statements_7002.htm#CJAHJHJC
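In 12c syntax, an identity column looks like this (table and column names illustrative):

```sql
-- The IDENTITY clause creates and manages the backing sequence
-- automatically; no trigger is needed.
CREATE TABLE person (
  person_id NUMBER GENERATED ALWAYS AS IDENTITY,
  name      VARCHAR2(100)
);

-- person_id is assigned from the sequence generator on each insert.
INSERT INTO person (name) VALUES ('first');
```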
This has been a bone of contention for quite some time between the various DB camps. For a database system as polished and well-built as Oracle, it still stuns me that it requires so much code and effort to enable this commonly-used and valuable feature.
I recommend just putting some kind of incremental-primary-key builder/function/tool in your toolkit and have it handy for Oracle work. And write your congressman and tell him how bad they need to make this feature available from the GUI or using a single line of SQL!
Because it has sequences, which can do everything autoincrement does, and then some.
Many have complained of this, but the answer generally is that you can create one easily enough with a sequence and a trigger.
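The pre-12c sequence-and-trigger idiom that this answer refers to is short (object names are made up):

```sql
CREATE SEQUENCE person_seq;

-- Fills in the key only when the caller did not supply one.
CREATE OR REPLACE TRIGGER person_bir
  BEFORE INSERT ON person
  FOR EACH ROW
  WHEN (NEW.person_id IS NULL)
BEGIN
  :NEW.person_id := person_seq.NEXTVAL;
END;
/
```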
Sequences can easily get out of sync (someone inserts a record manually without updating the sequence). Oracle should have implemented this ages ago!
Sequences are easy to use, but not as easy as autoincrement (they require an extra bit of coding).
