Okay, a synonym means it points to an object in another database; it's not owned by the current database, it's just a mirror, and it's even case sensitive. If I grant SELECT, I can select from the other database; I understand that. But why can I not analyze statistics for this synonym? I need a technical explanation or an official document.
You must have a look at the Oracle documentation, which clearly specifies the following as a prerequisite to analyzing any object:
The schema object to be analyzed must be local, and it must be in your
own schema or you must have the ANALYZE ANY system privilege.
You should also read these restrictions from the same document:
Analyzing tables is subject to the following restrictions:
• You cannot use ANALYZE to collect statistics on data dictionary tables.
• You cannot use ANALYZE to collect statistics on an external table. Instead, you must use the DBMS_STATS package.
• You cannot use ANALYZE to collect default statistics on a temporary table. However, if you have already created an association between one or more columns of a temporary table and a user-defined statistics type, then you can use ANALYZE to collect the user-defined statistics on the temporary table.
• You cannot compute or estimate statistics for the following column types: REF column types, varrays, nested tables, LOB column types (LOB column types are not analyzed, they are skipped), LONG column types, or object types. However, if a statistics type is associated with such a column, then Oracle Database collects user-defined statistics.
I doubt you will find a document that enumerates all the things you cannot do. The doc for DBMS_STATS.GATHER_TABLE_STATS() is quite clear in that you specify a schema name and a table.
However, I'll see if I can find a technical reason why this may be disallowed.
There is no technical reason why we don't allow the collection of stats via a synonym. We just assume (since that's how it's specified) that the object is a table.
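As a workaround, you can resolve the synonym yourself and gather statistics directly on the underlying table with DBMS_STATS. A minimal sketch, assuming EMP_SYN is a synonym for SCOTT.EMP (both names hypothetical):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SCOTT',  -- owner of the underlying table, not the synonym
    tabname => 'EMP'     -- the base table the synonym points to
  );
END;
/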
If I understand you correctly, by "synonym" you are referring to a table in another database that is accessed via a database link. If that is the case, you cannot analyze remote objects over a database link (at least not as of 11g).
From the horse's mouth:
You cannot perform the following operations using database links:
• Grant privileges on remote objects
• Execute DESCRIBE operations on some remote objects. The following remote objects, however, do support DESCRIBE operations: tables, views, procedures, and functions
• Analyze remote objects
• Define or enforce referential integrity
• Grant roles to users in a remote database
• Obtain nondefault roles on a remote database. For example, if jane connects to the local database and executes a stored procedure that uses a fixed user link connecting as scott, jane receives scott's default roles on the remote database. Jane cannot issue SET ROLE to obtain a nondefault role.
• Execute hash query joins that use shared server connections
• Use a current user link without authentication through SSL, password, or NT native authentication
How do I know which users have been accessing a table that is in my schema?
Example: I have a table in Oracle, myschema.mytable, with a public synonym to it. There are other users in the database.
I would like to know which users, other than "myschema", have been accessing "mytable".
Thanks,
The only sure-fire way to know is to enable Database Auditing (Docs).
This would record every session that selected or read data from HR.EMPLOYEES:
AUDIT SELECT ON "HR"."EMPLOYEES"
BY SESSION
WHENEVER SUCCESSFUL;
Once this rule is set, you can start checking your audit trails - reports of who is doing what in terms of audited events, in this case looking at data in HR.EMPLOYEES.
You can simply query the DBA_AUDIT_OBJECT view.
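For example, a minimal sketch of such a query, assuming the audit on HR.EMPLOYEES above is in place:
SELECT username,
       timestamp,
       action_name,
       ses_actions
  FROM dba_audit_object
 WHERE owner = 'HR'
   AND obj_name = 'EMPLOYEES'
 ORDER BY timestamp DESC;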
Note that this feature does come with a cost: it increases the amount of work required of the database. For every session that looks at the data in EMPLOYEES, Oracle will have to record an entry in the audit trail.
If you want more granularity, you can record activity by access (BY ACCESS) instead of by session. That will cost even more.
Many people have built their own auditing systems with TRIGGERS, but all of them have drawbacks - mainly that you have to build and maintain the system.
I've only ever seen 100% complete auditing done successfully using this built-in feature. You just have to be prepared for the potential performance hit, and decide how often you want to clean up the audit trails.
And yes, SQL Developer has an interface for the database auditing feature.
Env: Oracle 11g DB with a Java-based application
We are looking to encrypt data in our database, for a few sensitive columns of a table.
We would like these columns to be decrypted and visible to a set of users A.
And we DO NOT want these encrypted columns to be visible to another set of users B.
But, this user set B should be able to see the rest of the non-encrypted columns of the table.
From various articles and posts, I understand that TDE does encryption and decryption transparently and at the column level, but I have not been able to find clear information on whether the above user/role-based encryption, at column-level granularity, is possible or not.
Can we achieve the above using TDE?
I'm not a DBA, but from my understanding of TDE, the encryption is not visible to any query. It only encrypts the data in the on-disk data file so it can't be read if dumped directly from the file.
A good DBA may have a better answer but just off the cuff, here is what I would suggest.
Have two fields for the sensitive data. One is clear (though TDE may be a good idea) and the other is obfuscated in some way. These fields may be normalized into a separate table. Don't allow access directly to the table but use a view instead. The view would be defined like:
create view TableName as
select ...,
       -- ROLE here is a placeholder for however you determine the
       -- current user's role (e.g. SYS_CONTEXT or a lookup table)
       case ROLE when 'A' then clear_field else obfuscated_field end as FieldName,
       ...
  from SensitiveTable
  join PossibleNormalizedTable on ... ;
You would also need INSTEAD OF triggers on the view to make it updatable. If only A can see that field in the clear, probably only A should be able to insert and update it, as in the sketch below.
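For instance, a minimal sketch of an INSTEAD OF INSERT trigger, reusing the hypothetical names from the view above (my_obfuscate is an assumed helper that produces the obfuscated copy):
create or replace trigger TableName_ins
instead of insert on TableName
for each row
begin
  insert into SensitiveTable (clear_field, obfuscated_field)
  values (:new.FieldName, my_obfuscate(:new.FieldName)); -- my_obfuscate is hypothetical
end;
/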
We have an application that generates some temporary tables and then processes the data. I don't really have control of the way the application creates these tables or of the subsequent queries involved. What we have noticed is that Oracle uses a full table scan instead of the index that is the primary key of the tables. If it used the primary key index, the process would run a whole lot faster.
Since I do not have control over the SELECT queries generated by the application, I cannot use hints to force Oracle to use the primary key index. Is there any other setting I could change somewhere that could force Oracle to use the primary key index for the temporary tables?
The two most common reasons for a query not using indexes are:
It's quicker to do a full table scan.
Poor statistics.
If your queries are selecting all of the table, or doing joins without mentioning a primary key in the WHERE clause, etc., chances are it's quicker to do a full scan. Without the query and indexes, and preferably an explain plan as well, it's impossible to tell for certain.
I would, however, recommend that you ask your DBA to re-gather (or, I suspect, gather for the first time) statistics on the table. Use dbms_stats.gather_table_stats with an estimate percentage of 25% or more.
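For example, a minimal sketch (the schema and table names are hypothetical):
begin
  dbms_stats.gather_table_stats(
    ownname          => 'APP_SCHEMA',   -- hypothetical schema
    tabname          => 'TEMP_RESULTS', -- hypothetical table
    estimate_percent => 25,
    cascade          => true            -- also gather statistics on the indexes
  );
end;
/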
If the tables are re-created each time the application is run, then try to gather statistics after creation and primary key generation. If they are truncated and re-filled each time, then ask your DBA to rebuild them and the PK, and then gather statistics, as this could significantly improve query performance.
With no control over anything I don't see how you can improve the query time any other way.
You can use hints without changing SQL by leveraging SQL Profiles. Wrap your hint(s) into a SQL Profile that takes effect for that particular SQL ID.
I understand you don't have control over the SQL; I have many apps where I encounter the same restriction. After checking the query structure and statistics as in Ben's post, and once you have proved that hinting to use the index will improve performance, why not try a manually created SQL Profile?
Christian Antognini has a great paper here about SQL Profiles and creating them manually. The paper mentions that creating SQL Profiles manually is undocumented. I would agree it is undocumented, but that doesn't necessarily mean unsupported. I would say there is little documentation out there, but if you want proof that Oracle allows manual creation, check the API or look at the coe_xfr_sql_profile.sql file in the SQLT utility directory.
I also posted a cheatsheet on how to quickly manually create a SQL Profile here.
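In the same spirit, a minimal sketch of manual creation via DBMS_SQLTUNE.IMPORT_SQL_PROFILE; the SQL text, hint, and profile name are all hypothetical:
declare
  l_hints sys.sqlprof_attr := sys.sqlprof_attr(
    'INDEX(@"SEL$1" "T"@"SEL$1" "T_PK")'  -- hypothetical hint forcing the PK index
  );
begin
  dbms_sqltune.import_sql_profile(
    sql_text    => 'SELECT * FROM t WHERE id = :1',  -- must match the application's SQL
    profile     => l_hints,
    name        => 'FORCE_PK_PROFILE',
    force_match => true  -- match regardless of literal values
  );
end;
/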
OK, the question is obviously wrong as it stands, but I'm wondering how I can choose storage implementations in Oracle as I would for MySQL; say I want one table to be MyISAM-like, another for archiving only, and one Black Hole style for test purposes. How would I go about doing this within a single schema, or something similar that would meet these needs?
Oracle does not have a storage engine concept like MySQL does. It stores all tables in its own format in datafiles. What you can do is use different tablespaces and store them on different disks whose performance characteristics may be different.
The concepts guide may help you understand how Oracle works.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/toc.htm
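For example, a minimal sketch; the tablespace names, file paths, and tables are hypothetical:
CREATE TABLESPACE fast_ts DATAFILE '/disk_fast/fast01.dbf' SIZE 1G;
CREATE TABLESPACE arch_ts DATAFILE '/disk_slow/arch01.dbf' SIZE 10G;

CREATE TABLE orders (
  order_id NUMBER PRIMARY KEY,
  placed   DATE
) TABLESPACE fast_ts;  -- hot, frequently accessed data

CREATE TABLE orders_history (
  order_id NUMBER,
  placed   DATE
) TABLESPACE arch_ts
  COMPRESS;            -- archive-style data, rarely touched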
You may use ORGANIZATION EXTERNAL:
CREATE TABLE ORGANIZATION EXTERNAL
and select an access driver to use with it.
As of now, Oracle has ORACLE_LOADER to access CSV and similar text files (read-only), and ORACLE_DATAPUMP to read and write binary data (in a custom format).
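A minimal sketch of a read-only external table using ORACLE_LOADER; the directory object, file name, and columns are hypothetical:
CREATE DIRECTORY ext_dir AS '/data/ext';

CREATE TABLE emp_ext (
  emp_id   NUMBER,
  emp_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.csv')
);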
When you create a Microsoft Access 2003 link to an Oracle table using Oracle's ODBC driver, you are sometimes asked to state which columns are the primary key(s).
I would like to know how to change that initial assignment, or even how to get Access/ODBC to forget the assignment. In my limited testing I wonder if the assignment isn't cached by the ODBC driver itself.
The columns I initially chose are not correct.
Update: I never did get a full answer on this one, deleting the links then restoring them didn't work. I think it's an obscure bug. I've moved on and haven't had to worry about this oddity since.
You must delete the link to the table and create a new one. When a table is linked all the connection info about the table's path, structure (including primary key), permissions, passwords and statistics are stored in the Access db. If any of those items change in the linked table, refreshing links won't automatically update it on the Access side because Access continues to use the previously stored info. You must delete or drop the linked table and recreate the link, storing the current connection information.
I don't know for sure if this next bit also applies to ODBC linked tables, but I suspect it does. For Jet tables, it's a good idea to periodically delete all links and recreate them to improve query performance: a linked table's statistics may have been gathered when the table had few records, and once the table is filled with many more records, fresh statistics will tell Jet's optimizer whether using indexes or a full table scan would be the better course of action when running a query.
Is it not possible to delete the link and then relink?