Env: Oracle 11g DB with a Java-based application
We are looking to encrypt data in our database, for a few sensitive columns of a table.
We would like these columns to be decrypted and visible to a set of users A.
And we DO NOT want these encrypted columns to be visible to another set of users B.
But, this user set B should be able to see the rest of the non-encrypted columns of the table.
From various articles and posts, I understand that TDE encrypts and decrypts transparently and can work at the column level, but I have not been able to find clear information on whether the user/role-based visibility described above, at column-level granularity, is possible.
Can we achieve the above using TDE?
I'm not a DBA, but from my understanding of TDE, the encryption is not visible to any query. It only encrypts the data in the on-disk data files so that it cannot be read if the files are dumped directly.
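For reference, column-level TDE is enabled per column, roughly like this (it assumes an Oracle wallet is already configured and open; the table and column names are just examples):
-- encrypts the column on disk only; any user with SELECT still sees clear text
alter table employees modify (salary encrypt using 'AES256');
Because every user with SELECT on the table still sees the decrypted values, TDE alone does not give you the user-A/user-B split you describe.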
A good DBA may have a better answer but just off the cuff, here is what I would suggest.
Have two fields for the sensitive data. One is clear (though TDE may be a good idea) and the other is obfuscated in some way. These fields may be normalized into a separate table. Don't allow access directly to the table but use a view instead. The view would be defined like:
create view TableName as
select ...,
       -- the role check is shown with SYS_CONTEXT as one concrete option; on 11g a
       -- check against SESSION_ROLES or a custom application context would also work
       case when sys_context('USERENV', 'SESSION_USER') = 'USER_A'
            then clear_field
            else obfuscated_field
       end as FieldName,
       ...
from SensitiveTable
join PossibleNormalizedTable on ... ;
You would also need INSTEAD OF triggers on the view to handle inserts and updates. If only A can see that field in the clear, then probably only A should be able to insert and update it.
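A rough sketch of such an INSTEAD OF trigger, assuming the view also exposes an ID column and that MY_OBFUSCATE is whatever obfuscation routine you choose (both are placeholders):
create or replace trigger TableName_ioi
instead of insert on TableName
for each row
begin
  -- user A supplies the clear value; store both representations
  insert into SensitiveTable (id, clear_field, obfuscated_field)
  values (:new.id, :new.FieldName, my_obfuscate(:new.FieldName));
end;
/
A similar trigger would be needed for updates, with a role check if only A may write the field.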
I have an Oracle database and need to periodically access some data from the v$database view in my SpringBoot app. v$database is a special view that has just one row and contains some internal DB constants and variables. Basically I would like to fetch CHECKPOINT_CHANGE# from that one row via a query such as:
SELECT CHECKPOINT_CHANGE# FROM v$database;
Normally, you would use the Spring Data repository abstraction along with @Query for your usual tables/views, and I'm sure that would work here as well. But considering that v$database is a bit "special" and only has one row, it seems like overkill.
How would you approach this?
There is nothing special about it as far as the user is concerned. It queries like any other table. Under the covers, it is an interface to the control file rather than a data segment, but that is only of interest on the back-end. Just don't try to ask for ROWID. And you will need the "SELECT ANY DICTIONARY" privilege to query it, as with all SYS-owned dictionary views.
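In practice, something along these lines is all that is needed (APP_USER is a placeholder for the application's schema):
-- run as a privileged user such as SYS or SYSTEM
grant select any dictionary to app_user;
-- the application can then issue the query like any other
select checkpoint_change# from v$database;
After the grant, the view can be queried from Spring like any other table, e.g. via JdbcTemplate or a @Query method.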
Okay, a synonym table means it points to another database; it's not owned by the current database, it's just a mirror, and it's even case sensitive. If I grant SELECT, I can select from the other database; I understand that. But why can I not analyze statistics for this synonym? I need a technical explanation or an official document.
You must have a look at the Oracle documentation, which clearly specifies the following as a prerequisite to ANALYZE any object:
The schema object to be analyzed must be local, and it must be in your
own schema or you must have the ANALYZE ANY system privilege.
Also, you must read these restrictions from the same documentation:
Analyzing tables is subject to the following restrictions:
•You cannot use ANALYZE to collect statistics on data dictionary tables.
•You cannot use ANALYZE to collect statistics on an external table. Instead, you must use the DBMS_STATS package.
•You cannot use ANALYZE to collect default statistics on a temporary table. However, if you have already created an association between one or more columns of a temporary table and a user-defined statistics type, then you can use ANALYZE to collect the user-defined statistics on the temporary table.
•You cannot compute or estimate statistics for the following column types: REF column types, varrays, nested tables, LOB column types (LOB column types are not analyzed, they are skipped), LONG column types, or object types. However, if a statistics type is associated with such a column, then Oracle Database collects user-defined statistics.
I doubt you will find a document that enumerates all the things you cannot do. The doc for DBMS_STATS.GATHER_TABLE_STATS() is quite clear in that you specify a schema name and a table.
However, I will see if I can find a technical reason why this may be disallowed.
There is no technical reason why we don't allow the collection of stats via a synonym. We just assume (since that's how it's specified) that the object is a table.
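For reference, a typical DBMS_STATS call therefore names the owning schema and the base table directly rather than a synonym (SCOTT and EMP are placeholders):
begin
  dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP');
end;
/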
If I understand you correctly, by "synonym" you are referring to a table in another database that is accessed via a database link. If that is the case, you cannot analyze remote objects over a database link (at least not as of 11g).
From the horse's mouth:
You cannot perform the following operations using database links:
•You cannot grant privileges on remote objects
•You cannot execute DESCRIBE operations on some remote objects. The following remote objects, however, do support DESCRIBE operations: tables, views, procedures, and functions.
•You cannot analyze remote objects
•You cannot define or enforce referential integrity
•You cannot grant roles to users in a remote database
•You cannot obtain nondefault roles on a remote database. For example, if jane connects to the local database and executes a stored procedure that uses a fixed user link connecting as scott, jane receives scott's default roles on the remote database. Jane cannot issue SET ROLE to obtain a nondefault role.
•You cannot execute hash query joins that use shared server connections
•You cannot use a current user link without authentication through SSL, password, or NT native authentication
What are the ways in which data can be encrypted? Take a salary column, for example: if possible, even the admin should not be able to see the encrypted columns. The data should be visible only through the application, to users whose access is defined in the application. Changes to the application (adding new functionality to encrypt/decrypt at the application level) would be a last resort and should be minimal.
So far I have thought of two ways; any fresh ideas, or pros and cons of the ones below, would be much appreciated:
1. Using Oracle TDE (transparent data encryption).
- Con : Admin can possibly grant himself rights to see the data
2. Creating a trigger to encrypt before insert, and something along the lines of a pipeline to retrieve (a rough sketch of the trigger idea follows below).
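Roughly what I have in mind for option 2, as a sketch only (the SALARY_CLEAR/SALARY_ENC columns and the GET_KEY function are placeholders, and key storage is exactly the part I have not solved):
-- requires EXECUTE on DBMS_CRYPTO; GET_KEY() is a hypothetical key-retrieval function
create or replace trigger emp_salary_encrypt
before insert or update of salary_clear on employees
for each row
begin
  :new.salary_enc := dbms_crypto.encrypt(
      src => utl_i18n.string_to_raw(to_char(:new.salary_clear), 'AL32UTF8'),
      typ => dbms_crypto.encrypt_aes256 + dbms_crypto.chain_cbc + dbms_crypto.pad_pkcs5,
      key => get_key());
  :new.salary_clear := null;  -- never persist the clear value
end;
/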
Oracle Database Vault is the only way to prevent a DBA from being able to access data stored in the database. That is an extra cost product, however, and it requires you to have an additional set of security admins whose job it is to grant the DBAs whatever privileges they actually need.
Barring that, you'd be looking at solutions that encrypt and decrypt the data in the application outside the database. That would involve making changes to the database structure (i.e. the salary column would be declared as a raw rather than a number). And it involves application changes to call the encryption and decryption routines. And that requires that you solve the key management problem which is generally where these sorts of solutions fail. Storing the encryption key somewhere that the application can retrieve it but somewhere that no admin can access is generally non-trivial. And then you need to ensure that the key is backed up and restored separately since the encrypted data in the database is useless without the key.
Most of the time, though, I'd tend to suggest that the right approach is to allow the DBA to see the data and audit the queries they run instead. If you see that one particular DBA is running queries for fun rather than occasionally looking at bits of data in the course of doing her job, you can take action at that point. Knowing that their queries are being audited is generally enough to keep the DBA from accessing data that she doesn't really need.
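If you go the auditing route, fine-grained auditing can capture exactly who selects the sensitive column; a minimal sketch, with placeholder schema, table, and policy names:
begin
  dbms_fga.add_policy(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'AUDIT_SALARY_READS',
    audit_column    => 'SALARY',
    statement_types => 'SELECT');
end;
/
Audited statements then show up in DBA_FGA_AUDIT_TRAIL.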
I'm assuming that it CAN be done, I just don't know how or haven't been able to find a way to do it.
In Oracle, can access to the ALL_* and USER_* views be revoked, including SELECT grants?
For example, the all_objects, all_procedures views. Can access to these be restricted?
The same goes for user_tables and user_procedures and any other user_* views. Can access to them be restricted?
No, you can't (reasonably) prevent users from being able to query the ALL_* or USER_* views. You could go through and individually revoke access to each and every one of these views from PUBLIC but that would be a rather painful effort to go through, it would cause all sorts of applications to break not to mention breaking scripts from Oracle. You would end up, at a minimum, re-granting access to those views to any account that connected to the database because every database API interrogates the data dictionary views. Whether you have an ODBC application, a JDBC application, a PL/SQL IDE like TOAD or SQL Developer, or just about anything else, those applications will need to query the data dictionary.
The data that they will see in either view, however, will be limited to the objects that they have access to (for the ALL_* views) or the objects that they own (for the USER_* views). What purpose would be served by restricting a user's ability to query the data dictionary to determine what objects the user owns or what objects the user has access to? It would seem extremely odd to want a user to own a table but then to not allow the user to know that he owned that table.
Now, if you are really determined, you can create objects in the user's schema (tables or views) named, say, ALL_TABLES or USER_TABLES. Assuming that the user just queries ALL_TABLES rather than specifying a fully qualified SYS.ALL_TABLES, they will be querying the local object not the data dictionary view. That is generally a very inadvisable thing to do-- lots of products work by querying the data dictionary so causing the data dictionary queries to return the wrong set of data can lead to all sorts of bugs that are terribly hard to track down. But it is an option if you really, really want to restrict what data is returned from the data dictionary.
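If you really do want to go down that road, the shadowing trick looks roughly like this (SOME_USER and the filter are purely illustrative, and this assumes the view owner's PUBLIC grant on the dictionary view suffices for view creation):
-- unqualified references to ALL_TABLES by SOME_USER now resolve to this local view
-- rather than to the public synonym for SYS.ALL_TABLES
create view some_user.all_tables as
  select * from sys.all_tables
   where owner <> 'HR';
Expect breakage in any tool that relies on the real dictionary contents.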
We have a need coming up in an application where the following is true:
A web page uses AJAX to request data from a server.
The specification of the data (e.g., table name) requested from the server will not be known until run-time.
The configuration of the data view is itself data-driven, and configurable by an administrator.
Data updates and inserts must be supported, not just views.
Prototyping this was very easy - we could pass in the appropriate information (table name, changeset, whatever) to a generic data service that just did what it was told (using JSON as the data storage mechanism). The data service could do basic validation on the parameters to ensure the current user can perform the requested operation (read the data, insert a row, read the row).
The issue we have, now that we are looking to do this in a secure, production-ready manner, is that the idea of passing table names and column names around is frightening. Everything we think of to deal with this devolves into trusting the client in some significant way, or seems to involve substantial bookkeeping on the server. For example:
User requests a viewing page.
The server notes the table name and saves it server side with a request ID
The server notes the column names and saves them, replacing them with "col1, col2", etc., and stores the mapping with the request ID data.
The client page sends the request ID to the service, which looks up the server storage by ID
The service returns col1, col2, etc.
This would work, we think, but feels very messy.
Does anyone have experience with this kind of problem and can offer a solution?
Do you need to give them access to raw tables?
Perhaps you can go meta, and make a meta-table that stores the tabular data in a secure manner (i.e., only the system knows the real table/schema; the user's concept of schema/table is just an abstraction that maps back to it)...
Again, more information is needed as to what can be abstracted. Allowing DDL operations by the end-users is asking for trouble, as you rightfully assessed, and I would just abstract that so that "DDL" becomes DML.
However, mapping actual SQL that is written against this data would be much more difficult to abstract, if that is a requirement.
If I had to expose back-end information to end customers, I'd probably hide the actual physical representation using meta-data that would remap table names and columns to more user-friendly text, that would also enable me to provide views on the tables that are a bit more advanced than plain table / column names... As properly modeling associations between tables and so on...
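A very rough sketch of that kind of mapping layer, purely as an illustration (all names are made up):
create table ui_entity (
  entity_id     number primary key,
  display_name  varchar2(100) not null,  -- the name the client sees and sends back
  table_name    varchar2(30)  not null   -- the physical table, never exposed to the client
);
create table ui_entity_column (
  entity_id     number references ui_entity,
  display_name  varchar2(100) not null,
  column_name   varchar2(30)  not null,
  readable      char(1) default 'Y',
  writable      char(1) default 'N'
);
The service resolves the client-supplied display names through these tables and builds the SQL itself, so no physical table or column name ever crosses the wire.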