I am giving a presentation about cryptography. My teacher told me to include the advantages and disadvantages of TDE and, especially, why you would use it instead of encrypting in the application with C#, for example. I couldn't find the real advantages of database encryption over encryption in a program.
Oracle Transparent Data Encryption specifically protects data at rest, when written into a datafile. It would not stop a database user with select privileges from seeing the data using SQL, and it allows the data to be used in all types of SQL constructs like joins and indexes.
Encrypting data in the application rather than the DB would prevent ad hoc SQL queries outside of the app from decrypting the data, and would make it impossible to use SQL (in the database or in the app) to search the data, join tables, build indexes, or do anything at all with the encrypted data outside of the hard-coded application. Application-level encryption could also interfere with data compression algorithms in the database or on the storage media.
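For example, with TDE tablespace encryption the only change is at the storage layer, and column-level TDE is a one-line change to the DDL. A sketch (untested; file paths and names are illustrative, and both forms require an open wallet/keystore):

    -- Everything written to this tablespace is encrypted on disk;
    -- SQL against tables stored in it is completely unchanged.
    CREATE TABLESPACE secure_ts
      DATAFILE '/u01/oradata/orcl/secure01.dbf' SIZE 100M
      ENCRYPTION USING 'AES256'
      DEFAULT STORAGE (ENCRYPT);

    -- Or encrypt just one column instead (column-level TDE):
    CREATE TABLE employees (
      empno  NUMBER PRIMARY KEY,
      salary NUMBER(8,2) ENCRYPT USING 'AES256'
    );

Either way the application keeps issuing plain SQL; the encryption and decryption happen transparently below it.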
I'm trying to understand whether there is any concept like this in Oracle Database.
Let's say I have two databases, Database_A and Database_B.
Database_A has schema_A; is there a way I can attach this schema to Database_B?
What I mean is: if there is a job populating TABLE_A in schema_A, I want to see that as a read-only view in Database_B. We are trying to split a big Oracle database into two smaller databases, have a vast amount of PL/SQL code, and are trying to minimize the refactoring here.
Sharding might be what you're looking for. The schemas and tables will still logically exist on all databases, but you can arrange for the data to be physically stored in specific databases. There might be a way to set up shardspaces, tablespaces, and user default tablespaces so that each schema's data is automatically stored in a specific database.
But I haven't actually used sharding. From what I've read, it seems to be designed for massive distributed OLTP systems, and it is likely complicated to administer. I'd guess this feature isn't worth the hassle unless you have petabytes of data.
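From the documentation, user-defined sharding looks roughly like the sketch below; this is untested, with made-up table and tablespace names:

    -- User-defined sharding: you choose which shard stores which rows
    -- by mapping partitions to tablespaces on specific shards.
    CREATE SHARDED TABLE accounts (
      id     NUMBER NOT NULL,
      region VARCHAR2(20) NOT NULL,
      name   VARCHAR2(50),
      CONSTRAINT accounts_pk PRIMARY KEY (id, region)
    )
    PARTITION BY LIST (region)
    ( PARTITION p_east VALUES ('EAST') TABLESPACE ts_east,
      PARTITION p_west VALUES ('WEST') TABLESPACE ts_west
    );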
I happen to find myself in a situation where I am using Oracle SQL Developer Version 1.5.5 and there's this huge database for which the documentation is very poor. I'd like to create a star or snowflake schema to better understand the data. Is there a simple way to do it?
You can reverse engineer the physical data model using SQL Developer Data Modeler. This is actually a separate tool from SQL Developer but shares some branding. It is also free.
The quality of the resultant diagram will depend heavily on how well the physical data structures have been implemented. You will only get relationships if the database has defined foreign key constraints (disabled is good enough). Likewise UIDs require defined primary key constraints. If your database lacks constraints you'll have to rely on column naming conventions, data analysis and your business knowledge.
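A quick way to gauge how much the modeler will have to work with, before you spend time on the import:

    -- Count primary key (P) and foreign key (R) constraints per schema.
    -- Few or no 'R' rows means the diagram will show few relationships.
    SELECT owner, constraint_type, COUNT(*) AS cnt
    FROM   all_constraints
    WHERE  constraint_type IN ('P', 'R')
    GROUP  BY owner, constraint_type
    ORDER  BY owner, constraint_type;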
Star or Snowflake schemas are for data warehouses. Is that the sort of database you're dealing with?
I'm designing a data REST API for the purpose of dynamic reporting. Basically, you pass data to it (along with the functions to manipulate the data) and it returns HTML with the functions applied to the data. Typically, these functions would be filtering, grouping, aggregating, and sorting (what a regular RDBMS would offer).
I'm contemplating using an in-memory DB for this. By doing so, I'd simply leverage the functions offered by the DB rather than having to implement them myself.
However, this requires the service to load the data (perhaps bulk load) and then run a series of dynamically constructed queries as part of every service call.
The data (input) to be loaded in the database can be max 100K rows. Certainly not millions!
But the service can be accessed by different threads (each will load its own data set into the database and read concurrently). Of course, the JDBC connections will be pooled, and the tables will be truncated at the end of every transaction.
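To make it concrete, each service call would run something like this against the in-memory DB (table and column names are made up):

    -- create the per-request working table
    CREATE TABLE report_data (
      region VARCHAR(50),
      amount DECIMAL(12,2)
    );

    -- bulk load the posted rows (batched in practice)
    INSERT INTO report_data VALUES ('EAST', 100.00), ('WEST', 75.50);

    -- apply the requested functions: filter, group, aggregate, sort
    SELECT   region, SUM(amount) AS total
    FROM     report_data
    WHERE    amount > 50
    GROUP BY region
    ORDER BY total DESC;

    -- clean up at the end of the transaction
    TRUNCATE TABLE report_data;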
I'm asking myself whether I'm going overboard and stretching what in-memory DBs are meant for. I have used in-memory DBs myself (especially H2 and HSQLDB), and often hear about them, only in the context of integration testing.
I'd be interested to hear your views.
What are the ways in which data can be encrypted? Take a salary column, for example: if possible, even the admin should not be able to see the encrypted columns. The data should be visible only through the application, to users whose access is defined in the application. Changes to the application (adding new functionality to encrypt/decrypt at the application level) would be a last resort and should be minimal.
So far I have thought of two ways; any fresh ideas, or pros and cons of the ones below, would be much appreciated:
1. Using Oracle TDE (transparent data encryption).
- Con: the admin can possibly grant himself rights to see the data.
2. Creating a trigger to encrypt before insert, and something along the lines of a pipelined function to decrypt on retrieval (rough sketch of the trigger below).
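For option 2, I'm imagining something like this (untested; SALARY_ENC, GET_KEY, and the table are made-up names, and GET_KEY hand-waves the key management problem):

    -- BEFORE trigger that stores only the ciphertext.
    -- Assumes a SALARY_ENC RAW(2000) column alongside SALARY and
    -- EXECUTE privileges on DBMS_CRYPTO.
    CREATE OR REPLACE TRIGGER emp_salary_encrypt
    BEFORE INSERT OR UPDATE OF salary ON employees
    FOR EACH ROW
    BEGIN
      :new.salary_enc := DBMS_CRYPTO.ENCRYPT(
        src => utl_i18n.string_to_raw(TO_CHAR(:new.salary), 'AL32UTF8'),
        typ => DBMS_CRYPTO.ENCRYPT_AES256
             + DBMS_CRYPTO.CHAIN_CBC
             + DBMS_CRYPTO.PAD_PKCS5,
        key => get_key()
      );
      :new.salary := NULL;  -- don't persist the plaintext
    END;
    /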
Oracle Database Vault is the only way to prevent a DBA from being able to access data stored in the database. That is an extra cost product, however, and it requires you to have an additional set of security admins whose job it is to grant the DBAs whatever privileges they actually need.
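From memory, the realm setup looks something like this (a rough sketch; 'HR' stands in for whatever schema holds the sensitive data):

    -- Create a realm around the schema and put all of its objects in it;
    -- DBAs without a realm authorization can then no longer query them.
    BEGIN
      DBMS_MACADM.CREATE_REALM(
        realm_name    => 'HR Data',
        description   => 'Keeps DBAs out of the HR schema',
        enabled       => DBMS_MACUTL.G_YES,
        audit_options => DBMS_MACUTL.G_REALM_AUDIT_FAIL);
      DBMS_MACADM.ADD_OBJECT_TO_REALM(
        realm_name   => 'HR Data',
        object_owner => 'HR',
        object_name  => '%',
        object_type  => '%');
    END;
    /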
Barring that, you'd be looking at solutions that encrypt and decrypt the data in the application, outside the database. That would involve changes to the database structure (e.g. the salary column would be declared as a RAW rather than a NUMBER), and application changes to call the encryption and decryption routines. It also requires you to solve the key management problem, which is generally where these sorts of solutions fail: storing the encryption key somewhere that the application can retrieve it but no admin can access is non-trivial. And then you need to ensure that the key is backed up and restored separately, since the encrypted data in the database is useless without the key.
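The schema change alone would look something like this (illustrative names):

    -- Store ciphertext; the database never sees a cleartext salary.
    ALTER TABLE employees ADD (salary_enc RAW(2000));

    -- Once the application encrypts everything and backfills the new
    -- column, drop the cleartext one.
    ALTER TABLE employees DROP COLUMN salary;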
Most of the time, though, I'd suggest that the right approach is to allow the DBA to see the data and audit the queries they run instead. If you see that one particular DBA is running queries for fun rather than occasionally looking at bits of data in the course of doing her job, you can take action at that point. Knowing that her queries are being audited is generally enough to keep a DBA from accessing data that she doesn't really need.
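For example, assuming the salaries live in HR.EMPLOYEES:

    -- Traditional auditing: record every SELECT against the table.
    AUDIT SELECT ON hr.employees BY ACCESS;

    -- Or, on 12c and later, a unified audit policy.
    CREATE AUDIT POLICY salary_watch ACTIONS SELECT ON hr.employees;
    AUDIT POLICY salary_watch;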
Most of the time, we just get the result from the database and then save it in a cache server with an expiration time.
When do we need to persist that key/value pair, and what's the significant benefit of doing so?
If you need to persist the data, then you would want a key/value database. In particular, as part of the NoSQL movement, many people have suggested replacing traditional SQL databases with key/value databases - but ultimately, the choice of which paradigm better fits your application remains with you.
Use a key/value database when you are using a key/value cache and you don't need an SQL database.
When you use memcached/MySQL or similar, you need to write two sets of data-access code - one for getting objects from the cache, and another for getting them from the database. If the cache is your database, you only need one method, and it is usually simpler code.
You do lose some functionality by not using SQL, but in a lot of cases you don't need it. Only the worst applications actually leave constraint checking to the database. Ad-hoc queries become impractical at scale. The occasional lost or inconsistent record simply doesn't matter if you are working with tweets rather than financial data. How do you justify the added complexity of using a SQL database?
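For perspective, this is the sort of thing you'd be giving up - a one-off question that is trivial in SQL and painful against a key/value store, where you'd scan and join in application code instead (tweets and users are made-up tables):

    SELECT   u.country, COUNT(*) AS tweet_count
    FROM     tweets t
    JOIN     users u ON u.id = t.user_id
    WHERE    t.created_at >= DATE '2014-01-01'
    GROUP BY u.country
    ORDER BY tweet_count DESC;

If you never run queries like that, the key/value store wins.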