Oracle SecureFile

In my current workplace, an existing app is being replaced by a new 3rd party app. The database of the existing app, in Oracle 10g, needs to be migrated. The existing app stored various documents as BLOBs. Per the new app's data model, the documents are stored as files. I am tasked with converting the existing BLOBs to files.
There are around 5 million records amounting to a total of 1 TB.
I am wondering if we can leverage the idea of Oracle SecureFile in this process. We do have some Oracle 11g environments available. This is my idea:
1) Import the existing 10g BLOBs into 11g SecureFiles.
2) Convert the Oracle SecureFiles (DBFS) to Windows file system (CIFS?).
The advantage of this idea is that the BLOB-to-file conversion process would be native and taken care of by Oracle (in other words, a performant, tested and exception-handled process). I have no clue about the file system conversion, though.
Experts, is this a feasible idea? Don't know if this helps... but the new app is on Oracle 11gR2.

You can convert the BLOBs to documents and drop them in a DBFS. If you define the DBFS to use SecureFiles (recommended) and use filesystem-like logging during the initial load, you get a good, performant filesystem, comparable with NFS performance.
The problem with the Windows environment is that you cannot mount a DBFS natively on Windows (AFAIK). You could, however, mount it on Linux and pass it through to CIFS (e.g. with Samba). Not exactly an ideal solution, but maybe usable as a workaround until a native DBFS mount becomes available on Windows.
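For reference, a rough sketch of how that could look (the tablespace, filesystem, user, connect string and mount point below are all made-up placeholders):
-- as the DBFS owner, in SQL*Plus: create a SecureFiles-backed DBFS store
@?/rdbms/admin/dbfs_create_filesystem.sql dbfs_ts staging_fs
# on a Linux host: mount the store with the FUSE client (it prompts for the DB password),
# then share the mount point via Samba so Windows clients can reach it over CIFS
dbfs_client dbfs_owner@ORCL -o allow_other,direct_io /mnt/dbfs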
Filesystem-like logging is good for performance, not for recovery or for feeding standby databases, because only the file metadata is logged, not the contents. You should either account for this in your recovery process, or switch to full logging after the initial load/conversion completes. The latter would be my preference.
DBFS is great; combined with Advanced Compression it can save quite a lot of space.

Related

Oracle DB as file storage

We are thinking about migrating our Artifactory from disk filestorage to storing all the artifacts as BLOBs in Oracle DB:
https://www.jfrog.com/confluence/display/JFROG/Oracle
Unfortunately, there's not much info about this practice. So my question is: has anyone done it? My main concern is performance. Is it as fast as local filestorage?
Technically, it's possible to use the database for full metadata and binary storage in Artifactory, but this is not recommended.
From the best practices for managing your Artifactory filestore:
DATABASE
By default, Artifactory separates the binaries from their metadata. Binaries are stored in the filestore and their metadata is stored in a database. While it's possible to store the binaries in the database as a BLOB, it is not recommended because databases have limitations when storing large files and this can lead to performance degradation and negatively impact operational costs.
So it's not recommended. A database is usually slower than file or object storage.
Your best bet is to simply test it and see if it meets your required KPIs.
It's possible to store binary files within an Oracle DB, but it's not something I'd recommend doing for a massive amount of files.
A much better practice is to store the files on a filestore and keep only the file paths in the database.
The reason I wouldn't recommend using Oracle for the binaries themselves is retrieval speed. You will put strain on the database, and it may end up slowing down slightly or even significantly.
The most I'd store in a database are single files directly linked to an entry which are not retrieved often, such as documentation in a .pdf file.
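As a minimal sketch of that paths-in-the-database approach (table and column names are made up and not tied to any particular product):
-- keep only the path and some metadata in the database; the binary lives on the filestore
CREATE TABLE document_ref (
  doc_id     NUMBER         PRIMARY KEY,
  file_name  VARCHAR2(255)  NOT NULL,
  file_path  VARCHAR2(1000) NOT NULL,   -- location on disk or in object storage
  mime_type  VARCHAR2(100),
  created_at DATE           DEFAULT SYSDATE
);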

Oracle application - migration to Exadata server

We have an upcoming migration of our Oracle database to an Exadata server. I want to clarify some issues I have thought of:
Will there be any issues with the code - performance issues? Exadata has a different type of optimizer; it doesn't use indexes and has a columnar optimizer, if I'm not mistaken.
Currently some import or export files are generated on the database server (accessed via FileZilla). I understand that with Exadata the database server is inaccessible, and I suspect that either:
• we will have to move those files to another server - Oracle knows only FTP (which has ports closed at our client) -> how do we write / read from another server? (as far as I understand, they would like to put all the files on the WAS server)
• or we will need to import the files into the table using the java application and process them from there (and the same with the exported files).
Can files that come automatically from other applications be written to the database server? Or do we have the same problems as with the manual part?
We have plenty of database jobs that run KSH scripts on the database server - is there a problem with them? I understand they should also be moved to the WAS server, but I do not know how Oracle will call them from there.
Will there be any problems with Jenkins deployments? Has anything changed? Here we save the SQL/PL/SQL sources in XML files from which the whole application is restored (packages, configuration tables, nomenclatures ...), with the exception of the working data; the XML files are read through a procedure from an Oracle directory.
If you can think of any other issues concerning this migration, any problems you have encountered during or after the migration to Exadata, please share!
Thank you,
Step by step:
On Exadata you are going to have the same optimizer behaviour, with some improvements, because Exadata can improve full table scan performance thanks to Smart Scans. Indeed, Exadata is able to avoid retrieving data blocks during a full table scan because the storage cells know in advance that those blocks do not contain the needed data.
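If you want to check after the migration whether your statements actually benefit from Smart Scans, one rough way (just a sketch) is to look at the cell offload statistics for your own session:
SELECT n.name, s.value
FROM   v$mystat s
JOIN   v$statname n ON n.statistic# = s.statistic#
WHERE  n.name IN ('cell physical IO bytes eligible for predicate offload',
                  'cell physical IO interconnect bytes returned by smart scan');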
On Exadata you can expose DBFS file systems to external servers; that might be useful for external tables, imports/exports and so on.
You can write your files to a DBFS you configure.
You could use your DBFS if you want the KSH files to be accessible from outside your Exadata.
Let your Oracle directory point to a directory in the DBFS file system where you put your XML files and you are done.
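For example, something along these lines (the DBFS mount path, directory name and grantee are placeholders):
-- point the directory object used by the XML-loading procedure at a path on the DBFS mount
CREATE OR REPLACE DIRECTORY xml_dir AS '/dbfs/staging_fs/xml_in';
GRANT READ, WRITE ON DIRECTORY xml_dir TO app_user;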

Tools for Oracle DB migration from AIX to Linux

My colleague is running Oracle Database (11g) on AIX and they would like to move this DB to RHEL. I already found this Link. However, I would like to check whether someone has already done such a migration or used any other good tools.
You have several options. As pointed out before, Oracle Data Pump is the easiest approach. It will lift you from every version >= 10g upwards (or even backwards when you use the VERSION= parameter).
The caveats are the size of the database and your downtime requirements.
For larger databases, Transportable Tablespaces is the usual choice. It is more work, as you will have to rebuild meta information such as synonyms, views, PL/SQL, sequences etc. - and in your case you will have to CONVERT the tablespaces, since you are coming from a big-endian platform and going to a little-endian one. DBMS_FILE_TRANSFER could assist you here, as it can restore and convert at the same time, whereas RMAN needs a two-phase operation with staging space for it.
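A rough sketch of the DBMS_FILE_TRANSFER route (the directory objects, file name and database link name are placeholders); run on the destination, it pulls a datafile over a database link and, as mentioned above, takes care of the endian conversion as part of the transfer:
BEGIN
  DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object      => 'SRC_DF_DIR',    -- directory object on the AIX source
    source_file_name             => 'users01.dbf',
    source_database              => 'AIX_SRC_LINK',  -- database link to the source
    destination_directory_object => 'DST_DF_DIR',    -- directory object on the Linux target
    destination_file_name        => 'users01.dbf');
END;
/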
You can speed up Transportable Tablespaces with RMAN incremental backups to avoid most of the copy/convert time. And you can ease it with Full Transportable Export/Import (minimum source: 11.2.0.3 - minimum destination: 12.1.0.1), where Data Pump does the manual work of transportable tablespaces for you.
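As a sketch, a Full Transportable Export on an 11.2.0.3 (or later) source could look like this (the connect string, directory and file names are placeholders); on the 12c destination you would then run impdp with the TRANSPORT_DATAFILES parameter pointing at the copied/converted datafiles:
expdp system@aixdb FULL=y TRANSPORTABLE=always VERSION=12 DIRECTORY=dp_dir DUMPFILE=full_tts.dmp LOGFILE=full_tts_exp.log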
And of course there are other techniques such as Create-Table-As-Select or Insert-Append-Select options via Database Links and such.
Just see the big slide deck "Upgrade / Migrate / Consolidate to 12.2" for customer examples - and the "Migrate >230Tb in <24 hours" decks on my page: https://mikedietrichde.com/slides/
Cheers,
Mike
Is there some reason you can't just use Oracle Data Pump?
Create the database on RHEL, make sure you use a compatible character set.
https://docs.oracle.com/cd/B19306_01/server.102/b14215/dp_overview.htm

How to use Oracle data pump export utility to create dump file in local machine?

The Oracle Data Pump export utility expects a DIRECTORY parameter (DBA_DIRECTORIES) which exists on the DB server. Is it possible to map this directory to a local machine, or is there any other way to export multiple tables from an Oracle database to the local machine?
If you are using Data Pump, there is no direct way to store a dump file on your local machine. That is how Data Pump was designed.
However, there is one possible way to achieve what you want. The workaround has two steps:
Run expdp as usual, which creates a dump file on the server
Use the ocp tool to transfer the dump file from the database server to your local machine (and back, if you want)
The ocp tool name stands for "Oracle Copy" and it was written exactly for the purpose of copying dump files back and forth from/to a database server. It is available here: https://github.com/maxsatula/ocp/releases/download/v0.1/ocp-0.1.tar.gz That is a source distribution, so once it is downloaded and unpacked, run ./configure && make
(Hopefully you do not have Windows on a client side, because I never tried to compile it there)
That is a simple command-line tool with a simple syntax. For example, this command will pull a file for you:
ocp <connection_string> DATA_PUMP_DIR:remote_file_name.dmp local_file_name.dmp
The tool uses a database connection and a minimum set of database privileges.
Update:
Finally I was able to adjust the source code and build the ocp tool for Windows 32-bit:
https://github.com/maxsatula/ocp/releases/download/v0.1/ocp-0.1-win32.zip
Compiled/tested with 32-bit Instant Client 11.2.0.4 available here: http://www.oracle.com/technetwork/topics/winsoft-085727.html
instantclient-basiclite-nt-11.2.0.4.0.zip (20,258,449 bytes)
I believe it will work with a full Oracle Client installation too (just watch the bitness, it should be 32-bit), however I did not check that myself.
Unfortunately, Windows build of ocp does not have a fancy progress meter during file transfer. That piece of code had too much *nix-specific stuff, so I had to cut it off.
Also, since it uses the popt and zlib libraries, which are compiled as part of the GnuWin project and available in 32-bit only, ocp for Windows is 32-bit only too. Hopefully not having a 64-bit version is not mission-critical for you.
Update 2:
Warning! Make sure you always use a DEDICATED server connection when downloading files from the server, otherwise (with a SHARED server) the downloaded copy of the file will be corrupted with no error messages!
With a bit of a hack you can get data pump to do what you want, but you need to have a database on your local machine.
What you need to do is create a database link on your local machine to the remote machine.
Then, in the Data Pump options, log in to the local database as the db link owner and specify the NETWORK_LINK option as the name of the database link you created. That way it will export from the remote database through the local database and create the file on your local instance. For example:
expdp directory=<local_dir_object> network_link=<dblinkname on local instance> dumpfile=.. logfile=.. tables/schema=...
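For completeness, the database link mentioned above could be created on the local instance roughly like this (the link name, credentials and TNS alias are placeholders):
CREATE DATABASE LINK remote_src
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING 'REMOTE_TNS_ALIAS';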
No, Data Pump sucks that way, but Oracle can get faster throughput using the same server the DB sits on, so that's the tradeoff. There are other enhancements too, but I still think this is a big disadvantage of Data Pump. Use the old exp/imp or third-party tools for this purpose.
You should ask yourself: "Why do I want to keep data outside the database - the most secure place for my data, where backup, restore and recovery are in place?"
If you are going to move data from database A to database B, make sure both databases have access to a common file area where they can reach the dump files through their directory objects, and use Data Pump.
If you still want to export data to the client side, you can use the good old exp and imp tools.

How can I replicate an Oracle 11g database(data+structure) on my local machine for development?

I am working on a test server with Oracle 11g installed. I was wondering if there is any way I can replicate the database (environment + data) on my local Linux machine. I am using CentOS 5.3 on Windows XP with Sun VirtualBox. On Windows I am using the SQL Developer client to connect to the 11g database.
There are a number of ways to move the data over:
Restore an RMAN backup on your test server
Export and import the data using exp/expdp/imp/impdp
Export and import using a transportable tablespace (Further Info)
Use database links to duplicate the data using SQL
You can use the Database Configuration Assistant (DBCA) to generate a template from your production database. This will give you all the parameters and tablespaces, among other things. You will need to tweak the configuration somewhat; for instance the file paths may be wrong, and some parameters may need downsizing. You can then feed that template into DBCA to clone the database on your Linux machine.
To get the schemas and data you should use Data Pump (rather than the older Import / Export utilities). This can be run from the command line or from PL/SQL.
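For the command-line route, a minimal sketch (the schema name, connect strings and file names are placeholders) is an export on the test server followed by an import into your local instance:
expdp system@testdb SCHEMAS=app_owner DIRECTORY=DATA_PUMP_DIR DUMPFILE=app_owner.dmp LOGFILE=app_owner_exp.log
# copy the dump file from the test server's DATA_PUMP_DIR to the local instance's DATA_PUMP_DIR in between
impdp system@localdb SCHEMAS=app_owner DIRECTORY=DATA_PUMP_DIR DUMPFILE=app_owner.dmp LOGFILE=app_owner_imp.log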
Bear in mind that using production data in a development or test environment can cause you to run foul of data protection laws and other compliance issues. It depends on what your application does and what jurisdiction you operate under. But if your production system contains citizens' personal data you need to be very careful. There are products out there which will apply masking as part of a data import process (Oracle sells one) but they tend to be expensive. Rolling your own masking product can be tricky: if this applies to your situation be sure to get your compliance staff (legal team) involved early.
I would suggest installing Oracle XE, which is free to use, on your local machine if your development is not related to core database features. You can then use the methods given above to pump data into Oracle XE and compile your code against it, though for development I don't think you need as much data as there is in production.
