We are thinking about migrating our Artifactory instance from disk-based filestorage to storing all the artifacts as BLOBs in Oracle DB:
https://www.jfrog.com/confluence/display/JFROG/Oracle
Unfortunately, there's not much information about this practice, so my question is: has anyone done it? My main concern is performance. Is it as fast as local filestorage?
Technically, it's possible to use the Database for full metadata and binary storage in Artifactory, but this is not recommended.
From best practices for managing your artifactory filestore:
DATABASE
By default, Artifactory separates the binaries from their metadata. Binaries are stored in the filestore and their metadata is stored in a database. While it's possible to store the binaries in the database as a BLOB, it is not recommended because databases have limitations when storing large files and this can lead to performance degradation and negatively impact operational costs.
So it's not recommended. A database is usually slower than file or object storage.
Your best bet is to simply test it and see if it meets your required KPIs.
It's possible to store binary files within an Oracle DB, but it's not something I'd recommend doing for a massive number of files.
A much better practice is to store the files in file storage and keep only the file paths in the database.
The reason I wouldn't recommend using Oracle for this is retrieval speed. You will put strain on the database, and it might end up slowing down slightly or even significantly.
The most I'd store in a database are single files directly linked to an entry that are rarely retrieved, such as documentation in a .pdf file.
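For illustration, here is a minimal sketch of the "paths in the database, bytes on the filestore" pattern (the table, directory and file names are hypothetical; Oracle's BFILE type is one optional way to keep a read-only pointer to the external file):

    -- Hypothetical layout: the binaries live on a filesystem or object store,
    -- the database keeps only metadata and a pointer to the file.
    CREATE OR REPLACE DIRECTORY artifact_dir AS '/mnt/artifact_store';

    CREATE TABLE artifact_index (
      artifact_id   NUMBER         PRIMARY KEY,
      repo_key      VARCHAR2(100)  NOT NULL,
      artifact_path VARCHAR2(1000) NOT NULL,            -- relative path on the filestore
      sha1          VARCHAR2(40)   NOT NULL,
      size_bytes    NUMBER         NOT NULL,
      created_at    TIMESTAMP      DEFAULT SYSTIMESTAMP,
      artifact_loc  BFILE                               -- optional external-file pointer
    );

    -- Register a file that was written to /mnt/artifact_store/app-1.0.jar
    INSERT INTO artifact_index (artifact_id, repo_key, artifact_path, sha1, size_bytes, artifact_loc)
    VALUES (1, 'libs-release', 'app-1.0.jar',
            'da39a3ee5e6b4b0d3255bfef95601890afd80709', 1048576,
            BFILENAME('ARTIFACT_DIR', 'app-1.0.jar'));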
We have an upcoming migration of our Oracle database to an Exadata server. I want to clarify some issues I have thought of:
Will there be any issues with the code, e.g. performance issues? Exadata has a different type of optimizer; it doesn't use indexes and has a columnar optimizer, if I'm not mistaken.
Currently, some import/export files are generated on the database server (accessed via FileZilla). I understand that on Exadata the database server is not directly accessible, and I suspect that either:
• we will have to move those files to another server; Oracle only knows FTP (whose ports are closed at our client), so how do we write to / read from another server? (As far as I understand, they would like to put all the files on the WAS server.)
• or we will need to import the files into the table using the Java application and process them from there (and the same with the exported files).
Can files that come automatically from other applications be written to the database server? Or do we have the same problems as with the manual part?
We have plenty of database jobs that run KSH scripts on the database server; will there be a problem with them? I understand they should also be moved to the WAS server, but I do not know how Oracle will call them from there.
Will there be any problems with Jenkins deployments? Does anything change? We save the SQL/PL/SQL sources in XML files, from which the whole application is restored (packages, configuration tables, nomenclatures, ...), with the exception of the working data; the XML files are read by a procedure through an Oracle directory.
If you can think of any other issues concerning this migration, any problems you have encountered during or after the migration to Exadata, please share!
Thank you,
Step by step:
On Exadata you are going to get the same optimizer behaviour, with some improvements, because Exadata can improve full table scan performance thanks to smart scans. Exadata is able to avoid retrieving data blocks during full table scans because it knows in advance that they do not contain the needed data.
On Exadata you can create DBFS file systems and expose them to external servers; these can be useful for external tables, imports/exports and so on.
You can write your files to the DBFS file system you configure.
You could use your DBFS here as well, if you want the KSH scripts to be accessible from outside your Exadata.
Point your Oracle directory to a directory in the DBFS file system where you put your XML files and you are done.
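For example, a minimal sketch of that last point (the DBFS mount point, directory and grantee names are hypothetical, and it assumes dbfs_client has already mounted the DBFS store on the database host):

    -- Hypothetical: /dbfs/fs1 is where dbfs_client mounted the DBFS store,
    -- and xml_in is a folder created inside it for the incoming XML files.
    CREATE OR REPLACE DIRECTORY xml_dir AS '/dbfs/fs1/xml_in';
    GRANT READ, WRITE ON DIRECTORY xml_dir TO app_schema;

    -- The existing load procedure keeps reading through the directory object,
    -- e.g. via BFILENAME('XML_DIR', 'config.xml') or an external table.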
I am new to ETL migration. I have worked with Talend, but have not yet faced the task of migrating a large ETL project from one tool to another (IBM Data Manager to Informatica PowerCenter or Informatica Developer).
I am looking for general guidelines for migrating jobs from one tool to another, and of course for my specific case.
To be more clear:
The source and target databases will be the same; what I have to migrate is the ETL part itself.
The approach will be a parallel run, as suggested in this blog:
Parallel Run
In my case I do not have to migrate the whole DWH, only the ETL, as the old software will become legacy and the new one is from a different vendor (luckily both of them can export XML).
I am looking for a practical approach to the parallel run. I have been advised to copy the source and target tables into the original database schema, but that does not look like the best way to go to me (it is not even practical when a schema has many tables).
The DWH I am working on has several database instances in Oracle and some in SQL Server, a test server and a production one, and for each of them a staging, storage and data mart area.
Based on this related question and its answer, I am thinking of copying each schema on the go for each project:
Staging in ETL: Best Practices
I am looking for guideline references, but my specific case is the migration from IBM Data Manager to Informatica PowerCenter.
The approach depends on various criteria and personal preferences. Either way you will need to duplicate part or all of the source and destination systems. At one extreme you can use two instances of the entire system. If you have complex upstream processes that are part of the test, or massive numbers of tables and processes, and you have the bandwidth and resources to duplicate your system, then this approach may be optimal.
At the other extreme, if any complex processes occur within the ETL tool itself, or you are simply loading tables and need to check they are loaded correctly, then making copies of the tables and pointing your new or old tool to the table copies may be the way to go. This method is very simple and easy to set up.
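As a rough illustration of the table-copy variant (schema and table names are hypothetical): create empty copies of the target tables in a dedicated parallel-run schema, point the new tool at them while the legacy tool keeps loading the originals, and diff the results.

    -- Empty copy of a target table in a dedicated parallel-run schema
    CREATE TABLE prl_run.customer_dim AS
    SELECT * FROM dwh.customer_dim WHERE 1 = 0;

    -- After both tools have run, compare the loads in both directions
    SELECT * FROM dwh.customer_dim
    MINUS
    SELECT * FROM prl_run.customer_dim;

    SELECT * FROM prl_run.customer_dim
    MINUS
    SELECT * FROM dwh.customer_dim;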
Keep in mind this forum is not meant to replace blogs and in-depth tech articles on those techniques.
We are developing a large data migration from Oracle DB (12c) to another system with SSIS. The developers are using a production copy database but the problem is that, due to the complexity of the data transformation, we have to do things in stages by preprocessing data into intermediate helper tables which are then used further downstream. The problem is that all developers are using the same database and screw each other up by running things simultaneously. Does Oracle DB offer anything in terms of developer sandboxing? We could build a mechanism to handle this (e.g. have dev ID in the helper tables, then query views that map to the dev), but I'd much rather use built-in functionality. Could I use Oracle Multitenant for this?
We ended up producing a master subset database of select schemas/tables through some fairly elaborate PL/SQL, then made several copies of this master schema so each dev has his/her own sandbox (as Alex suggested). We could have used Oracle Data Masking and Subsetting, but it's too expensive. Another option for creating the subset database would have been to use Jailer. I should note that we didn't have a need to mask any sensitive data.
Note: I would think this is a fairly common problem, so if new tools or solutions arise, please post them here as answers.
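Regarding the Multitenant idea from the original question: here is a minimal sketch of a per-developer sandbox as a pluggable database clone (PDB names and file paths are hypothetical; this requires the Multitenant option on 12c, and on 12.1 the source PDB must be opened read-only before cloning):

    -- Open the "master" subset PDB read-only (12.1; 12.2+ supports hot cloning)
    ALTER PLUGGABLE DATABASE prod_subset CLOSE IMMEDIATE;
    ALTER PLUGGABLE DATABASE prod_subset OPEN READ ONLY;

    -- One clone per developer
    CREATE PLUGGABLE DATABASE dev_alice FROM prod_subset
      FILE_NAME_CONVERT = ('/u01/oradata/prod_subset/', '/u01/oradata/dev_alice/');
    ALTER PLUGGABLE DATABASE dev_alice OPEN;

    -- Refreshing a sandbox is just drop and re-clone
    -- ALTER PLUGGABLE DATABASE dev_alice CLOSE IMMEDIATE;
    -- DROP PLUGGABLE DATABASE dev_alice INCLUDING DATAFILES;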
In my current workplace, an existing app is being replaced by a new 3rd-party app. The database of the existing app, on Oracle 10g, needs to be migrated. The existing app stores various documents as BLOBs. Per the new app's data model, the documents are stored as files. I am tasked with converting the existing BLOBs to files.
There are around 5 million records amounting to a total of 1 TB.
I am wondering if we can leverage the idea of Oracle SecureFile in this process. We do have some Oracle 11g environments available. This is my idea:
1) Import the existing 10g BLOBs into 11g SecureFiles.
2) Convert the Oracle SecureFiles (DBFS) to Windows file system (CIFS?).
The advantage of this idea is that the BLOB-to-file conversion process would be native and taken care of by Oracle (in other words, a performant, tested and exception-handled process). I have no clue about the file system conversion, though.
Experts, is this a feasible idea? Don't know if this helps, but the new app is on Oracle 11gR2.
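For reference, a sketch of what step 1 could look like (table, column and database link names are hypothetical; the LOB clause assumes the target tablespace uses automatic segment space management, which SecureFiles require):

    -- 11g side: target table with the document column stored as a SecureFile LOB
    CREATE TABLE documents_sf (
      doc_id   NUMBER PRIMARY KEY,
      doc_name VARCHAR2(255),
      doc_body BLOB
    )
    LOB (doc_body) STORE AS SECUREFILE (
      TABLESPACE docs_ts
      NOCACHE
      FILESYSTEM_LIKE_LOGGING
    );

    -- Pull the 10g BasicFile BLOBs across a database link
    -- (a Data Pump export/import would work just as well)
    INSERT INTO documents_sf (doc_id, doc_name, doc_body)
    SELECT doc_id, doc_name, doc_body
    FROM   documents@legacy_10g;

    COMMIT;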
You can convert the BLOBs to documents and drop them in a DBFS. If you define the DBFS to use SecureFiles (recommended) and use filesystem-like logging during the initial load, you get a well-performing filesystem, comparable with NFS performance.
The problem with the Windows environment is that you cannot mount a DBFS natively on Windows (AFAIK). You could, however, mount it on Linux and pass it through to CIFS. Not exactly an ideal solution, but maybe usable as a workaround until a native DBFS mount becomes available on Windows.
Filesystem-like logging is good for performance, but not for recovery or for feeding standby databases, because only the file metadata is logged, not the contents. You should account for this in your recovery process, or switch to full logging after the initial load/conversion completes. That would be my preference.
DBFS is great; combined with Advanced Compression it can save quite a lot of space.
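If the DBFS-on-Windows part turns out to be the blocker, a plain PL/SQL export of the BLOBs through an Oracle directory (which can point at any filesystem visible to the database host, including a mounted DBFS) is a common alternative. A minimal sketch, reusing a hypothetical documents_sf table like the one above:

    CREATE OR REPLACE DIRECTORY export_dir AS '/mnt/doc_export';

    DECLARE
      l_file   UTL_FILE.FILE_TYPE;
      l_buffer RAW(32767);
      l_amount BINARY_INTEGER;
      l_pos    INTEGER;
    BEGIN
      FOR r IN (SELECT doc_name, doc_body FROM documents_sf) LOOP
        l_file := UTL_FILE.FOPEN('EXPORT_DIR', r.doc_name, 'wb', 32767);
        l_pos  := 1;
        -- Stream each BLOB out in 32 KB chunks
        WHILE l_pos <= DBMS_LOB.GETLENGTH(r.doc_body) LOOP
          l_amount := 32767;
          DBMS_LOB.READ(r.doc_body, l_amount, l_pos, l_buffer);
          UTL_FILE.PUT_RAW(l_file, l_buffer, TRUE);
          l_pos := l_pos + l_amount;
        END LOOP;
        UTL_FILE.FCLOSE(l_file);
      END LOOP;
    END;
    /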
I have come across this "database cloning" quite a few times. Is it anything different from simply creating a copy of the database? Please explain, keeping MySQL in mind.
Definition from Wikipedia:
A database clone is a complete and separate copy of a database system that includes the business data, the DBMS software and any other application tiers that make up the environment. Cloning is a different kind of operation to replication and backups in that the cloned environment is both fully functional and separate in its own right. Additionally, the cloned environment may be modified at its inception due to configuration changes or data subsetting.
MySQL Documentation for cloning database objects:
http://dev.mysql.com/doc/refman/4.1/en/connector-net-visual-studio-cloning-database-objects.html
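In practice, a clone at the database level in MySQL is often just a structural copy plus a data copy of every object. A minimal per-table sketch (database and table names are hypothetical; this copies tables only, not views, triggers or stored routines, which a tool such as mysqldump would also carry over):

    -- Create an empty clone database, then copy each table's structure and data
    CREATE DATABASE app_clone;

    CREATE TABLE app_clone.customers LIKE app.customers;   -- columns and indexes
    INSERT INTO app_clone.customers SELECT * FROM app.customers;

    CREATE TABLE app_clone.orders LIKE app.orders;
    INSERT INTO app_clone.orders SELECT * FROM app.orders;

A full clone in the Wikipedia sense goes further: it also copies the MySQL installation, configuration and any application tiers, not just the schemas and data.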