We are trying to set up a data warehouse using Oracle 11g, its CDC feature in HOTLOG_SOURCE mode, the Tungsten tool for replication, and MariaDB. Replication was working fine for some time, but since we changed the path of Oracle's archived log files, replication has stopped working.
Oracle is still generating large redo files, but the changes are not being written to MariaDB.
So my question is: if we set up CDC on an Oracle schema and then change the archived log file path, will it cause any problems?
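One way to start narrowing this down might be to check whether the Streams capture process behind the HotLog change set is still enabled and whether the archive destinations are still valid after the path change. A rough diagnostic sketch against the standard 11g dictionary views (the capture and destination names will differ per setup):

-- State of the Streams capture process that feeds HotLog CDC
SELECT capture_name, status, error_number, error_message
  FROM dba_capture;

-- Archived log destinations and their validity after the path change
SELECT dest_name, destination, status
  FROM v$archive_dest
 WHERE status <> 'INACTIVE';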
I'm developing an application that runs on an Oracle database. I'm now creating an installation package that will contain some SQL scripts with the default data that ships with the program.
Some of the tables have BLOB columns that need to contain MS Word documents. I'm not sure how to get these BLOBs into the SQL scripts. I know I could do it through Data Pump, but it is a requirement that everything database-related be delivered as plain-text SQL files.
Does anyone know how to get these BLOBs into an SQL script which the client can just run?
Thanks!
I solved this problem by creating a PHP script that runs as part of the installation process: it loops through all my Word documents and inserts them into the database. I would still rather have SQL scripts or something similar, but this works for now.
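If a pure SQL*Plus route is still wanted later, one option is to ship the .doc files next to the scripts and pull them in with DBMS_LOB.LOADFROMFILE via a DIRECTORY object. This is only a rough sketch; the directory path, table, and column names below are placeholders, and the installer would need to copy the files to the database server first:

-- Directory object pointing at the folder the installer copies the .doc files to
CREATE OR REPLACE DIRECTORY install_docs AS '/opt/myapp/install/docs';

DECLARE
  l_bfile BFILE := BFILENAME('INSTALL_DOCS', 'template.doc');
  l_blob  BLOB;
BEGIN
  -- Insert a row with an empty BLOB and grab its locator
  INSERT INTO documents (doc_id, doc_name, doc_body)
  VALUES (1, 'template.doc', EMPTY_BLOB())
  RETURNING doc_body INTO l_blob;

  -- Copy the file contents into the BLOB column
  DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADFROMFILE(l_blob, l_bfile, DBMS_LOB.GETLENGTH(l_bfile));
  DBMS_LOB.CLOSE(l_bfile);
  COMMIT;
END;
/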
When I upload a *.csv file into Oracle Express using the data upload tool, the column mapping changes. I have uploaded the same data before, to test things out.
I then deleted all the data in the table and now want to load fresh data for further testing. The mapping looks correct in the *.csv file before I upload it, but by the last step of the wizard the columns from the *.csv file have changed position.
Any idea why this happens and what I can do to prevent it?
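One workaround that avoids the upload wizard entirely is to load the file through an external table, where the column order is fixed in the DDL. A sketch only, with made-up directory, table, and column names, assuming the CSV has no header row:

-- Directory object pointing at the folder holding the CSV file
CREATE OR REPLACE DIRECTORY csv_dir AS '/home/oracle/loads';

CREATE TABLE staging_csv_ext (
  cust_id    NUMBER,
  cust_name  VARCHAR2(100),
  cust_email VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY csv_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('customers.csv')
);

-- Columns are always read in the order declared above
INSERT INTO customers (cust_id, cust_name, cust_email)
SELECT cust_id, cust_name, cust_email FROM staging_csv_ext;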
I'm trying to load a table from SQL Server using the Microsoft OLE DB Provider into an Oracle table (using the Oracle Provider for OLE DB). The package is a straightforward OLE DB Source (SQL Server) -> OLE DB Destination (Oracle).
I'm using SQL Server 2008 R2 and Oracle 11g.
Every time I run the package, I get a different number of rows in the destination table, and BIDS reports fewer rows read than there are in the source table. I get no errors or kickouts, but the boxes for the source and destination remain yellow even after BIDS says "Package completed successfully".
Dumping the source table into a flat file instead of the Oracle destination works fine, and I get all the rows that I expect. I can use this flat file to pull the information into the Oracle destination table without problems as well.
Even though I have a work-around, I want to understand why this is happening, and what I can do to resolve this problem without having to use flat files.
Edit: It looks like even loading the flat file into Oracle doesn't bring over all the rows. Was the first time just luck?
Edit/Update: Running the package out of Integration Services (not BIDS) seems to have eliminated the problem (tested three times). Still don't understand why this is happening though.
We get a problem while trying to export an Oracle DB. OS: CentOS ~5.2. DB: Oracle 10g.
The exp command exports DB files only from this location:
/home/oracle/OraHome_1/oradata/master/xxx.dbf
but the tool can't export files in a different location (we learned about these files from a trace), like these:
'/disk1/dblog06.dbf',
'/home/disk2/system01.dbf',
Please advise me how to get a dump file, or how to back it up.
Thanks.
You appear to have misunderstood what exp does, and particularly what the file parameter is for. The file is the output dump file, normally given a .dmp extension. Export takes data out of the database instance; it does not work directly on the datafiles - you have to tell it which data you want (full, user, tables, or tablespaces) and where to put it, not where it comes from.
If you really did try to exp file=/home/disk2/system01.dbf then what you actually asked it to do was trash your database; you're lucky that it did not overwrite the datafile and cause a catastrophic failure. Oracle seems to have saved you from yourself there, though possibly only thanks to having exclusive locks on the files at the time.
You need to read up on how it works and see if it actually does what you want - as APC notes, it's not a backup tool. Look at the Oracle documentation for your version, or somewhere like http://www.orafaq.com/wiki/Import_Export_FAQ, and also look at using Data Pump instead of exp.
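For reference, a typical invocation looks something like this (the connection details and paths here are just placeholders): you tell exp what to export and where to write the dump file, never which datafiles to read:

# Full export to a dump file of your choosing
exp system/password@MASTER file=/home/oracle/exports/full.dmp log=/home/oracle/exports/full.log full=y

# Or a single schema
exp system/password@MASTER file=/home/oracle/exports/scott.dmp log=/home/oracle/exports/scott.log owner=scott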
I am not sure if this is what you are asking, but the exp command exports database objects according to their logical schema (user name, table name). It does not matter which physical database file the data comes from.
exp works through an Oracle instance, which needs to have the datafiles mounted.
Are these other files part of the Oracle database? Maybe another database? You need to find out which Oracle server uses them, and then run exp against that instance.
EXPORT is not a backup tool. It is meant for transferring data from one database to another, or perhaps from one schema to another.
If you want to recover your data in the event of a database crash or corruption then you need to use the appropriate tool. There are OS solutions to this, but Oracle comes with a sophisticated backup and recovery tool: RMAN. Find out more.
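As a starting point, a very basic RMAN backup (run on the database server, assuming the database is in ARCHIVELOG mode and default settings are acceptable) could look like this:

rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;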