I have been working on a project involving a DB2 database system.
This system is going to be retired in favor of Hadoop.
The source files of the system are in EBCDIC.
At first I thought of using tools such as Sqoop to clone the old database.
But since I only have the data files and the SQL scripts that create the tables, and no access to the DB2 database itself, I cannot use Sqoop, can I?
Are there other ways to approach this?
I've been researching and looking for ideas, but the only thing close to a solution I've found was a case where someone used PySpark to move an Oracle table into HDFS and then from HDFS into Cassandra. I was hoping there was another, clearer solution for this data migration.
Title suggests that it is Cassandra > Oracle. Message text says Oracle > HDFS > Cassandra (i.e. the opposite direction). What exactly are you trying to do?
Suppose it is the title that is correct. If there's no tool which would do the migration for you, from my (developer's) point of view, creating a database link in my Oracle schema which points to Cassandra might be a good option. Then I'd just write some SQL code to migrate the data I need (see the sketch after the steps below). Here's how: Access Cassandra Data as a Remote Oracle Database.
In short:
connect to Cassandra as an ODBC data source
set connection properties for compatibility with Oracle
configure the ODBC gateway, Oracle Net and Oracle database
write queries
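A minimal sketch of the last two steps, assuming the gateway above is already configured; the link name, credentials, TNS alias and table names are all hypothetical:

    -- Create a database link that goes through the ODBC gateway
    -- (all names and credentials here are placeholders).
    CREATE DATABASE LINK cassandra_link
      CONNECT TO "odbc_user" IDENTIFIED BY "odbc_password"
      USING 'CASSANDRA_GATEWAY_TNS';

    -- Then ordinary SQL moves the data into a local Oracle table.
    CREATE TABLE customers AS
      SELECT * FROM "Customers"@cassandra_link;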
I'm trying to export an Oracle DB using Oracle SQL Developer; it has tables, sequences, views, packages, etc. with dependencies on each other.
When I use Tools -> Database Export and select all DDL options, unfortunately the exported SQL file does not preserve the order, i.e. some DB objects need to be created before others.
Is there a way to make the DB export utility preserve object dependencies/order? Or is there another tool you use for this task?
Thank you
Normally expdp does a pretty good job. Problems arise when there are dependencies on objects/users that are not part of the dump. This is because the counterpart, impdp, does not add grants on objects that are not created by impdp. I call that the 'not created by me' syndrome that impdp has.
If you have no external dependencies (external meaning to schemas that are not part of the dump), expdp/impdp do a good job for you. You might not be able to use them if you do not have access to the database server, since expdp writes its files on the database server.
If you happen to have access to a database server that is able to connect to the original database, you could pull the data over into your local database using a database link, as sketched below.
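A minimal sketch of the database-link pull, assuming a local database that can reach the original one; the link name, credentials, TNS alias and table are hypothetical:

    -- On the local database, point a link at the original database
    -- (all names and credentials are placeholders).
    CREATE DATABASE LINK source_db
      CONNECT TO app_user IDENTIFIED BY app_password
      USING 'SOURCE_TNS_ALIAS';

    -- Pull a table, columns and data, across the link.
    CREATE TABLE employees AS
      SELECT * FROM employees@source_db;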
Data was sent to our company as a PostgreSQL dump, but we are prohibited from using PostgreSQL tools; only Oracle is permitted.
How can we migrate data from PostgreSQL to Oracle without using a third-party application (those are also prohibited)? We can only use Oracle tools.
I found this article https://support.oracle.com/knowledge/Oracle%20Database%20Products/2220826_1.html but we don't have a Support Identifier.
We have one .sql file. It is 8 gigabytes.
It looks like you have many restrictions in your company. Regarding Oracle's SQL Developer Migration Workbench, unfortunately it does not support the migration of PostgreSQL databases. The following third-party software tools are available and may assist in a migration, but I am afraid you cannot use them, as you said those products are forbidden:
http://www.easyfrom.net/download/?gclid=CNPntY36vsQCFUoqjgodHnsA0w#.VRBHKGPVuRQ
http://www.sqlines.com/postgresql-to-oracle
Other options will only move the data from your PostgreSQL database to the Oracle database, which means you must have the DDL of the tables in place before running the import:
To move data only, you can generate a flat file of the PostgreSQL data and use Oracle SQL*Loader.
Another option to migrate data only is to use Oracle Database Gateway for ODBC, which requires an ODBC driver to connect to the PostgreSQL database, and copy each table over the network using the SQL*Plus COPY command or CREATE TABLE AS SELECT via an Oracle database link, as sketched below.
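A minimal sketch of the second option, assuming the gateway and ODBC driver are already set up; the link name, credentials and table are hypothetical:

    -- Database link through the Database Gateway for ODBC
    -- (all names and credentials are placeholders).
    CREATE DATABASE LINK pg_link
      CONNECT TO "pg_user" IDENTIFIED BY "pg_password"
      USING 'PG_GATEWAY_TNS';

    -- The Oracle-side table must already exist (see the note above
    -- about needing the DDL first); then copy one table at a time.
    INSERT INTO orders SELECT * FROM "orders"@pg_link;
    COMMIT;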
Also, Oracle has a discussion forum for migrating non-Oracle databases to Oracle.
http://www.oracle.com/technetwork/database/migration/third-party-093040.html
But if you only have a .sql file, you should look at it to see whether it contains both the DDL (CREATE TABLE and CREATE INDEX statements, etc.) and the data itself as INSERT statements. If so, you need to split it and rework the DDL to convert the original data types to Oracle data types, for example:
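A hypothetical illustration of the kind of type rework involved; the table and columns are made up:

    -- PostgreSQL DDL as it might appear in the .sql file:
    --   CREATE TABLE accounts (
    --     id      serial PRIMARY KEY,
    --     name    text,
    --     created timestamptz DEFAULT now()
    --   );
    -- A rough Oracle equivalent (identity columns need 12c or later;
    -- on older versions use a sequence instead):
    CREATE TABLE accounts (
      id      NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
      name    VARCHAR2(4000),   -- or CLOB if values can be longer
      created TIMESTAMP WITH TIME ZONE DEFAULT SYSTIMESTAMP
    );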
I am an old Informatica PowerCenter 8 guy and am heading up a team using Informatica Big Data Edition 9.5.1. I have a question regarding Hive. Can Informatica build Hive tables or do they have to be built separately? If they can be built when 'Not Exists', what are the steps?
Thanks!
If you enable the option below in PowerCenter (PC), it will generate/create the Hive tables:
"Generate And Load Hive Table"
This is available only in PC with PowerExchange for Hadoop.
The way this works is that PC first loads the data into an HDFS file and then creates a Hive definition on top of it. Make sure you select "Externally Managed Hive Table" so that PC creates an external table.
Note, though, that at the mapping level you need to define a flat file as your target.
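For illustration, an externally managed Hive table of the kind that ends up defined on top of the HDFS file looks roughly like this; the table name, columns and path are hypothetical:

    -- Hive DDL sketch: an external table over an HDFS directory
    -- (names and the location are placeholders).
    CREATE EXTERNAL TABLE sales (
      order_id INT,
      amount   DECIMAL(10,2)
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION '/user/infa/sales';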
What is Oracle SQL Loader and what is it used for?
SQL*Loader is a utility provided by Oracle which enables us to load data from flat files into database tables. It is well covered in the documentation (check the Utilities Guide). The key thing is that SQL*Loader is an external OS program.
External tables were introduced in Oracle 9i, allowing us to define tables whose data is supplied from flat files. These provide most of the functionality of SQL Loader with a lot more convenience. For instance we can manipulate and re-format the data using SQL functions which is simpler than using SQL Loader's syntax. It also means that we can pull the data from inside the database rather than pushing it from the OS.
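For illustration, a minimal external table sketch; the directory object, file and columns are hypothetical:

    -- An external table whose rows come from a comma-separated flat file
    -- (data_dir is a DIRECTORY object pointing at the file's location).
    CREATE TABLE emp_ext (
      empno NUMBER(4),
      ename VARCHAR2(10),
      sal   NUMBER(7,2)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('emp.csv')
    );

    -- The file can then be reformatted and loaded with plain SQL:
    INSERT INTO emp SELECT empno, INITCAP(ename), sal FROM emp_ext;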
However, for loading huge volumes of data in ultra-quick time a well-tuned SQL Loader control file will beat external tables for performance. Also, if there is a complicated OS process associated with the data files - e.g. ftp, gunzip, pre-processing with sed or awk - it can be more convenient to call SQL Loader from inside the shell script rather than attempting to hook up with a database job. So SQL Loader is still useful in certain scenarios but it is not necessarily the automatic first choice.
It is one of Oracle's bulk data loading tools.
You use it to load data from flat files (such as CSV) into the database.
For details, please check their documentation (or this FAQ)
To transfer data from one Oracle database to another Oracle database, we use Oracle Data Pump, and in Oracle versions prior to 10g we use the original export/import utilities. But if you want to transfer data from a non-Oracle database to an Oracle database, you create a flat file of the data in the non-Oracle database and load it into the Oracle database using SQL*Loader.
The following is the procedure for loading data from a third-party database into Oracle using SQL*Loader (a sketch of a control file follows the steps):
1. Convert the data into a flat file using a third-party database command.
2. Create the table structure in the Oracle database using appropriate data types.
3. Write a control file describing how to interpret the flat file, along with options for loading the data.
4. Execute the SQL*Loader utility, specifying the control file as a command-line argument.
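A minimal sketch of steps 3 and 4; the table, file and column names are hypothetical:

    -- employees.ctl: tells SQL*Loader how to interpret employees.dat
    -- (all names here are placeholders).
    LOAD DATA
    INFILE 'employees.dat'
    INTO TABLE employees
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (
      empno,
      ename,
      hiredate DATE "YYYY-MM-DD",
      sal
    )
    -- Step 4, from the OS shell (credentials are placeholders):
    --   sqlldr userid=scott/tiger control=employees.ctl log=employees.log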