How to load Triggers to Teradata Server using bteq - oracle

We're migrating database from Oracle to Teradata.
We have .sql files with valid trigger DDL and .bteq files with .compile commands for these triggers. But when we run these .bteq files we get errors and the trigger is not loaded.
For example, file td_instrg1.sql contains trigger definition:
CREATE TRIGGER TD_INSTRG1
AFTER INSERT
ON TD_EMPLOYEES
REFERENCING NEW AS X1
FOR EACH ROW
WHEN(X1.id is not null)
BEGIN ATOMIC
(INSERT INTO TD_EMPLOYEES1 VALUES(X1.id, X1.name, X1.monthly_income);)
END;
and file td_instrg1.bteq contains the following commands:
.logon vmdbsrv016/dbc, dbc;
DATABASE twm;
.compile FILE=td_instrg1.sql;
.logoff;
Please advise how to load triggers from scripts using bteq utility.

The .COMPILE command in BTEQ is reserved for compiling Teradata stored procedures. Your DDL statements for the triggers can be executed directly. If you have separate files containing the DDL, you can reference them from within BTEQ using the .RUN command:
.logon vmdbsrv016/dbc, {password};
DATABASE twm;
.RUN FILE=td_instrg1.sql;
.logoff;
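To run the script itself, BTEQ can simply read the command file from standard input. A minimal sketch, assuming bteq is on your PATH and the .bteq/.sql files sit in the current directory:
bteq < td_instrg1.bteq > td_instrg1.out 2>&1
Check td_instrg1.out afterwards for the CREATE TRIGGER success message or any error codes.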

Related

How to pass values into sqlloader - Oracle

I have to prepare few scripts for importing data into the Oracle database, but I will have to run it on different databases.
For each table to be imported I have a data and control file:
table1.dat
table1.ctl
table2.dat
table2.ctl
etc..
For each table I have prepared separate .bat file that runs sqlloader :
table1.bat:
sqlldr login/password@database control=table1.ctl log=table1.log
It is an easy and simple solution as long as I don't have to run it on different databases and change the login credentials.
What I would like to do is have one file with the login and password that runs the loading scripts for each table.
Have you got any suggestions how it could be done?
Regards
Pawel
I hope I understood your question.
In your .bat file you can connect to any database, but your sqlldr login decides on which database the import is started.
I would call a start.sql in the .bat file where I do something like this:
-- database 1
host sqlldr login/password@database1 control=table1.ctl log=table1_db1.log
host sqlldr login/password@database1 control=table2.ctl log=table2_db1.log
-- database 2
host sqlldr login/password@database2 control=table1.ctl log=table1_db2.log
host sqlldr login/password@database2 control=table2.ctl log=table2_db2.log
Another option is to call import_db1.sql from your start file and put the code concerning database 1 there, etc.
start.sql
@@import_db1.sql
@@import_db2.sql
import_db1.sql
-- database 1
host sqlldr login/password@database1 control=table1.ctl log=table1_db1.log data=csvfile.csv
host sqlldr login/password@database1 control=table2.ctl log=table2_db1.log data=csvfile.csv
etc.
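With that layout the .bat file itself reduces to a single SQL*Plus call that runs start.sql; the host sqlldr lines inside the scripts carry their own connect strings. A rough sketch (login, password and database names are placeholders):
sqlplus login/password@database1 @start.sql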
Your issue isn't very clear; however, it sounds like you just want to source the username/password per server, in which case for bash you can do:
. /dir/to/file/.sql_password_file
where sql_password_file has the entry:
SQLLDRLOGON='user/pass'
then in your script you can do
sqlldr userid=$SQLLDRLOGON control=table1.ctl log=table1.log
I would look into changing your script to a loop too, e.g.
for load in table1 table2
do
  loads="control=${load}.ctl bad=${load}.bad log=${load}.log"
  sqlldr $SQLLDRLOGON $loads
done
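Putting the two ideas together, a minimal driver script could look like the sketch below. The path to the password file and the table names are the ones used above; putting the @database part into SQLLDRLOGON is my own assumption so that the same script can point at different servers:
#!/bin/sh
# Pull in SQLLDRLOGON='user/pass@database' from the per-server credentials file
. /dir/to/file/.sql_password_file
# Run one sqlldr load per table, each with its own control, bad and log files
for load in table1 table2
do
  loads="control=${load}.ctl bad=${load}.bad log=${load}.log"
  sqlldr userid=$SQLLDRLOGON $loads
done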

How to do BULK INSERT in Oracle Database

I am trying to do a bulk insert into tables from a CSV file using Oracle 11. My problem is that the database is on a remote machine which I can connect to with sqlplus using this:
sqlplus username@oracle.machineName
Unfortunately the sqlldr has trouble connecting using the following command:
sqlldr userid=userName/PW@machinename control=BULK_LOAD_CSV_DATA.ctl log=sqlldr.log
Error is:
Message 2100 not found; No message file for product=RDBMS, facility=UL
Now, having given up on this approach, I tried writing a basic SQL script, but I am unsure of the proper Oracle keyword for BULK. I know this works in MySQL, but I get:
unknown command beginning "BULK INSER..."
When running the script:
BULK INSERT <TABLE_NAME>
FROM 'CSVFILE.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
I don't care which one works! Either one will do, I just need a little help.
Sorry I am a dumb dumb! I forgot to add oracle/bin to my path!
If you have found this post, add the bin directory to your path (linux) using the following commands:
export ORACLE_HOME=/path/to/oracle/client
export PATH=$PATH:$ORACLE_HOME/bin
Sorry if I wasted anyone's time ....
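Once the PATH is set, it is worth confirming the client tools resolve before re-running the load; a quick sketch reusing the control and log file names from the question:
# Both should now print a path under $ORACLE_HOME/bin
command -v sqlldr
command -v sqlplus
# Re-run the original load
sqlldr userid=userName/PW@machinename control=BULK_LOAD_CSV_DATA.ctl log=sqlldr.log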

Oracle dump file table data extraction to file (original exp format)

I have Oracle dump files created with original exp (not expdp) (EXPORT:V10.02.01, Oracle 10g). They contain only table data for four tables.
1) I want to extract the table data into files (flat/fixed-width, CSV, or other text file) without importing them into another Oracle DB. [preferred]
2) Alternatively, I need a solution that can import them into an ordinary user (not SYSDBA) so that I can use other tools to extract the data.
My databases are 11g, but I can find 10g databases if needed. I have TOAD for Oracle Xpert 11.6.1.6 at my disposal. I am a moderately experienced Oracle programmer, but I haven't worked with EXP/IMP before.
(The information below has been obscured to protect the data.)
Here's how the dump files were created:
exp FILE=data.dmp \
LOG=data.log \
TABLES=USER1.TABLE1,USER1.TABLE2,USER1.TABLE3,USER1.TABLE4 \
INDEXES=N TRIGGERS=N CONSTRAINTS=N GRANTS=N
Here's the log:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Note: grants on tables/views/sequences/roles will not be exported
Note: indexes on tables will not be exported
Note: constraints on tables will not be exported
About to export specified tables via Conventional Path ...
Current user changed to USER1
. . exporting table TABLE1 271 rows exported
. . exporting table TABLE2 272088 rows exported
. . exporting table TABLE3 2770 rows exported
. . exporting table TABLE4 21041 rows exported
Export terminated successfully without warnings.
Thank you in advance.
UPDATE:
TOAD version 9.7.2 will read a "dmp" file generated by EXP.
Select DATABASE -> EXPORT -> EXPORT FILE BROWSER from the menus.
You need to have the DBA utilities for TOAD installed. There is no real guarantee that the file is parsed
correctly, but the data will show up in TOAD in the schema browser.
NOTE: The only other known utility that will read a dmp file generated by the exp utility is the imp utility. You cannot reliably read the dump file yourself; if you do, you run the risk of parsing the file incorrectly.
If you already have the data in an ORACLE table:
To extract the table data into a file, create a shell script that calls SQL*PLUS and causes SQL*PLUS to spool the table data to a file. You need one script per table.
#!/bin/sh
#NOTE: The path to sqlplus will vary on your system,
# but it is generally $ORACLE_HOME/bin/sqlplus.
#YOU NEED TO UNCOMMENT THESE LINES AND SET APPROPRIATELY.
#export ORACLE_SID=YOUR_SID
#export ORACLE_HOME=PATH_TO_YOUR_ORACLE_HOME
#export PATH=$PATH:$ORACLE_HOME/bin
sqlplus -s user/pwd@db << EOF
set pagesize 0
set linesize 255
set heading off
set echo off
SPOOL TABLE1_DATA.txt
REM FOR EACH COLUMN IN TABLE1, SET THE FORMAT
COL FIELD_ID format 999,999,999
COL FIELD_DATA format a99
select FIELD_ID,FIELD_DATA from TABLE1;
SPOOL OFF
EOF
Make sure you set the line size of each line and set the format of each column. See FIELD_ID above for a number format column and FIELD_DATA for a character column.
NOTE: You need to remove the "N rows selected" from the end of the file.
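If you want to script that cleanup, a rough sketch assuming GNU sed and the spool file name used above:
# Drop the trailing "N rows selected." feedback line from the spooled file
sed -i '/rows selected/d' TABLE1_DATA.txt
Alternatively, adding set feedback off to the SQL*Plus settings above stops that line from being written in the first place.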
(You can still import the file you created into another schema using the imp utility.)

Checking the table existence and loading the data into HBase and Hive tables

I have data in HDFS, and I want to load that data into HBase and Hive tables.
I have written a bash shell script that runs a Pig script to load the data from HDFS into HBase and a Hive script to load the data from HDFS into a Hive table, and both are working perfectly fine. My HDFS data files all have the same structure, and I'm loading all of them into a single HBase table and a single Hive table.
Now my question is: suppose I receive more data files in the HDFS directory and run the shell script again; it will try to create the HBase and Hive tables again with the same names and report that the tables already exist. How can I write the Hive and HBase commands so that on the first run they check for table existence, create the tables and load the data from HDFS, and on later runs just insert the data into the existing HBase and Hive tables? It should not overwrite the data that already exists in the tables.
How this can be done ?
Below is my script file: myScript.sh
echo "create 'goodtable','gt'" | hbase shell
pig -f a.pig -param input=/user/user/d/
hive -f h.hql
Where a.pig :
G = LOAD '$input' USING PigStorage(',') as (c1:chararray, c2:chararray,c3:chararray,c4:chararray,c5:chararray);
STORE G INTO 'hbase://goodtable' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('gt:name gt:state gt:phone_no gt:gender');
h.hql:
create external table hive_table(
id int,
name string,
state string,
phone_no int,
gender string) row format delimited fields terminated by ',' stored as textfile;
LOAD DATA INPATH '/user/user/d/' INTO TABLE hive_table;
I just wanted to add an example for HBase as Hive was already covered before:
if [[ $(echo "exists 'goodtable'" | hbase shell | grep 'not exist') ]];
then
echo "create 'goodtable','gt'" | hbase shell;
fi
For HIVE, you can add the command IF NOT EXISTS in the CREATE TABLE statement. See the documentation
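For example, the h.hql from the question could be made re-runnable by adding IF NOT EXISTS to the same DDL; a sketch wrapped in hive -e (columns and paths unchanged from the question):
hive -e "
create external table if not exists hive_table(
id int,
name string,
state string,
phone_no int,
gender string) row format delimited fields terminated by ',' stored as textfile;
LOAD DATA INPATH '/user/user/d/' INTO TABLE hive_table;
"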
I don't have much experience with HBase, but I believe you can use the exists 'table_name' command to check whether the table exists and then create the table if it doesn't exist. See here
@visakh is correct - you can see if a table exists in HBase by entering the HBase shell and typing: exists '<tablename>'
In order to do this without entering the HBase shell interactively, you can create a simple ruby script such as the following:
exists 'mytable'
exit
Let's say you save this to a file called tabletest.rb. You can then execute this script by calling hbase shell tabletest.rb. This will create the following output, which you can then parse from your shell script:
Table tableisthere does exist
0 row(s) in 0.9830 seconds
OR
Table tableisNOTthere does not exist
0 row(s) in 0.9830 seconds
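A sketch of how the calling shell script might parse that output, assuming tabletest.rb checks the goodtable table from the question:
# Create the table (with the gt column family) only when the check says it is missing
if hbase shell tabletest.rb | grep -q 'does not exist'
then
  echo "create 'goodtable','gt'" | hbase shell
fi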
Adding more details for 'all in one' script:
Alternatively, you can create a more advanced script in ruby that checks for table existence and then creates it if needed - this is done by calling the HBaseAdmin Java API from within the ruby script.
# Assumes the script is run through the HBase shell (e.g. hbase shell check_table.rb),
# so the HBase JRuby bindings are on the classpath.
java_import org.apache.hadoop.hbase.HBaseConfiguration
java_import org.apache.hadoop.hbase.client.HBaseAdmin
conf = HBaseConfiguration.create
hbaseAdmin = HBaseAdmin.new(conf)
if !hbaseAdmin.tableExists('mytable')
  hbaseAdmin.createTable('mytable',...)
end
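This script can be run non-interactively the same way as the simpler check, e.g. hbase shell check_table.rb (the file name is just an example).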

How to import an Oracle dump into a different tablespace

I want to import an oracle dump into a different tablespace.
I have a tablespace A used by User A. I've revoked DBA on this user and given him the grants connect and resource. Then I've dumped everything with the command
exp a/*** owner=a file=oracledump.DMP log=log.log compress=y
Now I want to import the dump into the tablespace B used by User B. So I've given him the grants on connect and resource (no DBA). Then I've executed the following import:
imp b/*** file=oracledump.DMP log=import.log fromuser=a touser=b
The result is a log with lots of errors:
IMP-00017: following statement failed with ORACLE error 20001: "BEGIN DBMS_STATS.SET_TABLE_STATS
IMP-00003: ORACLE error 20001 encountered
ORA-20001: Invalid or inconsistent input values
After that, I've tried the same import command but with the option statistics=none. This resulted in the following errors:
ORA-00959: tablespace 'A_TBLSPACE' does not exist
How should this be done?
Note: a lot of columns are of type CLOB. It looks like the problems have something to do with that.
Note2: The oracle versions are a mixture of 9.2, 10.1, and 10.1 XE. But I don't think it has to do with versions.
You've got a couple of issues here.
Firstly, the different versions of Oracle you're using are the reason for the table statistics error - I had the same issue when some of our Oracle 10g databases were upgraded to Release 2 while others were still on Release 1, and I was swapping .DMP files between them.
The solution that worked for me was to use the same version of exp and imp tools to do the exporting and importing on the different Database instances. This was easiest to do by using the same PC (or Oracle Server) to issue all of the exporting and importing commands.
Secondly, I suspect you're getting the ORA-00959: tablespace 'A_TBLSPACE' does not exist because you're trying to import a .DMP file from a full-blown Oracle Database into the 10g Express Edition (XE) Database, which, by default, creates a single, predefined tablespace called USERS for you.
If that's the case, then you'll need to do the following..
With your .DMP file, create a SQL file containing the structure (Tables):
imp <xe_username>/<password>@XE file=<filename.dmp> indexfile=index.sql full=y
Open the indexfile (index.sql) in a text editor that can do find and replace over an entire file, and issue the following find and replace statements IN ORDER (ignore the single quotes):
Find: 'REM<space>' Replace: <nothing>
Find: '"<source_tablespace>"' Replace: '"USERS"'
Find: '...' Replace: 'REM ...'
Find: 'CONNECT' Replace: 'REM CONNECT'
Save the indexfile, then run it against your Oracle Express Edition account (I find it's best to create a new, blank XE user account - or drop and recreate if I'm refreshing):
sqlplus <xe_username>/<password>@XE @index.sql
Finally run the same .DMP file you created the indexfile with against the same account to import the data, stored procedures, views etc:
imp <xe_username>/<password>@XE file=<filename.dmp> fromuser=<original_username> touser=<xe_username> ignore=y
You may get pages of Oracle errors when trying to create certain objects such as Database Jobs as Oracle will try to use the same Database Identifier, which will most likely fail as you're on a different Database.
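If you would rather script those four edits than do them by hand, a rough sed sketch (assuming GNU sed, and using A_TBLSPACE from the error above as the source tablespace name):
# Apply the four find/replace steps from above, in order, to index.sql
sed -i \
  -e 's/^REM //' \
  -e 's/"A_TBLSPACE"/"USERS"/g' \
  -e 's/^\.\.\./REM .../' \
  -e 's/^CONNECT/REM CONNECT/' \
  index.sql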
If you're using Oracle 10g and datapump, you can use the REMAP_TABLESPACE clause. example:
REMAP_TABLESPACE=A_TBLSPACE:NEW_TABLESPACE_GOES_HERE
For me this worked OK (Oracle Database 10g Express Edition Release 10.2.0.1.0):
impdp B/B full=Y dumpfile=DUMP.dmp REMAP_TABLESPACE=OLD_TABLESPACE:USERS
But for a new restore you need a new tablespace.
P.S. Maybe useful http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
What version of Oracle are you using? If it's 10g or greater, you should look at using Data Pump instead of import/export anyway. I'm not 100% sure if it can handle this scenario, but I would expect it could.
Data Pump is the replacement for exp/imp for 10g and above. It works very similarly to exp/imp, except it's (supposedly; I don't use it since I'm stuck in 9i land) better.
Here are the Data Pump docs
The problem has to do with the CLOB columns. It seems that the imp tool cannot rewrite the create statement to use another tablespace.
Source: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:66890284723848
The solution is:
Create the schema by hand in the correct tablespace. If you do not have a script to create the schema, you can generate one by using the indexfile= option of the imp tool.
You do have to disable all constraints yourself; the Oracle imp tool will not disable them.
After that you can import the data with the following command:
imp b/*** file=oracledump.dmp log=import.log fromuser=a touser=b statistics=none ignore=y
Note: I still needed the statistics=none due to other errors.
Extra info about Data Pump:
As of Oracle 10g the import/export tooling is improved by the Data Pump tool (http://www.oracle-base.com/articles/10g/OracleDataPump10g.php).
Using this to re-import the data into a new tablespace:
First create a directory for the temporary dump:
CREATE OR REPLACE DIRECTORY tempdump AS '/temp/tempdump/';
GRANT READ, WRITE ON DIRECTORY tempdump TO a;
Export:
expdp a/* schemas=a directory=tempdump dumpfile=adump.dmp logfile=adump.log
Import:
impdp b/* directory=tempdump dumpfile=adump.dmp logfile=bdump.log REMAP_SCHEMA=a:b
Note: the dump files are stored and read from the server disk, not from the local (client) disk
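If the data also has to land in a different tablespace on the B side, the same impdp call can take REMAP_TABLESPACE alongside REMAP_SCHEMA, for example (the tablespace names here are placeholders):
impdp b/* directory=tempdump dumpfile=adump.dmp logfile=bdump.log REMAP_SCHEMA=a:b REMAP_TABLESPACE=a_tblspace:b_tblspace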
My solution is to use the GSAR utility to replace the tablespace name in the dump file. When you do the replace, make sure that the size of the dump file stays unchanged by padding with spaces.
E.g.
gsar -f -s"TSDAT_OV101" -r"USERS " rm_schema.dump rm_schema.n.dump
gsar -f -s"TABLESPACE """USERS """ ENABLE STORAGE IN ROW CHUNK 8192 RETENTION" -r" " rm_schema.n1.dump rm_schema.n.dump
gsar -f -s"TABLESPACE """USERS """ LOGGING" -r" " rm_schema.n1.dump rm_schema.n.dump
gsar -f -s"TABLESPACE """USERS """ " -r" " rm_schema.n.dump rm_schema.n1.dump
I want to extend this for two users in different tablespaces on different servers (databases).
1.
First create directories for the temporary dump on both servers (databases):
server #1:
CREATE OR REPLACE DIRECTORY tempdump AS '/temp/old_datapump/';
GRANT READ, WRITE ON DIRECTORY tempdump TO old_user;
server #2:
CREATE OR REPLACE DIRECTORY tempdump AS '/temp/new_datapump/';
GRANT READ, WRITE ON DIRECTORY tempdump TO new_user;
2.
Export (server #1):
expdp tables=old_user.table directory=tempdump dumpfile=adump.dmp logfile=adump.log
3.
Import (server #2):
impdp directory=tempdump dumpfile=adump_table.dmp logfile=bdump_table.log
REMAP_TABLESPACE=old_tablespace:new_tablespace REMAP_SCHEMA=old_user:new_user
The answer is difficult, but doable:
Situation is: user A and tablespace X
import your dump file into a different database (this is only necessary if you need to keep a copy of the original one)
rename tablespace
alter tablespace X rename to Y
create a directory for the expdp command and grant rights
create a dump with expdp
remove the old user and old tablespace (Y)
create the new tablespace (Y)
create the new user (with a new name) - in this case B - and grant rights (also to the directory created with step 3)
import the dump with impdp
impdp B/B directory=DIR dumpfile=DUMPFILE.dmp logfile=LOGFILE.log REMAP_SCHEMA=A:B
and that's it...
Because I wanted to import (to Oracle 12.1|2) a dump that was exported from a local development database (18c XE), and I knew that all my target databases would have an accessible tablespace called DATABASE_TABLESPACE, I just created my schema/user to use a new tablespace of that name instead of the default USERS (to which I have no access on the target databases):
-- don't care about the details
CREATE TABLESPACE DATABASE_TABLESPACE
DATAFILE 'DATABASE_TABLESPACE.dat'
SIZE 10M
REUSE
AUTOEXTEND ON NEXT 10M MAXSIZE 200M;
ALTER DATABASE DEFAULT TABLESPACE DATABASE_TABLESPACE;
CREATE USER username
IDENTIFIED BY userpassword
CONTAINER=all;
GRANT create session TO username;
GRANT create table TO username;
GRANT create view TO username;
GRANT create any trigger TO username;
GRANT create any procedure TO username;
GRANT create sequence TO username;
GRANT create synonym TO username;
GRANT UNLIMITED TABLESPACE TO username;
An exp created from this makes imp happy on my target.
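For completeness, a rough sketch of the exp/imp pair this setup is meant to feed; the connect strings and file names are placeholders, and the classic exp/imp tools must still be available on both ends:
exp username/userpassword@devdb owner=username file=app_schema.dmp log=app_schema_exp.log
imp username/userpassword@targetdb fromuser=username touser=username file=app_schema.dmp log=app_schema_imp.log ignore=y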
---Create new tablespace:
CREATE TABLESPACE TABLESPACENAME DATAFILE
'D:\ORACL\ORADATA\XE\TABLESPACEFILENAME.DBF' SIZE 350M AUTOEXTEND ON NEXT 2500M MAXSIZE UNLIMITED
LOGGING
PERMANENT
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT MANUAL
FLASHBACK ON;
---then create the user with the new tablespace as its default before importing:
CREATE USER BVUSER IDENTIFIED BY VALUES 'bvuser' DEFAULT TABLESPACE TABLESPACENAME
-- where D:\ORACL is the path of the Oracle installation
