How to format an Oracle connection string and output query results to a file

I am trying to connect to an Oracle database, query it, and send the results to a .txt file. When I run my statement, the .txt file shows unexpected output (a screenshot in the original post) instead of the values from my SQL script.
Here is the command I am running:
sql_file1=Cb.sql
sqlplus -s "username/pwd#(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=my_host)(Port=1521))(CONNECT_DATA=(SERVICE_NAME=my_ser_name))))" #sql/$sql_file1 > /home/path/to/my/files/'cb.txt'
Any reason why my cb.txt file shows that output instead of any data from the query inside my SQL file?

You have an extra ) in your connection string:
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=my_host)(Port=1521))(CONNECT_DATA=(SERVICE_NAME=my_ser_name))))
should be
sqlplus -s "username/pwd@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=my_host)(Port=1521))(CONNECT_DATA=(SERVICE_NAME=my_ser_name)))" @sql/$sql_file1 > /home/path/to/my/files/cb.txt
But it is even easier to use an EZConnect string:
sqlplus -s "username/pwd@//my_host:1521/my_ser_name" @sql/$sql_file1
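If the output file then contains column headers, feedback lines, or prompts rather than bare values, the usual fix is to silence SQL*Plus inside the script itself. A minimal sketch, assuming Cb.sql is the script from the question:
-- at the top of Cb.sql
SET PAGESIZE 0
SET FEEDBACK OFF
SET HEADING OFF
SET ECHO OFF
With those settings in place, the redirected cb.txt should contain only the rows your query returns.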

Related

Hive query in Shell Script

I have an external hive table on top of a parquet file.
CREATE EXTERNAL TABLE parquet_test LIKE avro_test STORED AS PARQUET LOCATION 'hdfs://myParquetFilesPath';
I want to get the count of table using shell script.
I tried the following command:
myVar=$(hive -S -e " select count(*) from parquet_test;")
echo $myVar
I added -S to run Hive in silent mode, but I still get the whole MapReduce log along with the count in the myVar variable. How do I get only the count?
I don't have access to any of the configuration files to enable or disable the level of logging. Is there any other way?
Finally found a workaround.
First flush the query result into a file in HDFS, then read the answer back from the file.
The file only contains the result of the query.
hive -S -e "INSERT OVERWRITE LOCAL DIRECTORY '/home/test/result/'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' select count(*) from parquet_test;"
Then read the file into a variable:
var=$(hdfs dfs -tail /home/test/result/)
echo $var
Thank you
myVar=$(eval "hive -S -e 'select count(*) from parquet_test;' ")
echo $myVar
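A lighter-weight alternative, assuming the MapReduce progress messages are written to stderr (as they typically are for the Hive CLI), is to discard stderr so the variable captures only the query result:
# stderr carries the job log; stdout carries the count
myVar=$(hive -S -e "select count(*) from parquet_test;" 2>/dev/null)
echo "$myVar"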

Single command from batch file with sqlplus not working in some cases

I have the following command in a batch file.
set tableName=%1
echo select count(1) from %tableName% where to_char(DATEVALUE,'yyyy-mm-dd hh24:mi:ss')^>(select to_char(max(DATEVALUE),'yyyy-mm-dd hh24:mi:ss') from FOO_TABLE); | sqlplus !connectionString!
This statement doesn't work. I can see that it connects to the database and then disconnects. But the following works:
echo select count(1) from %tableName% where to_char(DATEVALUE,'yyyy-mm-dd hh24:mi:ss')=(select to_char(max(DATEVALUE),'yyyy-mm-dd hh24:mi:ss') from FOO_TABLE); | sqlplus !connectionString!
I am guessing the problem is with the greater-than > symbol. I tried ^>, >, and \>. None of them works. How can I get this SQL statement to work?
(I have connectionString already set in my batch file in earlier lines).
The output on the command line is:
Connected to:
Oracle Database ... (more db info)
SQL> Disconnected from Oracle Database ... (more db info)
It looks like you need to escape the ^ escape character as well; depending on exactly how you're running this, either:
... where to_char(DATEVALUE,'yyyy-mm-dd hh24:mi:ss')^^>(select ...
or
... where to_char(DATEVALUE,'yyyy-mm-dd hh24:mi:ss')^^^>(select ...
In a batch file where the query is echoed and piped, the triple escape works:
@setlocal EnableDelayedExpansion
@set connectionString=x/y@z
@set tableName=bar
@echo select count(1) from %tableName% where to_char(DATEVALUE,'yyyy-mm-dd hh24:mi:ss')^^^>(select to_char(max(DATEVALUE),'yyyy-mm-dd hh24:mi:ss') from FOO_TABLE); | sqlplus !connectionString!
Running that batch script shows the statement being run (and erroring in my case with ORA-00942, which is expected). With a single or double ^ it has nothing to run at the SQL prompt and a file is created instead, which seems to be what you're seeing.
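An alternative sketch (not from the original answer) that sidesteps most of the escaping: write the statement to a temporary script and have sqlplus run it. The echo line is parsed only once, with no pipe, so a single ^> suffices; query.sql is a made-up file name:
@setlocal EnableDelayedExpansion
@set tableName=bar
@rem one ^> is enough here because the line is not re-parsed through a pipe
@echo select count(1) from %tableName% where to_char(DATEVALUE,'yyyy-mm-dd hh24:mi:ss')^>(select to_char(max(DATEVALUE),'yyyy-mm-dd hh24:mi:ss') from FOO_TABLE); > query.sql
@echo exit >> query.sql
@sqlplus !connectionString! @query.sql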

How to do BULK INSERT in Oracle Database

I am trying to do a bulk insert into tables from a CSV file using Oracle 11. My problem is that the database is on a remote machine which I can reach with sqlplus using this:
sqlplus username@oracle.machineName
Unfortunately sqlldr has trouble connecting with the following command:
sqlldr userid=userName/PW@machinename control=BULK_LOAD_CSV_DATA.ctl log=sqlldr.log
Error is:
Message 2100 not found; No message file for product=RDBMS, facility=UL
Having given up on this approach, I tried writing a basic SQL script, but I am unsure of the proper Oracle keyword for BULK. I know this works in MySQL, but I get:
unknown command beginning "BULK INSER..."
When running the script:
BULK INSERT <TABLE_NAME>
FROM 'CSVFILE.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
I don't care which one works! Either one will do, I just need a little help.
Sorry, I am a dumb dumb! I forgot to add oracle/bin to my path!
If you have found this post, add the bin directory to your PATH (Linux) using the following commands:
export ORACLE_HOME=/path/to/oracle/client
export PATH=$PATH:$ORACLE_HOME/bin
Sorry if I wasted anyone's time ....
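For anyone who lands here looking for the Oracle-side equivalent of BULK INSERT: SQL*Loader is the usual tool, and the control file referenced in the question might look roughly like this sketch (table and column names are placeholders, not from the original post):
-- BULK_LOAD_CSV_DATA.ctl
LOAD DATA
INFILE 'CSVFILE.csv'
APPEND
INTO TABLE my_table
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(col1, col2, col3)
It is then run exactly as in the question: sqlldr userid=userName/PW@machinename control=BULK_LOAD_CSV_DATA.ctl log=sqlldr.log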

Attempting to connect to an Oracle database from a shell script

I am trying to connect to an Oracle database from a shell script (I am a new user). The script will then pass a query and transfer the result to a variable called canadacount. I have written the code below, but it does not work.
#This script will attempt to connect to a remote database CFQ143 with user ID 'userid' and password 'password'.
#After logging in it will read data from the PLATFORMSPECIFIC table.
#We can pass a query: select count (platform) from platformspecific where platform='CANADA';
#The result from this query will be passed to a variable called canadacount which we can then echo back to the user.
canadacount=$($ORACLE_HOME/bin/sqlplus -s /nolog <<EOF
connect userid/password@CFQ143:1521:CFQ143
set pages 0 feed off
select count (platform) from platformspecific where platform='CANADA';
exit
EOF
)
echo $canadacount
The answer: I changed the connect line to the following:
connect userid/password@CFQ143
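Putting the fix together, a minimal working sketch (credentials, host, and table are the placeholders from the question; the closing EOF must start at the beginning of its line):
canadacount=$($ORACLE_HOME/bin/sqlplus -s /nolog <<EOF
connect userid/password@CFQ143
set pages 0 feed off
select count (platform) from platformspecific where platform='CANADA';
exit
EOF
)
echo "$canadacount"
Note that CFQ143 must resolve via tnsnames.ora; an EZConnect string such as userid/password@//db_host:1521/CFQ143 (db_host being a placeholder) avoids the need for one.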

How to determine the Schemas inside an Oracle Data Pump Export file

I have an Oracle database backup file (.dmp) that was created with expdp.
The .dmp file was an export of an entire database.
I need to restore 1 of the schemas from within this dump file.
I don't know the names of the schemas inside this dump file.
To use impdp to import the data I need the name of the schema to load.
So I need to inspect the .dmp file and list all of the schemas in it. How do I do that?
Update (2008-09-18 13:02) - More detailed information:
The impdp command I'm currently using is:
impdp user/password@database directory=DPUMP_DIR
dumpfile=EXPORT.DMP logfile=IMPORT.LOG
And the DPUMP_DIR is correctly configured.
SQL> SELECT directory_path
2 FROM dba_directories
3 WHERE directory_name = 'DPUMP_DIR';
DIRECTORY_PATH
-------------------------
D:\directory_path\dpump_dir\
And yes, the EXPORT.DMP file is in fact in that folder.
The error message I get when I run the impdp command is:
Connected to: Oracle Database 10g Enterprise Edition ...
ORA-31655: no data or metadata objects selected for the job
ORA-39154: Objects from foreign schemas have been removed from import
This error message is mostly expected. I need the impdp command to be:
impdp user/password@database directory=DPUMP_DIR dumpfile=EXPORT.DMP
SCHEMAS=SOURCE_SCHEMA REMAP_SCHEMA=SOURCE_SCHEMA:MY_SCHEMA
But to do that, I need the source schema.
impdp can export the DDL of a dmp backup to a file if you use the SQLFILE parameter. For example, put this into a text file:
impdp '/ as sysdba' dumpfile=<your .dmp file> logfile=import_log.txt sqlfile=ddl_dump.txt
Then check ddl_dump.txt for the tablespaces, users, and schemas in the backup.
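A quick scan of the generated file then surfaces the relevant lines, for example:
grep -iE 'CREATE (USER|TABLESPACE)' ddl_dump.txt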
According to the documentation, this does not actually modify the database:
The SQL is not actually executed, and the target system remains unchanged.
If you open the DMP file with an editor that can handle big files, you might be able to locate the areas where the schema names are mentioned. Just be sure not to change anything. It would be better if you opened a copy of the original dump.
Update (2008-09-19 10:05) - Solution:
My solution: social engineering. I dug real hard and found someone who knew the schema name.
Technical Solution: Searching the .dmp file did yield the schema name.
Once I knew the schema name, I searched the dump file and learned where to find it.
Places the schema name was seen in the .dmp file:
<OWNER_NAME>SOURCE_SCHEMA</OWNER_NAME>
This was seen before each table name/definition.
SCHEMA_LIST 'SOURCE_SCHEMA'
This was seen near the end of the .dmp.
Interestingly enough, around the SCHEMA_LIST 'SOURCE_SCHEMA' section, it also had the command line used to create the dump, directories used, par files used, windows version it was run on, and export session settings (language, date formats).
So, problem solved :)
Assuming that you do not have the log file from the expdp job that generated the file in the first place, the easiest option would probably be to use the SQLFILE parameter to have impdp generate a file of DDL (based on a full import). Then you can grab the schema names from that file. Not ideal, of course, since impdp has to read the entire dump file to extract the DDL and then again to get to the schema you're interested in, and you have to do a bit of text file searching for the various CREATE USER statements, but it should be doable.
To run the impdp command to produce a SQL file, you will need to run it as a user which has the DATAPUMP_IMP_FULL_DATABASE role.
Or... run it as a low-privileged user and use the MASTER_ONLY=YES option, then inspect the master table. For example:
select value_t
from SYS_IMPORT_TABLE_01
where name = 'CLIENT_COMMAND'
and process_order = -59;
col object_name for a30
col processing_status head STATUS for a6
col processing_state head STATE for a5
select distinct
object_schema,
object_name,
object_type,
object_tablespace,
process_order,
duplicate,
processing_status,
processing_state
from sys_import_table_01
where process_order > 0
and object_name is not null
order by object_schema, object_name
/
http://download.oracle.com/otndocs/products/database/enterprise_edition/utilities/pdf/oow2011_dp_mastering.pdf
Step 1: Here is one simple example. You have to create a SQL file from the dump file using the SQLFILE option.
Step 2: Grep for CREATE USER in the generated SQL file (here tables.sql).
Example here:
$ impdp directory=exp_dir dumpfile=exp_user1_all_tab.dmp logfile=imp_exp_user1_tab sqlfile=tables.sql
Import: Release 11.2.0.3.0 - Production on Fri Apr 26 08:29:06 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at 08:29:12
$ grep "CREATE USER" tables.sql
CREATE USER "USER1" IDENTIFIED BY VALUES 'S:270D559F9B97C05EA50F78507CD6EAC6AD63969E5E;BBE7786A5F9103'
Lots of Data Pump options are explained here: http://www.acehints.com/p/site-map.html
You need to search for OWNER_NAME.
cat -v dumpfile.dmp | grep -o '<OWNER_NAME>.*</OWNER_NAME>' | uniq -u
cat -v turns the dump file into visible text.
grep -o shows only the match, so we don't see really long lines.
uniq -u removes duplicate lines, so you see less output.
This works pretty well, even on large dump files, and could be tweaked for usage in a script.
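As a sketch of that scripted form (the function name is made up for illustration; sort -u replaces uniq -u, since uniq -u silently drops any owner that appears more than once instead of collapsing it to a single line):
# list the distinct schema owners found in a Data Pump dump file
dump_schemas() {
  cat -v "$1" | grep -o '<OWNER_NAME>[^<]*</OWNER_NAME>' | sort -u
}
dump_schemas EXPORT.DMP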
My solution (similar to KyleLanser's answer) (on a Unix box):
strings dumpfile.dmp | grep SCHEMA_LIST
In my case, based on Aldur's and slafs' answers, I came up with this expression, which should tell you just the name of the original schema:
cat -v file.dmp | grep 'SCHEMA_LIST' | uniq -u | grep -o -P '(?<=SCHEMAS\=).*(?=content)'
Tested on a DMP file from Oracle version 19.8.
