Oracle SELECT INTO OUTFILE, what's wrong with this query?

This is the query that I am executing from sqlplus:
select * into outfile 'my_file.txt'
fields terminated by '\t' lines terminated by '\n'
from my_table where my_column = 'stuff';
I get the following error:
FROM keyword not found where expected
What am I doing wrong?
P.S. I know that there are other ways to flush the output to file but I really want to win this against Oracle...

SELECT ... INTO OUTFILE is MySQL-specific syntax. It won't work on other DBMSs such as Oracle.
In Oracle you would surround the statement with SPOOL filename...SPOOL OFF.
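For example, a minimal SPOOL script (the SET options are optional niceties that keep headers, feedback and pagination out of the file, not requirements):
-- suppress column headers, row-count feedback and page breaks
SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SET TRIMSPOOL ON
SPOOL my_file.txt
select * from my_table where my_column = 'stuff';
SPOOL OFF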

Related

Error while compiling statement: FAILED: ParseException line 1:14 cannot recognize input near ''default'' '.' ''sales_withcomma'' in join source

I am running the below commands in Hive and have already imported the table 'sales_withcomma', however it is still not working:
SELECT * FROM 'default'.'sales_withcomma'
ALTER TABLE sales_withcomma SET SERDE 'com.bizo.hive.serde.csv.CSVSerde'
In Hive, single (and double) quotes create string literals, not identifiers, so 'default'.'sales_withcomma' is not a table reference. Better still, drop the quoting entirely, since quoting table names needlessly is bad practice anyway. This should work:
SELECT * FROM default.sales_withcomma;
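If a name genuinely does need quoting (a reserved word, for instance), Hive's identifier quote character is the backtick:
SELECT * FROM `default`.`sales_withcomma`;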

How to create a unix script to loop a Hive SELECT query by taking table names as input from a file?

It's pretty straightforward what I'm trying to do. I just need to count the records in multiple Hive tables.
I want to create a very simple hql script that takes a file.txt with table names as input and counts the total number of records in each of them:
SELECT COUNT(*) from <tablename>
Output should be like:
table1 count1
table2 count2
table3 count3
I'm new to Hive and not very well versed in Unix scripting, and I'm unable to figure out how to create a script to perform this.
Can someone please help me in doing this? Thanks in advance.
Simple working shell script:
db=mydb
for table in $(hive -S -e "use $db; show tables;")
do
  # echo "$table"   # uncomment to print each table name while debugging
  hive -S -e "use $db; select '$table' as table_name, count(*) as cnt from $table;"
done
You can improve this script by generating a file of SELECT commands, or even a single SELECT with UNION ALL, and then executing that file instead of calling Hive once per table.
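A sketch of the single-statement variant (note: on older Hive versions a top-level UNION ALL may need to be wrapped in a subquery):
db=mydb
query=""
for table in $(hive -S -e "use $db; show tables;")
do
  # chain the per-table counts together with UNION ALL
  [ -n "$query" ] && query="$query union all "
  query="$query select '$table' as table_name, count(*) as cnt from $table"
done
# one Hive session instead of one per table
hive -S -e "use $db; $query"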
If you want to read table names from a file instead, use a while/read loop:
while read -r table
do
...
done < filename

Error while exporting the results of a HiveQL query to CSV?

I am a beginner in Hadoop/Hive. I did some research to find a way to export the results of a HiveQL query to CSV.
I am running the below command line in PuTTY:
Hive -e ‘use smartsourcing_analytics_prod; select * from solution_archive_data limit 10;’ > /home/temp.csv;
However below is the error I am getting
ParseException line 1:0 cannot recognize input near 'Hive' '-' 'e'
I would appreciate inputs regarding this.
Run your command from outside the Hive shell - just from the Linux shell.
Run it with 'hive' instead of 'Hive'; command names on Linux are case-sensitive.
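Note that the posted command also uses curly "smart" quotes, which the shell does not recognize as quoting. The corrected invocation, run from the Linux shell:
hive -e 'use smartsourcing_analytics_prod; select * from solution_archive_data limit 10;' > /home/temp.csv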
Just redirecting your output into a CSV file won't work, because Hive's default output delimiter is a tab rather than a comma. You can do:
hive -e 'YOUR QUERY HERE' | sed 's/[\t]/,/g' > sample.csv
as was offered here: How to export a Hive table into a CSV file?
AkashNegi's answer will also work for you, though it is a bit longer.
One way I do such things is to create an external table with the schema you want. Then do INSERT INTO TABLE target_table ... Look at the example below:
CREATE EXTERNAL TABLE isvaliddomainoutput (email_domain STRING, `count` BIGINT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ","
STORED AS TEXTFILE
LOCATION "/user/cloudera/am/member_email/isvaliddomain";
INSERT INTO TABLE isvaliddomainoutput
SELECT * FROM member_email WHERE isvalid = 1;
Now go to "/user/cloudera/am/member_email/isvaliddomain" and find your data.
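The INSERT leaves one or more delimited files in that directory; if you want them merged into a single local file, hdfs dfs -getmerge does that (the local file name here is illustrative):
hdfs dfs -getmerge /user/cloudera/am/member_email/isvaliddomain isvaliddomain.csv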
Hope this helps.

Passing date as command line arguments in Hive

I have the below query in the test1.hql file. I am trying to pass the date (dt) as a command-line argument.
select * from lip_data_quality where dt = '${hiveconf: start_date}';
So whenever I try to run the above test1.hql file from the shell prompt like this:
hive -f hivetest1.hql -hiveconf start_date=20120709
I get zero records back, but the data is there in that table for that particular date. Why is that? Am I doing something wrong?
Can anyone help me out here? I was following Bejoy's article.
I am working on Hive 0.6.
Eliminate the space between hiveconf: and start_date.
This may only be for string types, but Hive is picky in this respect.
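With the space removed, the query in test1.hql becomes:
select * from lip_data_quality where dt = '${hiveconf:start_date}';
and the invocation from the question works unchanged:
hive -f test1.hql -hiveconf start_date=20120709
(The question names the file test1.hql in one place and hivetest1.hql in another; use whichever name the file actually has.)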

Error when trying to load data from Unix script

When I try to execute the SQL script from the Unix server it shows an error, but the same SQL works fine when I run it from SQL Navigator. Kindly help me with it.
INSERT INTO t_csocstudent_course_local
  (SELECT tsct.student_id,
          tsct.object_lookup_id,
          tsct.course_id,
          tsct.xcourse_id,
          clt.NAME,
          tsct.course_type,
   FROM temp_stud_course tsct join course_local clt
     on tsct.COURSE_ID = clt.COURSE_ID

   WHERE TO_CHAR (sc_timestamp, 'YYYYMMDDHH24MISS') >
         (SELECT TO_CHAR (MAX (sc_timestamp), 'YYYYMMDDHH24MISS')
            FROM t_student_course_local)
     AND tsct.xcourse_id IN
         ('EX1','EX2'));
Error :
Error in loading main table
Enter password:
SP2-0734: unknown command beginning "WHERE TO..." - rest of line ignored.
AND tsct.xcourse_id IN
*
ERROR at line 3:
ORA-00933: SQL command not properly ended
Thanks in advance!
The Oracle command-line client does not permit blank lines inside a SQL statement by default. Remove the blank line before the WHERE clause.
Update
From the documentation, an empty line terminates a SQL statement by default in SQL*Plus:
SQLT[ERMINATOR] {; | c | OFF | ON}
Sets the char used to end and execute SQL commands to c.
OFF disables the command terminator - use an empty line instead.
ON resets the terminator to the default semicolon (;).
Note that the semicolon terminator is already the default, so SET SQLTERMINATOR ON changes nothing here; the setting that actually controls whether blank lines are allowed inside a statement is SQLBLANKLINES.
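A minimal demonstration, assuming you would rather keep the blank lines in the script than delete them:
SET SQLBLANKLINES ON
SELECT 1

FROM dual;
With SQLBLANKLINES ON, the blank line no longer terminates the statement; the semicolon does.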
