SQL does not run correctly in Redshift via JDBC

I want to run SQL like:
CREATE TEMPORARY TABLE tmptable AS SELECT * FROM redshift_table WHERE date > #{date};
I can run this SQL from the command line against Redshift, but when I run it from my program it doesn't work correctly. When I change CREATE TEMPORARY TABLE to CREATE TABLE, it works.
I am using MyBatis as the O/R mapper, and the driver is:
org.postgresql.Driver
org.postgresql:postgresql:9.3-1102-jdbc41
What's wrong?

I am assuming #{date} resolves to an actual date in your actual query.
Having said that, there is no reason this command shouldn't work; it matches the syntax listed here:
http://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_AS.html
Have you tried posting it on the AWS Redshift forums? They are generally quite responsive. Please update this thread too if you find something; this is quite an interesting issue, thanks!
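For reference, here is a minimal form of the statement with the #{date} placeholder replaced by a date literal (the value is purely illustrative), matching the CREATE TABLE AS syntax from the linked docs:

-- minimal sketch: the original statement with a literal date instead of #{date}
CREATE TEMPORARY TABLE tmptable AS
SELECT * FROM redshift_table WHERE date > '2015-01-01';

One thing worth checking: a temporary table is visible only to the session that created it, so if the CREATE and the later reads run on different pooled connections, the table will appear to be missing even though the statement succeeded.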

Related

Oracle: How to efficiently copy a table from one schema to another on a different database and server

I have a large table (3.5MM records) that I need to copy from one schema/database to another schema/database. I tried TOAD's "copy data from table" feature, but I got errors and it never fully copied, in part because the connection keeps getting dropped. I'm trying the object copy feature of SQL Developer, and after 11 minutes it's still copying. I tried the SQL*Plus COPY statement but got a syntax error (help needed). I'm still open to extracting the data as INSERT statements that I can just run directly.
1) SQL*Plus COPY, as follows:
copy from report_new/mypassword@(DESCRIPTION= (ADDRESS=(PROTOCOL=TCP)(HOST=10.15.15.20)(PORT=1541))(CONNECT_DATA=(SERVICE_NAME=STAGE))) to report/mypassword@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.18.22.25)(PORT=1550))(CONNECT_DATA=(SERVICE_NAME=DEV))) CREATE USER_USAGE_COUNT USING SELECT * FROM _USER_USAGE_COUNT
The above gives me:
SQL> start copy_user_count_table.sql
SP2-0758: FROM clause missing username
2) I tried TOAD
The TOAD "Copy data to another schema" feature fails due to the connection getting dropped. I set the commit threshold first to 5000, then to 500.
3) I'm trying SQL Developer's copy function, but I don't think it's going to finish anytime soon, and it gives me no real progress indication. For all I know, it could be hung and just doesn't want to tell me.
4) I thought about creating a database link, but I don't have the authority to create one, and it's in a corporate environment where the DBAs don't respond in under 3 days.
Should I write my own Java code to just do this one record at a time? I shouldn't have to, but somehow it's easier to send a man to the moon than to copy data from one schema to another.
You can use the copy command of SQLcl, which ships with newer SQL Developer releases. SQLcl is found in the sqldeveloper\bin directory and is named sql.exe (Windows) or sql (Unix/Linux/Mac). The steps to follow are:
Connect to the destination database with SQLcl:
sql username/password@destinationdb
Use the copy command
copy from username@sourcedatabase create newtablename using select * from sourcetable;
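Put together with the connection details from the question, the session might look like this (hosts, ports and service names are copied from the question; the EZConnect-style connect strings are an assumption, so use TNS aliases if that's what your environment provides):

sql report/mypassword@10.18.22.25:1550/DEV
copy from report_new/mypassword@10.15.15.20:1541/STAGE create user_usage_count using select * from _user_usage_count;

Note that the whole copy command must be entered on a single line (or continued with trailing hyphens), which may be why the SQL*Plus attempt above failed with SP2-0758.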

Read Oracle Cluster name from Oracle RAC using SQL query

I'd like to find out my RAC cluster name using a SQL query. I've found that it can be retrieved with the Oracle tool cemutlo -n, or just ocrdump (see http://www.br8dba.com/tag/how-to-display-oracle-cluster-name/). However, that's not possible in this case, because on the target environment I can only execute SQL queries and I don't have access to the DBMS installation directory.
I've found out (here https://community.oracle.com/thread/2510788?tstart=0) that it can be done using some unusual queries:
SELECT a.ID, a.CLUSTER_ID FROM TABLE(DBMS_DATA_MINING.GET_MODEL_DETAILS_OC('CLUS_OC_1_15',NULL,NULL,1,0,0)) a
select * from table(dbms_data_mining.get_model_details_km('CLUS_KM_1_25'))
However, they don't work in my environment, and I'm unable to create a new model.
Most preferably, I'd just read this from some kind of v$/gv$ view, but I can't find it there. I guess that's because the cluster sits far below the DBMS.
Finally, I found out that there is no way to do that :(.

No records when using SELECT query on DBLINK in Oracle

I'm trying to move data between two Oracle databases, say SOURCE_A and DEST_B. I created a dblink (LINK_A) on DEST_B pointing to SOURCE_A using TOAD, to copy data from the tables. The dblink creation went fine, but when I ran a select statement like the one below, I saw no data, just the column names.
SELECT * FROM TABLE_A@LINK_A;
Could you please help me understand what I am doing wrong or missing here? I tried running a DESC on TABLE_A through the link and it worked fine. I'm not sure why it's not pulling any data from the SOURCE_A database.
Any help is greatly appreciated. Thanks.
Alright, after much trial and error, I managed to find a solution that works for me in this SO question.
I used the technique provided by Jeremy Scoggins and it worked like a charm. I was able to move the data using TOAD, and it's perfect. Thanks to all of you for your time and support.
Appreciate it.

SAS 9.2 running Oracle query indefinitely

I'm running a pretty large query against an Oracle database using SAS for Windows 9.2. The query is pretty large: I wrote a sub-query in a WITH clause and used it 4 times. It runs fine in SQL*Plus and SQL Developer, but when I run it using SAS, the program hangs after 20 minutes and I can't even see the log window. I have never worked with SAS and I'm not sure how to proceed, but I tried the following:
I created a SAS code file and ran it from a Windows batch file, hoping to get the log written to the Windows file system, but even this runs indefinitely and I don't see anything written to the log file.
Can someone direct me here? How can I use the ALTLOG option to get the log file written to the Windows file system so that I can see the exact error message? By the way, the DBAs have mentioned that the query runs fine and rows are returned on the server side, but for some reason the SAS program is not able to show this data. I get about 45,000 records from the query.
Thanks
I'll break it into two points:
1) Running an existing Oracle SQL query in SAS without really using SAS:
The best way is to embed your Oracle SQL code in so-called PROC SQL explicit pass-through:
proc sql;
connect to oracle as db1 (user=user1 pw=pasw1 path=DB1);
create table test_table as
select *
from connection to db1
( /* here we're in oracle */
select * from test.table1 where rownum <20
)
;
disconnect from db1;
quit;
(borrowed from my answer to another question, Limiting results in PROC SQL)
The point is not to try to translate it into SAS SQL (I don't know whether you tried that or not).
Also make sure you're creating a SAS table from the query result (as in the example), not writing it to the SAS OUTPUT window.
2) Regarding the log: the log for an action is generally written once the action completes, so if the query really is running for a long time, you won't see any intermediate log messages.
Also, log buffering is the default setting for batch jobs, so log messages are written only after the buffer is full.
To get log messages written immediately to the log file, set the LOGPARM option:
-LOGPARM="WRITE=IMMEDIATE"
(the opposite value is BUFFERED).
To find out which config file(s) are used, run the following in your SAS session:
proc options option=config;run;
Then enter the option above on a separate line in the config file.
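For example, a batch invocation that writes the log immediately to a Windows file might look like this (SYSIN, LOG and LOGPARM are standard SAS system options; the paths are purely illustrative):

sas -sysin c:\jobs\big_query.sas -log c:\logs\big_query.log -logparm "write=immediate"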

How to find out when an Oracle table was updated the last time

Can I find out when the last INSERT, UPDATE or DELETE statement was performed on a table in an Oracle database and if so, how?
A little background: The Oracle version is 10g. I have a batch application that runs regularly, reads data from a single Oracle table and writes it into a file. I would like to skip this if the data hasn't changed since the last time the job ran.
The application is written in C++ and communicates with Oracle via OCI. It logs into Oracle with a "normal" user, so I can't use any special admin stuff.
Edit: Okay, "Special Admin Stuff" wasn't exactly a good description. What I mean is: I can't do anything besides SELECTing from tables and calling stored procedures. Changing anything about the database itself (like adding triggers) is sadly not an option if I want to get it done before 2010.
I'm really late to this party but here's how I did it:
SELECT SCN_TO_TIMESTAMP(MAX(ora_rowscn)) from myTable;
It's close enough for my purposes.
Since you are on 10g, you could potentially use the ORA_ROWSCN pseudocolumn. That gives you an upper bound of the last SCN (system change number) that caused a change in the row. Since this is an increasing sequence, you could store off the maximum ORA_ROWSCN that you've seen and then look only for data with an SCN greater than that.
By default, ORA_ROWSCN is actually maintained at the block level, so a change to any row in a block will change the ORA_ROWSCN for all rows in the block. This is probably quite sufficient if the intention is to minimize the number of unchanged rows you process multiple times, assuming "normal" data access patterns. You can rebuild the table with ROWDEPENDENCIES, which causes ORA_ROWSCN to be tracked at the row level and gives you more granular information, but that requires a one-time effort to rebuild the table.
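A minimal sketch of that incremental pattern (the table name and bind variable are illustrative):

-- at the end of each run, remember the high-water mark
SELECT MAX(ora_rowscn) FROM mytable;
-- on the next run, pick up only rows whose block (or row, with ROWDEPENDENCIES)
-- changed after the stored SCN
SELECT * FROM mytable WHERE ora_rowscn > :last_seen_scn;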
Another option would be to configure something like Change Data Capture (CDC) and to make your OCI application a subscriber to changes to the table, but that also requires a one-time effort to configure CDC.
Ask your DBA about auditing. He can start an audit with a simple command like:
AUDIT INSERT ON user.table
Then you can query the table USER_AUDIT_OBJECT to determine if there has been an insert on your table since the last export.
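A hedged sketch of that check, assuming the audit above is active and the audit trail is written to the database (the table name and bind variable are illustrative):

SELECT COUNT(*)
FROM user_audit_object
WHERE obj_name = 'MY_TABLE'
  AND action_name = 'INSERT'
  AND timestamp > :last_export_time;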
Google "Oracle auditing" for more info...
SELECT * FROM all_tab_modifications;
Could you run a checksum of some sort on the result and store it locally? Then, when your application queries the database, it can compare checksums and determine whether it needs to import the data.
It looks like you may be able to use the ORA_HASH function to accomplish this.
Update: Another good resource: 10g’s ORA_HASH function to determine if two Oracle tables’ data are equal
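For instance, a coarse table-level fingerprint could be computed along these lines (the column list is illustrative; any order-independent aggregate over per-row hashes would do):

-- sketch: a change to any hashed column changes the fingerprint
SELECT SUM(ORA_HASH(col1 || '|' || col2)) AS table_fingerprint
FROM mytable;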
Oracle can watch tables for changes and, when a change occurs, execute a callback function in PL/SQL or OCI. The callback receives an object that is a collection of the tables which changed, each with a collection of the rowids which changed and the type of action (insert, update, delete).
So you don't even go to the table; you sit and wait to be called. You'll only go if there are changes to write.
It's called Database Change Notification. It's much simpler than CDC, as Justin mentioned, but both require some fancy admin stuff. The good part is that neither of these requires changes to the application.
The caveat is that CDC is fine for high-volume tables; DCN is not.
If auditing is enabled on the server, just use
SELECT *
FROM ALL_TAB_MODIFICATIONS
WHERE TABLE_NAME IN ()
You would need to add a trigger on insert, update, and delete that sets a value in another table to SYSDATE.
When the application runs, it would read that value and save it somewhere, so that the next time it runs it has a reference to compare against.
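A minimal sketch of that approach (all object names are illustrative):

-- one-row bookkeeping table holding the time of the last change
CREATE TABLE mytable_last_change (changed_at DATE);
INSERT INTO mytable_last_change VALUES (SYSDATE);

-- statement-level trigger that touches the bookkeeping row on every DML
CREATE OR REPLACE TRIGGER trg_mytable_touch
AFTER INSERT OR UPDATE OR DELETE ON mytable
BEGIN
  UPDATE mytable_last_change SET changed_at = SYSDATE;
END;
/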
Would you consider that "Special Admin Stuff"?
It would be better to describe what you're actually doing so you get clearer answers.
How long does the batch process take to write the file? It may be easiest to let it go ahead and then compare the file against a copy of the file from the previous run to see if they are identical.
If anyone is still looking for an answer, they can use the Oracle Database Change Notification feature, introduced with Oracle 10g. It requires the CHANGE NOTIFICATION system privilege. You can register listeners to have notifications triggered back to the application.
Please use the statement below:
select * from all_objects ao where ao.OBJECT_TYPE = 'TABLE' and ao.OWNER = 'YOUR_SCHEMA_NAME'
