I am trying to insert a pandas table into my Oracle database.
This is my script to insert the data:
INSERT INTO DATA
(NO_TRANSMITTER,ZONA,STATUS_PELABUHAN,TO_DATE(REPORTDATE,'YYYY-MM-DD HH24:MI:SS'),STATUS)
VALUES(:1,:2,:3,:4,:5)
When I run it, I get this Oracle error:
ORA-00917: missing comma
I don't know where the comma is needed. Thanks for the help.
Put TO_DATE into the VALUES clause and not into the column identifier list:
INSERT INTO DATA (
NO_TRANSMITTER,
ZONA,
STATUS_PELABUHAN,
REPORTDATE,
STATUS
) VALUES (
:1,
:2,
:3,
TO_DATE(:4,'YYYY-MM-DD HH24:MI:SS'),
:5
)
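A quick way to sanity-check the format mask, using a made-up timestamp literal:
SELECT TO_DATE('2021-03-15 08:30:00', 'YYYY-MM-DD HH24:MI:SS') FROM DUAL;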
Or, if the value in pandas is already stored as a datetime (and not a string), then simplify it to:
INSERT INTO DATA (
NO_TRANSMITTER,
ZONA,
STATUS_PELABUHAN,
REPORTDATE,
STATUS
) VALUES (
:1,
:2,
:3,
:4,
:5
)
At a given time, I stored the result of the following Oracle SQL query:
SELECT col, TO_CHAR( LOWER( STANDARD_HASH( col, 'MD5' ) ) ) AS hash_col FROM MyTable;
A week later, I executed the same query on the same data (same values for column col).
I thought the resulting hash_col column would have the same values as in the former execution, but it was not the case.
Is it possible for the Oracle STANDARD_HASH function to deliver the same result over time for identical input data?
It does if the function is called twice on the same day.
All we have about the data changing (or not) and the hash changing (or not) is your assertion.
You could create and populate a log table:
create table hash_log (
sample_time timestamp,
hashed_string varchar2(200),
hashed_string_dump varchar2(200),
hash_value varchar2(200)
);
Then on a daily basis:
insert into hash_log
select systimestamp,
source_column,
dump(source_column),
STANDARD_HASH(source_column, 'MD5')
from source_table;
Then, to spot changes:
select distinct hashed_string ||
hashed_string_dump ||
hash_value
from hash_log;
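STANDARD_HASH is deterministic: the same input bytes with the same algorithm always produce the same digest, so if the log ever shows two different hashes for the same string, the string (or its encoding) must really have changed. As a possible refinement of the query above, this flags only those strings:
select hashed_string
from hash_log
group by hashed_string
having count(distinct hash_value) > 1;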
I'm facing a problem when I try to use the LAG function on a CLOB column.
So let's assume we have a table:
create table test (
id number primary key,
not_clob varchar2(255),
this_is_clob clob
);
insert into test values (1, 'test1', to_clob('clob1'));
insert into test values (2, 'test2', to_clob('clob2'));
DECLARE
x CLOB := 'C';
BEGIN
FOR i in 1..32767
LOOP
x := x||'C';
END LOOP;
INSERT INTO test(id,not_clob,this_is_clob) values(3,'test3',x);
END;
/
commit;
Now let's do a select using the non-CLOB columns:
select id, lag(not_clob) over (order by id) from test;
It works fine as expected, but when I try the same with the CLOB column
select id, lag(this_is_clob) over (order by id) from test;
I get
ORA-00932: inconsistent datatypes: expected - got CLOB
00932. 00000 - "inconsistent datatypes: expected %s got %s"
*Cause:
*Action:
Error at Line: 1 Column: 16
Can you tell me the solution to this problem? I couldn't find anything about it.
The documentation says the argument of any analytic function can be of any datatype, but it seems an unrestricted CLOB is not supported.
However, there is a workaround:
select id, lag(dbms_lob.substr(this_is_clob, 4000, 1)) over (order by id)
from test;
This is not the whole CLOB, but 4K should be good enough in many cases.
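If the CLOBs can be longer than that, one workaround in the same spirit (a sketch, covering only the first 8000 characters; add one more term per extra 4000-character slice) is to lag fixed-size slices and concatenate them back into a CLOB:
select id,
to_clob(lag(dbms_lob.substr(this_is_clob, 4000, 1)) over (order by id))
|| lag(dbms_lob.substr(this_is_clob, 4000, 4001)) over (order by id) as prev_clob
from test;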
I'm still wondering what the proper way to overcome this problem is.
Is upgrading to 12c an option? The problem has nothing to do with CLOBs as such; it's the fact that Oracle has a hard limit of 4000 characters for strings in SQL. In 12c we have the option to use extended data types (provided we can persuade our DBAs to turn it on!).
Some features may not work properly in SQL when using CLOBs (like DISTINCT, ORDER BY, GROUP BY, etc.). It looks like LAG is also one of them, but I couldn't find that anywhere in the docs.
If the values in your CLOB column are always less than 4000 characters, you may use TO_CHAR:
select id, lag( TO_CHAR(this_is_clob)) over (order by id) from test;
OR
convert it into an equivalent self join (which may not be as efficient as LAG); here LEAD over the numeric id finds each row's successor, so joining on it pairs every row with its immediate predecessor:
SELECT a.id,
b.this_is_clob AS lagging
FROM test a
LEFT JOIN (SELECT id, this_is_clob,
LEAD(id) OVER (ORDER BY id) AS next_id
FROM test) b ON b.next_id = a.id;
I know this is an old question, but I think I found an answer that eliminates the need to restrict the CLOB length, and I wanted to share it. Using a CTE and a recursive subquery, we can replicate the LAG functionality with CLOB columns.
First, let's take a look at my "original" query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
)
SELECT tt.order_by_col,
tt.clob_col,
LAG(tt.clob_col) OVER (ORDER BY tt.order_by_col)
FROM test_table tt;
As expected, I get the following error:
ORA-00932: inconsistent datatypes: expected - got CLOB
Now, let's look at the modified query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
),
initial_pull AS
(
SELECT tt.order_by_col,
LAG(tt.order_by_col) OVER (ORDER BY tt.order_by_col) AS PREV_ROW,
tt.clob_col
FROM test_table tt
),
recursive_subquery (order_by_col, prev_row, clob_col, prev_clob_col) AS
(
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, NULL
FROM initial_pull ip
WHERE ip.prev_row IS NULL
UNION ALL
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, rs.clob_col
FROM initial_pull ip
INNER JOIN recursive_subquery rs ON ip.prev_row = rs.order_by_col
)
SELECT rs.order_by_col, rs.clob_col, rs.prev_clob_col
FROM recursive_subquery rs;
So here is how it works:
1. I create TEST_TABLE. This is really only for the example; you should already have this table somewhere in your schema.
2. I create a CTE of the data I want to pull, plus a LAG function on the primary key (or a unique column) of the table, partitioned and ordered the same way as in my original query.
3. I create a recursive subquery that uses the initial row as the root and descends row by row, joining on the lagged column, and returns both the CLOB column of the current row and the CLOB column of its parent row.
I'm trying to parse each line into columns in my control file.
I get "Field in data file exceeds maximum length".
My control file:
OPTIONS (
ERRORS = 1,
DIRECT=TRUE,
LOAD=10
)
load data
APPEND
into table table_1
fields terminated by "#x000A"
(
Column0 BOUNDFILLER,
Column1 "SUBSTR(:Column0, 1, 10)"
)
Table:
create table table_1 (
Column0 VARCHAR2(2000),
Column1 VARCHAR2(124)
);
It looks like this happens because the length of each row is more than 2000, but I checked the file and each line is less than 1000 characters.
So why do I get this error?
The error is here: Column0 BOUNDFILLER. This line is equivalent to Column0 BOUNDFILLER char(255), because char(255) is the default.
You are trying to put roughly 1000 characters into a field that only has room for 255.
The solution is Column0 BOUNDFILLER char(2000),
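Putting it together, the corrected control file (only the BOUNDFILLER line changes, everything else is as in the question):
OPTIONS (
ERRORS = 1,
DIRECT=TRUE,
LOAD=10
)
load data
APPEND
into table table_1
fields terminated by "#x000A"
(
Column0 BOUNDFILLER char(2000),
Column1 "SUBSTR(:Column0, 1, 10)"
)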
I have a PL/SQL procedure that currently gets data from an XML service and only does inserts.
xml_data := xmltype(GET_XML_F('http://test.example.com/mywebservice'));
--GET_XML_F gets the XML text from the site
INSERT INTO TEST_READINGS (TEST, READING_DATE, CREATE_DATE, LOCATION_ID)
SELECT round(avg(readings.reading_val), 2),
to_date(substr(readings.reading_dt, 1, 10),'YYYY-MM-DD'), SYSDATE,
p_location_id
FROM XMLTable(
XMLNamespaces('http://www.example.com' as "ns1"),
'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '@dateTime') readings
GROUP BY substr(readings.reading_dt,1,10), p_location_id;
I would like to be able to insert or update the data using a merge statement in the event that it needs to be re-run on the same day to find added records. I'm doing this in other procedures using the code below.
MERGE INTO TEST_READINGS USING DUAL
ON (LOCATION_ID = p_location_id AND READING_DATE = p_date)
WHEN NOT MATCHED THEN INSERT
(TEST_reading_id, site_id, test, reading_date, create_date)
VALUES (TEST_readings_seq.nextval, p_location_id,
p_value, p_date, SYSDATE)
WHEN MATCHED THEN UPDATE
SET TEST = p_value;
The fact that I'm pulling it from an XMLTable is throwing me off. Is there a way to get the data from the XMLTable while still using the (much cleaner) merge syntax? I would just delete the data beforehand and re-import, or use lots of conditional statements, but I would like to avoid that if possible.
Can't you simply put your SELECT into the MERGE statement?
I believe it should look more or less like this:
MERGE INTO TEST_READINGS USING (
SELECT
ROUND(AVG(readings.reading_val), 2) AS test
,TO_DATE(SUBSTR(readings.reading_dt, 1, 10),'YYYY-MM-DD') AS reading_date
,SYSDATE AS create_date
,p_location_id AS location_id
FROM
XMLTable(
XMLNamespaces('http://www.example.com' as "ns1")
,'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS
reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '@dateTime'
) readings
GROUP BY
SUBSTR(readings.reading_dt,1,10)
,p_location_id
) readings ON (
TEST_READINGS.LOCATION_ID = readings.location_id
AND TEST_READINGS.READING_DATE = readings.reading_date
)
WHEN NOT MATCHED THEN
...
WHEN MATCHED THEN
...
;
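For completeness, a sketch of what could slot in place of the two elided branches, reusing the column names and sequence from the MERGE in the question (adjust to your actual schema):
WHEN NOT MATCHED THEN INSERT
(test_reading_id, site_id, test, reading_date, create_date)
VALUES (test_readings_seq.nextval, readings.location_id,
readings.test, readings.reading_date, readings.create_date)
WHEN MATCHED THEN UPDATE
SET test = readings.test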
I have an insert statement that takes all its values from local variables or literal constants:
INSERT INTO SampleTestLimits
(AuditNumber
,LimitNumber
,ComponentRow
,ComponentColumn
--etc
)
SELECT 1
,varLimitNumber
,varComponentRow
,varComponentColumn
--etc
;
but the problem is that I get an error "Missing FROM".
I guessed that this is because there is no table associated with the Select, and I tried ending the query with
FROM DUAL;
but that doesn't work (possibly because DUAL is a single-row, single-column pseudo-table, or so I understand).
I can do this quite easily in SQL Server, but how can I achieve the same in Oracle?
TIA.
If you want to insert data from local variables, you should use the VALUES clause:
INSERT INTO SampleTestLimits
(AuditNumber
,LimitNumber
,ComponentRow
,ComponentColumn
--etc
)
VALUES
(1
,varLimitNumber
,varComponentRow
,varComponentColumn
--etc
);
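As an aside, the FROM DUAL variant mentioned in the question is normally valid in Oracle too (an INSERT ... SELECT needs a FROM clause, and DUAL satisfies it), so the earlier attempt probably failed for a different reason. A minimal sketch with the same variables:
INSERT INTO SampleTestLimits
(AuditNumber, LimitNumber, ComponentRow, ComponentColumn)
SELECT 1, varLimitNumber, varComponentRow, varComponentColumn
FROM DUAL;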