MySQL has an ON UPDATE feature, e.g.:
CREATE TABLE t1 (
ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
dt DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
I need similar behavior in Snowflake, where a column, say "lastupdated", is updated every time the row is updated.
Is this possible in snowflake?
Check out the "Snowflake Stream" option. You can create a stream on top of your table, and the stream will have a couple of metadata columns that give you exactly what you're looking for!
It's not a very well-explored feature, unfortunately!
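As a sketch of that idea (the table and stream names here are made up), a stream records each row-level change along with METADATA$ columns describing it:

```sql
-- Assumed table name "mytable"; the stream name is arbitrary.
create or replace stream mystream on table mytable;

-- After some DML runs against mytable, the stream shows the changed rows
-- plus metadata columns such as METADATA$ACTION and METADATA$ISUPDATE.
select * from mystream;
```

Reading the stream inside a DML statement advances its offset, so consume it in the same transaction that processes the changes.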
In other database implementations this is achieved through triggers.
Snowflake doesn't support triggers.
I wonder if you could create a stored procedure in Snowflake to accomplish what you are trying to do.
If you are trying to update a row with a timestamp, you could also just set the field in your COPY or REPLACE statement.
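As a hedged sketch of the stored-procedure idea (the table, column, and procedure names are all assumptions), a JavaScript procedure could wrap the update and stamp "lastupdated" itself:

```sql
-- Hypothetical procedure: callers update through it instead of issuing UPDATE directly.
create or replace procedure update_with_timestamp(id_val float, new_val varchar)
returns string
language javascript
as
$$
  // Argument names are uppercased inside Snowflake JavaScript procedures.
  snowflake.execute({
    sqlText: "update my_table set val = ?, lastupdated = current_timestamp() where id = ?",
    binds: [NEW_VAL, ID_VAL]
  });
  return 'updated';
$$;
```

This only helps if all writers go through the procedure; a direct UPDATE still bypasses the timestamp.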
Similarly done this way: https://community.snowflake.com/s/question/0D50Z00006uSiEKSA0/syntax-for-adding-a-column-with-currenttimestamp-default-constraint
Example 1:
> UPDATE <target_table>
SET Lastupdate = current_timestamp()
[ FROM <additional_tables> ]
[ WHERE <condition> ]
Example 2:
>create or replace table x(i int, t timestamp default current_timestamp());
>insert into x(i) values(1);
borrowed from this link
As noted above, triggers are not supported - you'll have to do this explicitly in SQL. Note that your process should also handle data in some kind of batches; if you try to do anything a single record at a time in Snowflake - at least at any real volume - you're going to have a bad time.
That's a pretty nice feature request. I've been using MS SQL Server for years... any "updated" columns were either done in the code or, as already indicated, using triggers.
I checked the snowflake docs and found this reference, which only applies to INSERTs and CTAS:
DEFAULT ... or AUTOINCREMENT ...
Specifies whether a default value is automatically inserted in the column if a value is not explicitly specified via an **INSERT or CREATE TABLE AS SELECT** statement:
https://docs.snowflake.net/manuals/sql-reference/sql/create-table.html
you can do something like this:
CREATE or REPLACE TABLE t1 (
ts TIMESTAMP_LTZ(9) as CURRENT_TIMESTAMP,
dt DATE as CURRENT_DATE,
NAME VARCHAR(200)
);
insert into t1 (NAME) VALUES ('Jerry Smith');
insert into t1 (NAME) VALUES ('Gazorpazorp Smith');
select * from t1;
Just be aware that defining the columns with AS means the values are recomputed every time you select from the table, so they won't record when a row was inserted or updated.
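For contrast, a sketch using DEFAULT instead of AS (the table and column names here are assumptions): the default is evaluated once at insert time, and the stored value does not change on later reads:

```sql
create or replace table t2 (
  name       varchar(200),
  created_at timestamp_ltz default current_timestamp()  -- evaluated at insert, then stored
);

insert into t2 (name) values ('Jerry Smith');

-- created_at keeps the insert-time value on every subsequent select.
select name, created_at from t2;
```

Note that a DEFAULT only fires on insert; it still won't refresh the column on UPDATE, so an explicit SET lastupdated = current_timestamp() remains necessary for updates.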
You can use a combination of streams, external or internal stages, and eventing to record DML changes. This combination is actually quite elegant, because your simulated triggers can trigger external events.
1) Create a stream
create stream supplierStream on table SupplierTable before (statement => '<your statement GUID>');
2) Configure your Event Grid topic if you are using Azure (see the MS Event Grid documentation). Let's say your topic name is "SupplierTopic".
3) create your notification integration
CREATE NOTIFICATION INTEGRATION supplierIntegration
  ENABLED = true
  TYPE = QUEUE
  NOTIFICATION_PROVIDER = AZURE_STORAGE_QUEUE
  AZURE_STORAGE_QUEUE_PRIMARY_URI = '<your storage queue URI>'
  AZURE_TENANT_ID = '<your tenant ID>';
4) create your stage
create or replace stage supplierStage
  url = 'azure://<your account/container path>'
  storage_integration = supplierIntegration;
5) consume the Event Grid event in a server or serverless system.
Have you tried MERGE combined with UPDATE?
https://docs.snowflake.com/en/sql-reference/sql/merge.html
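A hedged sketch of that idea (the target, staging, and column names are assumptions): MERGE lets you stamp "lastupdated" in the WHEN MATCHED branch:

```sql
merge into target t
using staging s
  on t.id = s.id
when matched then
  update set t.val         = s.val,
             t.lastupdated = current_timestamp()
when not matched then
  insert (id, val, lastupdated)
  values (s.id, s.val, current_timestamp());
```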
Related
I am new to DCN (Database Change Notification). Can I use it to detect updates to a column in my table, along with inserts into that table?
I am referring to this
Yes, you can - Change Notifications are made for exactly that. You need to register a CN listener with a query to watch (it can be a whole table, select * from your_table, or part of it, select column1 from your_table where column2 = 'xxx') and a callback function. Be aware that this is an asynchronous mechanism: changes are not detected immediately, but after some delay.
The link in your documentation shows how to implement it using JDBC. Read it if you want to use Oracle PL/SQL for that.
I want the date and time to be recorded automatically in a column named 'CREATION_DATE' every time a row is inserted into an Oracle table.
Setting a default value of SYSDATE is more efficient than a trigger. As helpc mentioned, a default value can be overridden if NULL is explicitly provided in the INSERT. If you don't intend to pass the date/time through the application at all, you can define the column as NOT NULL with a default of SYSDATE.
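A minimal sketch of that approach (the table and column names are assumptions):

```sql
create table my_table (
  id            number primary key,
  creation_date date default sysdate not null
);

-- creation_date is filled in automatically when it is omitted from the insert.
insert into my_table (id) values (1);
```

The NOT NULL constraint ensures an application cannot blank the column by inserting an explicit NULL; such an insert simply fails.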
A trigger will do what you want, but note that it must be a BEFORE INSERT trigger: the :NEW values can only be modified before the row is written (an AFTER INSERT trigger would raise ORA-04084). I think something like this is what you are looking for:
CREATE OR REPLACE TRIGGER date_trigger
BEFORE INSERT ON your_table
FOR EACH ROW
BEGIN
  :NEW.CREATION_DATE := SYSDATE;
END;
/
Depending on your needs I usually like to add both a create_date and an update_date column to pick up timestamps for changes that may occur later.
I am using mybatis to perform a massive batch insert on an oracle DB.
My process is very simple: I am taking records from a list of files and inserting them into a specific table after performing some checks on the data.
-Each file contains an average of 180,000 records, and I can have more than one file.
-Some records can be present in more than one file.
-A record is identical to another one if EVERY column matches; in other words, I cannot simply perform a check on a specific field. I have defined a constraint in my DB which makes sure this condition is satisfied.
To put it simply I want to just ignore the constraint exception Oracle will give to me in case that constraint is violated.
Record is not present?-->insert
Record is already present?--> go ahead
Is this possible with MyBatis? Or can I accomplish something at the DB level?
I have control of both the application server and the DB, so please tell me the most efficient way to accomplish this task (even though I'd like to avoid being too DB-dependent...).
Of course, I'd like to avoid performing a SELECT * before each insertion - given the number of records I am dealing with, it would ruin my application's performance.
Use the IGNORE_ROW_ON_DUPKEY_INDEX hint:
insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(table_name index_name) */
into table_name
select * ...
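If you'd rather not hard-code the index name, the hint also accepts the unique column list directly (the table and column names below are placeholders):

```sql
insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(t (col1, col2)) */
into t (col1, col2, col3)
select col1, col2, col3
from staging_t;
```

Note that the hint requires a unique index on the named columns, and, unlike ordinary hints, it raises an error if it is written incorrectly rather than being silently ignored.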
I'm not sure about JDBC, but at least in OCI it is possible. With batch operations you pass vectors as bind variables and you also get back vector(s) of returned IDs and also a vector of error codes.
You can also use MERGE on the database server side together with custom collection types. Something like:
merge into t
using ( select * from TABLE(:var) v)
on ( v.id = t.id )
when not matched then insert ...
Here :var is a bind variable of the SQL type TABLE OF <recordname>. The keyword TABLE is a construct used to cast the bind variable into a table.
Another option is to use the SQL error logging clause:
DBMS_ERRLOG.create_error_log (dml_table_name => 't');
insert into t(...) values(...) log errors reject limit unlimited;
Then, after the load, you will have to truncate the error logging table err$_t.
Another option would be to use external tables.
It looks like any of these solutions is quite a lot of work compared to using sqlldr.
Ignore errors with an error table:
insert
into table_name
select *
from selected_table
LOG ERRORS INTO SANJI.ERROR_LOG('some comment' )
REJECT LIMIT UNLIMITED;
and the error table schema is:
CREATE GLOBAL TEMPORARY TABLE SANJI.ERROR_LOG (
ora_err_number$ number,
ora_err_mesg$ varchar2(2000),
ora_err_rowid$ rowid,
ora_err_optyp$ varchar2(2),
ora_err_tag$ varchar2(2000),
n1 varchar2(128)
)
ON COMMIT PRESERVE ROWS;
Friend, I have a question about a cascading trigger.
I have 2 tables: table data, which has 3 attributes (id_data, sum, and id_tool), and table tool, which has 3 attributes (id_tool, name, sum_total). Tables data and tool are joined on id_tool.
I want to create a trigger that maintains sum_total: when I insert into table data, sum_total in table tool where tool.id_tool = data.id_tool should be updated too.
I created this trigger, but I get error ORA-04090.
create or replace trigger aft_ins_tool
after insert on data
for each row
declare
v_stok number;
v_jum number;
begin
select sum into v_jum
from data
where id_data= :new.id_data;
select sum_total into v_stok
from tool
where id_tool=
(select id_tool
from data
where id_data= :new.id_data);
if inserting then
v_stok := v_stok + v_jum;
update tool
set sum_total=v_stok
where id_tool=
(select id_tool
from data
where id_data= :new.id_data);
end if;
end;
/
Please give me your opinion.
Thanks.
The ora-04090 indicates that you already have an AFTER INSERT ... FOR EACH ROW trigger on that table. Oracle doesn't like that, because the order in which the triggers fire is unpredictable, which may lead to unpredictable results, and Oracle really doesn't like those.
So, your first step is to merge the two sets of code into a single trigger. Then the real fun begins.
Presumably there is only one row in data matching the current value of id_data (if not, your data model is really messed up and there's no hope for your situation). That means the current row already gives you access to the values of :new.sum and :new.id_tool, so you don't need those queries on the data table: removing those selects removes the possibility of "mutating table" errors.
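Putting that together, a sketch of the simplified trigger using the :new values directly:

```sql
create or replace trigger aft_ins_tool
after insert on data
for each row
begin
  -- No select on "data" needed: the inserted row's values are already in :new.
  update tool
     set sum_total = sum_total + :new.sum
   where id_tool = :new.id_tool;
end;
/
```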
As a general observation, maintaining aggregate or summary tables like this is generally a bad idea. Usually it is better just to query the information when it is needed. If you really have huge volumes of data then you should use a materialized view to maintain the summary, rather than hand-rolling something.
I have a self referencing table in Oracle 9i, and a view that gets data from it:
CREATE OR REPLACE VIEW config AS
SELECT c.node_id,
c.parent_node_id,
c.config_key,
c.config_value,
(SELECT c2.config_key
FROM vera.config_tab c2
WHERE c2.node_id = c.parent_node_id) AS parent_config_key,
sys_connect_by_path(config_key, '.') path,
sys_connect_by_path(config_key, '->') php_notation
FROM config_tab c
CONNECT BY c.parent_node_id = PRIOR c.node_id
START WITH c.parent_node_id IS NULL
ORDER BY LEVEL DESC
The table stores configuration for a PHP application. Now I need to use the same config in an Oracle view.
I would like to select some values from the view by path, but unfortunately this takes 0.15 s, which is an unacceptable cost.
SELECT * FROM some_table
WHERE some_column IN (
SELECT config_value FROM config_tab WHERE path = 'a.path.to.config'
)
At first I thought of a function-based index on sys_connect_by_path, but that is impossible, as it also requires the CONNECT BY clause.
Any suggestions how can I emulate an index on the path column from the 'config' view?
If your data doesn't change frequently in the config_tab, you could use a materialized view with the same query as your view. You could then index the path column of your materialized view.
CREATE MATERIALIZED VIEW config
REFRESH COMPLETE ON DEMAND
AS <your_query>;
CREATE INDEX ix_config_path ON config (path);
Since this is a complex query, you would need to do a full refresh of your materialized view every time the base table is updated so that the data in the MV doesn't become stale.
Update
Your column path will be defined as a VARCHAR2(4000). You could limit the size of this column in order to index it: in your query, replace sys_connect_by_path(...) with SUBSTR(sys_connect_by_path(...), 1, 1000), for example.
You won't be able to use REFRESH ON COMMIT on a complex MV, and a simple trigger won't work either. You will have to modify the code that updates your base table to include a refresh somehow; I don't know if that is practical in your environment.
You could also use a trigger that submits a job that will refresh the MV. The job will execute once you commit (this is a feature of dbms_job). This is more complex since you will have to check that you only trigger the job once per transaction (using a package variable for example). Again, this is only practical if you don't update the base table frequently.
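A sketch of that trigger-plus-job pattern (the MV is assumed to be called CONFIG, and the once-per-transaction guard is omitted here for brevity):

```sql
create or replace trigger config_tab_aiud
after insert or update or delete on config_tab
declare
  l_job binary_integer;
begin
  -- dbms_job runs the submitted job only after this transaction commits.
  dbms_job.submit(l_job, 'dbms_mview.refresh(''CONFIG'');');
end;
/
```

Without the package-variable guard described above, a transaction that issues several statements against config_tab would submit several redundant refresh jobs.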