I have a database schema which is identical in files 1.sqlitedb through n.sqlitedb. I use a view to 'merge' all of the databases. My question is: when I insert into the view, into which database does the data get inserted? Is there any way to control which one gets the data? The way I need to split the data depends on the data itself: essentially, I use the first letter of a field to determine the file it gets inserted into. Any help would be appreciated. Thanks!
Writing to views is NOT supported in SQLite the way it is in other databases.
http://www.sqlite.org/omitted.html
In order to achieve similar functionality, one must create triggers to do the necessary work.
You need to implement an INSTEAD OF trigger on the view (VIEW_NAME). When an INSERT/UPDATE happens on the view, you can INSERT/UPDATE the underlying object (TABLE_NAME) in the trigger body. Note that in SQLite the incoming row is referenced as NEW.column, not :new.column as in Oracle.
CREATE TRIGGER trigger_name INSTEAD OF INSERT ON VIEW_NAME
BEGIN
    INSERT INTO TABLE_NAME (col1, col2) VALUES (NEW.col1, NEW.col2);
END;
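Since your split depends on the data itself, a WHEN clause on the trigger can route each row. Here is a hedged sketch (table and column names are illustrative, and it assumes both target tables live in the same database file, because SQLite trigger bodies cannot use database-qualified table names):

-- merged_view is assumed to be a UNION ALL over items_am and items_nz
CREATE TRIGGER route_am INSTEAD OF INSERT ON merged_view
WHEN upper(substr(NEW.name, 1, 1)) BETWEEN 'A' AND 'M'
BEGIN
    INSERT INTO items_am (name, value) VALUES (NEW.name, NEW.value);
END;

CREATE TRIGGER route_nz INSTEAD OF INSERT ON merged_view
WHEN upper(substr(NEW.name, 1, 1)) BETWEEN 'N' AND 'Z'
BEGIN
    INSERT INTO items_nz (name, value) VALUES (NEW.name, NEW.value);
END;

For separate .sqlitedb files the routing has to happen in the application instead (see the ATTACH answer below), since the trigger body cannot reference db1.table or db2.table.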
I'm not sure I understand your question, but have you looked into the ATTACH DATABASE command? It allows you to connect separate database files to a single database connection. You can control INSERTs into a specific database by prefixing the table with the database name (INSERT INTO db1.Table).
http://www.sqlite.org/lang_attach.html
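For instance, a minimal sketch of this approach, assuming each file contains a table named items (names are illustrative):

ATTACH DATABASE '1.sqlitedb' AS db1;
ATTACH DATABASE '2.sqlitedb' AS db2;

-- the application picks the target database based on the first letter of the field
INSERT INTO db1.items (name, value) VALUES ('apple', 1);
INSERT INTO db2.items (name, value) VALUES ('zebra', 2);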
I have an oracle trigger that needs to copy values from the updated table to another table.
The problem is that the columns aren't known when the trigger is created. Part of this system allows the table schema to be updated by the application. (don't ask).
Essentially what I want to do is pivot the table to another table.
I have a stored procedure that will do the pivot, but I can't call it as part of the trigger because it does a select on the table being updated. Causing a "mutating" error.
What would be ideal would be to create a dynamic script that reads all the column names from user_tab_cols for the updated table and reads the values from the :NEW object.
But of course...I can't :)
:NEW doesn't exist at the point the dynamic script is executed. So something like the following would fail:
EXECUTE IMMEDIATE 'insert into pivotTable values (:NEW.' || variableWithColumnName || ')';
So, I'm stuck.
I can't read from the table that was updated, and I can't read the value that was updated from the :NEW object.
Is there anyway to accomplish this other than rebuilding the trigger each time the schema is changed?
No. You'll need to rebuild the trigger whenever the table changes.
If you want to get really involved, you could write a procedure that dynamically generates the DDL to CREATE OR REPLACE the trigger by reading user_tab_columns. You could then create a DDL trigger that fires when the table is altered and submits a job via dbms_job to call that procedure and recreate the trigger. That works, but it's a rather large number of moving parts, which means it can fail in all sorts of subtle and spectacular ways, particularly if the application that is making schema changes on the fly decides to add columns in the middle of the day.
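A loose sketch of the DDL-generation piece, where the procedure name, trigger name, the pivotTable target, and the simple column-copy logic are all illustrative (the real pivot logic would replace the column list):

CREATE OR REPLACE PROCEDURE rebuild_copy_trigger (p_table IN VARCHAR2) IS
    l_cols VARCHAR2(4000);
    l_vals VARCHAR2(4000);
BEGIN
    -- build the column list and matching :NEW value list from the data dictionary
    FOR c IN (SELECT column_name
              FROM   user_tab_columns
              WHERE  table_name = UPPER(p_table)
              ORDER  BY column_id) LOOP
        l_cols := l_cols || c.column_name || ',';
        l_vals := l_vals || ':NEW.' || c.column_name || ',';
    END LOOP;
    l_cols := RTRIM(l_cols, ',');
    l_vals := RTRIM(l_vals, ',');

    -- regenerate the trigger from the current table definition
    EXECUTE IMMEDIATE
        'CREATE OR REPLACE TRIGGER ' || p_table || '_copy_trg' ||
        ' AFTER INSERT OR UPDATE ON ' || p_table ||
        ' FOR EACH ROW BEGIN' ||
        ' INSERT INTO pivotTable (' || l_cols || ')' ||
        ' VALUES (' || l_vals || '); END;';
END;
/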
I need some help in PL/SQL.
My problem is the following:
There is a table called temp_table, and I need to recreate it without a DROP/TRUNCATE, because the table's data changes all the time.
I know it's weird, but this is necessary for my daily job.
The script works like this:
The script imports text into the table, and the table is given. It uses a dblink to connect to the database. It works, but every time I have to use DROP. What I need (if it's possible) is to recreate an existing table without DROP/TRUNCATE.
Can someone help me?
Thanks a lot.
Sorry for the lack of SQL code, but I don't think it's necessary.
I think the concept you want is the external table. With external tables the data resides in OS files, such as CSVs, which lets you swap data sets without dropping the table.
Find out more.
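A minimal sketch of an external table over a CSV, assuming a directory object (here called data_dir) already points at the folder holding the file, and with all names illustrative:

CREATE TABLE temp_table_ext (
    id   NUMBER,
    name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY data_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
    )
    LOCATION ('temp_data.csv')
);

Replacing temp_data.csv on disk replaces the data the table returns; no DROP or TRUNCATE is involved.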
I take it you want to drop the table because you want to reload it, but you also want there to be as close to constant up-time as possible?
I would create two temp tables. You already have one called:
temp_table
Create another called:
temp_table_new
Load your new data into temp_table_new, then run a rename on it like so:
RENAME temp_table TO temp_table_old;
RENAME temp_table_new TO temp_table;
Then
DROP TABLE temp_table_old;
This will be super fast, have very little downtime, and allow you to have the functionality you've described.
I want to create a table (let's say table_copy) which has the same columns as another table (let's call it table_original) in an Oracle database, so the query will be like this:
create table table_copy as (select * from table_original where 1=0);
This will create a table, but the constraints of table_original are not copied to table_copy, so what should be done in this case?
Only NOT NULL constraints are copied when using Create Table As Select (CTAS) syntax. The others have to be created manually.
You might, however, query the data dictionary views to see the definitions of the constraints and implement them on your new table using PL/SQL.
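For example, a hedged sketch of such a dictionary query, assuming the original table is named TABLE_ORIGINAL:

SELECT c.constraint_name, c.constraint_type, cc.column_name, c.search_condition
FROM   user_constraints  c
JOIN   user_cons_columns cc ON cc.constraint_name = c.constraint_name
WHERE  c.table_name = 'TABLE_ORIGINAL'
ORDER  BY c.constraint_name, cc.position;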
The other tool that might be helpful is Oracle Data Pump. You could import the table using REMAP_TABLE option specifying the name for the new table.
Use a database tool to extract the DDL needed for the constraints (SQL Developer does the job). Edit the resulting script to match the name of the new table.
Execute the script.
If you need to do this programmatically you can use a query like this:
SELECT DBMS_METADATA.GET_DDL('TABLE', 'PERSON') FROM DUAL;
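If you only want the constraint DDL rather than the whole table definition, the dependent-DDL variant of the same package may help (a sketch; the table name is illustrative, and the call raises an error when no such constraints exist):

SELECT DBMS_METADATA.GET_DEPENDENT_DDL('CONSTRAINT', 'TABLE_ORIGINAL') FROM DUAL;
SELECT DBMS_METADATA.GET_DEPENDENT_DDL('REF_CONSTRAINT', 'TABLE_ORIGINAL') FROM DUAL;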
I am going to create a lot of data scripts such as INSERT INTO and UPDATE statements.
There will be 100,000-plus records, if not 1,000,000.
What is the best way to get this data into Oracle quickly? I have already found that SQL*Loader is not good for this, as it does not update individual rows.
Thanks
UPDATE: I will be writing an application to do this in C#
Load the records into a staging table via SQL*Loader. Then use bulk operations:
INSERT INTO SELECT (for example "Bulk Insert into Oracle database")
mass UPDATE ("Oracle - Update statement with inner join")
or a single MERGE statement
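For instance, a single MERGE against the staged rows might look like this rough sketch (the target table and column names are assumptions, not from the question):

MERGE INTO target_table t
USING stage_table s
ON (t.id = s.id)
WHEN MATCHED THEN
    UPDATE SET t.name = s.name, t.amount = s.amount
WHEN NOT MATCHED THEN
    INSERT (id, name, amount)
    VALUES (s.id, s.name, s.amount);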
To keep it as fast as possible I would keep it all in the database.
Use external tables (to allow Oracle to read the file contents),
and create a stored procedure to do the processing.
The update could be slow. If possible, it may be a good idea to create a new table based on all the records in the old one (with the updates applied) and then switch the new and old tables around.
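A rough sketch of that rebuild-and-swap idea, with every table and column name assumed purely for illustration:

-- build a fresh copy, taking the staged value where one exists
CREATE TABLE target_table_new AS
SELECT t.id,
       NVL(s.name,   t.name)   AS name,
       NVL(s.amount, t.amount) AS amount
FROM   target_table t
LEFT   JOIN ext_stage s ON s.id = t.id;

RENAME target_table     TO target_table_old;
RENAME target_table_new TO target_table;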
How about using a spreadsheet program like MS Excel or LibreOffice Calc? This is how I perform bulk inserts.
Prepare your data in a tabular format.
Let's say you have three columns, A (text), B (number) & C (date). In the D column, enter the following formula. Adjust accordingly.
="INSERT INTO YOUR_TABLE (COL_A, COL_B, COL_C) VALUES ('"&A1&"', "&B1&", to_date ('"&C1&"', 'mm/dd/yy'));"
I was wondering if anyone had any insight on creating an audit trail process in VB6?
I have an application that gets populated with existing data with the use of 3 or 4 classes. The user can then modify any data they wish on this application. Then the data is saved into tables used for a queue. Basically exact copies of the tables the data came from. My problem is I need to create an audit trail.
What is the best practice for this? Compare every control (text box, radio, check box) on the application which is around 100? Or can I utilize the text_changed event of the text boxes? Really have no idea where to start on this one.
Oh and to make it fun, using a Pervasive DB v9.
Thanks for any help.
Cheers
This should always be done inside the DB.
Something like this (cribbed in part from a post on the Pervasive forum; I haven't actually used Pervasive):
create trigger insTrig
before insert on table1
referencing new as new_rec
for each row
insert into table2 values (new_rec.col1, new_rec.col2, new_rec.col3, ...)#

create trigger delTrig
before delete on table1
referencing old as old_rec
for each row
insert into table2 values (old_rec.col1, old_rec.col2, old_rec.col3, ...)#

create trigger updTrig
after update on table1
referencing new as new_rec
for each row
insert into table2 values (new_rec.col1, new_rec.col2, new_rec.col3, ...)#