Windows Phone database. How to erase a table fast? - windows-phone-7

What can I use instead of
db.Schedules.DeleteAllOnSubmit(db.Schedules);
db.SubmitChanges();
For a table with 1M records it takes ages.
Can I execute a stored procedure or any custom SQL somehow?
Thanks!

Stored procedures are not supported on the phone.
What you are trying will take a long time because you have a lot of records to delete.
There are a couple of things you could try instead:
- delete the file the table is in directly
- split (shard) the data across multiple tables so that you don't have to delete so many records at the same time.

Related

Avoiding frequent calls to the same view inside an Oracle procedure

I have an Oracle view that returns 5 million records from different tables, and I use it inside a single procedure to insert into different tables. The procedure queries the view several times, and this is hurting performance. Is there any way to query the view once and then reuse the result in multiple places?
A view is a stored query; by itself, it doesn't contain any data. If its code is complex and fetches data from several tables, using different conditions, aggregations, and so on, accessing it can take some time.
In your situation, maybe a global (or private, depending on the Oracle version you use) temporary table (GTT) would help:
- you create it once
- at the beginning of the procedure, you insert data from the view into it
- the rest of the procedure works with that prepared data
- once the session (or transaction, depending on how you set the GTT up) is over, the data in the table is lost
- the table can be reused the next time you run the procedure
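A minimal sketch of that approach, with illustrative names only (v_big_view for the expensive view, target_a and target_b for the destinations):

-- Created once, outside the procedure; copies the view's structure, no rows.
CREATE GLOBAL TEMPORARY TABLE gtt_big_view
ON COMMIT PRESERVE ROWS        -- session-scoped; use DELETE ROWS for transaction scope
AS SELECT * FROM v_big_view WHERE 1 = 0;

CREATE OR REPLACE PROCEDURE load_targets AS
BEGIN
  -- The expensive view is queried exactly once per run.
  INSERT INTO gtt_big_view SELECT * FROM v_big_view;

  -- Every later step reads the prepared data instead of re-running the view.
  INSERT INTO target_a SELECT id, col1 FROM gtt_big_view WHERE col1 IS NOT NULL;
  INSERT INTO target_b SELECT id, col2 FROM gtt_big_view WHERE col2 > 0;
END;
/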

Dynamic Audit Trigger

I want to keep logs of all tables in one single log table: if any DML operation runs on any table inside the DB, it should be logged in that single table.
The trigger should be dynamic, so that the column names for every table are not hard-coded.
Is there any solution for this?
"Is there any solution for this"
No. This is not how databases work. Strongly enforced data structures are what they do, and that applies to audit tables just as much as to transaction tables.
The reason is quite clear: the time you save by not writing audit code specific to each transactional table is the time you will spend writing a query to retrieve the audit records. The difference is that when you're trying to get the audit records out, you will have your boss standing over your shoulder demanding to know when you can tell them what happened to the payroll records last month, or asking how long it will take you to produce that report for the regulators (are you trying to make the company look like a bunch of clowns?). You get the picture. This is not where you want to be.
Also, the performance of a single table to store all the changes to all the tables in the database? That is going to be so slow, you have no idea.
The point is, we can generate the auditing code. It is easy to write some SQL which interrogates the data dictionary and produces DDL for the target tables and triggers to populate those tables.
In fact it gets even easier in 11.2.0.4 and later, because we can use FLASHBACK DATA ARCHIVE (formerly Oracle Total Recall) to build and maintain such journalling functionality automatically, and query it with the AS OF syntax.
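A hedged sketch of what that looks like; the archive, tablespace, and table names are illustrative, and you need the relevant privileges:

CREATE FLASHBACK ARCHIVE audit_fda TABLESPACE users RETENTION 1 YEAR;
ALTER TABLE payroll FLASHBACK ARCHIVE audit_fda;
-- History is then queryable with AS OF:
SELECT * FROM payroll AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);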
Okay, so technically there is a solution. You could have a trigger on each table which executes some dynamic PL/SQL to interrogate the data dictionary and assembles a piece of JSON which you stuff into your single table. The single table could be partitioned by day range and sub-partitioned by table name (assuming you have licensed the Partitioning option) to mitigate the performance of querying it.
But that is extremely complex. Running dynamic PL/SQL for every DML statement will have a bad effect on performance, which the users will notice. And this still doesn't solve the fundamental problem of retrieving the audit trail when you need it.
To audit DML actions on any table, just enable auditing with the following code:
audit insert table, update table, delete table;
All such actions will then be logged in the DBA_AUDIT_OBJECT data dictionary view.
Note that this audit only logs the timestamp, user, host and other parameters, not exact copies of the new or old rows.
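For example, once auditing is enabled (this assumes the AUDIT_TRAIL initialization parameter is set to DB; the audited table name is illustrative), the trail can be inspected like this:

SELECT username, obj_name, action_name, timestamp
FROM   dba_audit_object
WHERE  obj_name = 'PAYROLL'
ORDER  BY timestamp DESC;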

How to safely update a Hive external table

I have an external Hive table and I would like to refresh the data files on a daily basis. What is the recommended way to do this?
If I just overwrite the files, and we are unlucky enough to have some other Hive queries executing in parallel against this table, what will happen to those queries? Will they just fail? Will my HDFS operations fail? Or will they block until the queries complete?
If availability is a concern and space isn't an issue, you can do the following (a sketch follows this list):
- Make a synonym for the external table, and make sure all queries use this synonym when accessing the table.
- When loading new data, load it into a new table with a different name.
- When the load is complete, point the synonym to the newly loaded table.
- After an appropriate length of time (long enough for any running queries to finish), drop the previous table.
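Hive has no synonym object as such, so a view can play that role. A rough sketch in HiveQL, with illustrative table names, columns, and paths:

-- The new day's load goes into a brand-new table and directory:
CREATE EXTERNAL TABLE sales_20240102 (id INT, amount DOUBLE)
LOCATION '/data/sales/20240102';

-- Once the files are fully in place, repoint the "synonym" in one statement:
CREATE OR REPLACE VIEW sales AS SELECT * FROM sales_20240102;

-- Later, after running queries have had time to finish:
DROP TABLE sales_20240101;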
First of all, when a table is being accessed it may be protected by two types of locks: exclusive (while data is being written) and shared (while data is being read). So if you run an INSERT OVERWRITE to add data to the table, any other queries you run against it at the same time won't execute, because the write holds an exclusive lock on the table; once the INSERT OVERWRITE completes, you can access the table again.
Please refer to the following link:
https://cwiki.apache.org/confluence/display/Hive/Locking
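You can observe these locks directly, assuming concurrency support is enabled (hive.support.concurrency=true); the table name here is illustrative:

SHOW LOCKS sales;
-- While an INSERT OVERWRITE TABLE sales ... is running, this reports an
-- EXCLUSIVE lock, and concurrent SELECTs wait until it is released.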

Alternative to the procedure concept in Vertica DB

My scenario:
I have a table in Vertica with 1000 records loaded on a particular day, say Day 1.
Let the key column be Id.
I need to fake similar data for 100 days: 1000 records per day, with values for the key column Id that are unique for each day.
I heard it is impossible to create a procedure in Vertica to do repetitive tasks.
Is there any other way to achieve this?
Vertica does not support stored procedures the way some other DBs do. Instead it has User-Defined Functions: you write them in Java or another supported language, but they run INSIDE the database as if they were stored procedures. (It also supports External Procedures to run things outside the DB, but I don't think that's what you want.)
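That said, for this particular task you may not need a procedure or a UDx at all: a plain SQL cross join can multiply the Day-1 rows. A rough sketch, assuming the seed rows sit in a table day1 (ids 1-1000 plus a load_date column) and the destination is big_table; all names are illustrative:

INSERT INTO big_table (id, load_date)
SELECT s.id + d.day_offset * 1000,   -- shifts the ids so each day's keys stay unique
       s.load_date + d.day_offset    -- date + integer adds that many days
FROM day1 s
CROSS JOIN (
    -- build the numbers 1..100, one per extra day
    SELECT ROW_NUMBER() OVER () AS day_offset
    FROM (SELECT 1 AS dummy FROM day1 LIMIT 100) x
) d;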

Best way to bulk insert data into Oracle database

I am going to create a lot of data scripts, such as INSERT INTO and UPDATE statements.
There will be 100,000-plus records, if not 1,000,000.
What is the best way to get this data into Oracle quickly? I have already found that SQL*Loader is not good for this, as it does not update individual rows.
Thanks
UPDATE: I will be writing an application to do this in C#.
Load the records into a stage table via SQL*Loader, then use bulk operations:
- INSERT INTO ... SELECT (for example "Bulk Insert into Oracle database")
- a mass UPDATE ("Oracle - Update statement with inner join")
- or a single MERGE statement
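A rough sketch of the MERGE route, assuming the file was loaded into stage_t and the destination is target_t with a key column id (names and columns are illustrative):

MERGE INTO target_t t
USING stage_t s
ON (t.id = s.id)
WHEN MATCHED THEN
  UPDATE SET t.col1 = s.col1, t.col2 = s.col2   -- existing rows get updated
WHEN NOT MATCHED THEN
  INSERT (id, col1, col2) VALUES (s.id, s.col1, s.col2);  -- new rows get inserted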
To keep it as fast as possible, I would keep it all in the database:
use external tables (to allow Oracle to read the file contents)
and create a stored procedure to do the processing.
The update could be slow; if possible, it may be a good idea to create a new table based on all the records in the old one (with the updates applied) and then switch the new and old tables around.
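A minimal sketch of the external-table half of this, assuming a comma-separated file sitting in a directory on the database server (the directory path, file name, and columns are all illustrative; CREATE DIRECTORY needs the appropriate privilege):

CREATE DIRECTORY data_dir AS '/u01/app/load';

CREATE TABLE ext_records (
  id   NUMBER,
  name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('records.csv')
);

-- The stored procedure can then read ext_records like any other table.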
How about using a spreadsheet program like MS Excel or LibreOffice Calc? This is how I perform bulk inserts.
Prepare your data in a tabular format.
Let's say you have three columns, A (text), B (number) & C (date). In the D column, enter the following formula. Adjust accordingly.
="INSERT INTO YOUR_TABLE (COL_A, COL_B, COL_C) VALUES ('"&A1&"', "&B1&", to_date ('"&C1&"', 'mm/dd/yy'));"
