My Scenario:
I have a table in Vertica with 1000 records loaded on a particular day, say Day 1.
Let the key column be Id.
I need to fake similar data for 100 days, 1000 records per day, with unique values for the key column Id for each day.
I heard it is impossible to create a procedure in Vertica to do repetitive tasks.
Is there any other way to achieve this?
Vertica does not support stored procedures the way some other databases do. Instead it has User Defined Functions: you write them in Java (or another supported language), but they run INSIDE the database as if they were stored procedures. (It also supports External Procedures to run things outside the DB, but I don't think that's what you want.)
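That said, generating the fake data itself doesn't need a procedure; a single INSERT ... SELECT can multiply the Day-1 rows. A minimal sketch, assuming a table named base_table with key column Id (values 1..1000) and a DATE column load_date holding the Day-1 date (all of these names are assumptions):

-- build 99 extra copies of the Day-1 rows, one per additional day
INSERT INTO base_table (Id, load_date /*, other columns */)
SELECT b.Id + d.day_offset * 1000,        -- shift the key so every day stays unique
       b.load_date + d.day_offset         -- shift the load date by N days
       /*, b.other_columns */
FROM base_table b
CROSS JOIN (
    SELECT day_offset
    FROM (SELECT ROW_NUMBER() OVER (ORDER BY Id) AS day_offset
          FROM base_table) x
    WHERE day_offset <= 99                -- 99 extra days on top of Day 1
) d
WHERE b.load_date = '2015-01-01';         -- restrict the source to the Day-1 rows

If the Day-1 Ids are not a contiguous 1..1000 range, use an offset larger than the maximum existing Id instead of 1000.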
Related
I have an Oracle view that returns 5 million records from different tables, and I use this view to insert into different tables from a single procedure. Inside this procedure I query the view several times, and this is affecting performance. Is there any way to query the view once and then reuse the result in multiple places?
A view is a stored query; by itself, it doesn't contain any data. If its code is complex and fetches data from several tables, using different conditions, aggregations, whatnot, it can take some time to access the data.
In your situation, maybe a global (or private, depending on the Oracle version you use) temporary table (GTT) would help, as sketched below:
- you create it once
- at the beginning of the procedure, you insert data from the view into it
- the rest of the procedure works with that prepared data
- once the session (or transaction, depending on how you set the GTT up) is over, the data in the table is lost
- the table can be reused the next time you run the procedure
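For example, a minimal sketch of the GTT approach (your_view stands for your real view; the GTT name and the ON COMMIT choice are assumptions):

-- created once, outside the procedure; keeps rows for the whole session
CREATE GLOBAL TEMPORARY TABLE gtt_view_data
    ON COMMIT PRESERVE ROWS
    AS SELECT * FROM your_view WHERE 1 = 0;   -- copy the structure only, no data

-- inside the procedure: fill it once ...
INSERT INTO gtt_view_data SELECT * FROM your_view;

-- ... use gtt_view_data everywhere the procedure used to query the view ...

-- ... and clear it at the end (or let the session end do it)
DELETE FROM gtt_view_data;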
I'm looking for a simple way to communicate between two databases; there is currently a database link between the two databases.
I want to process a job on database 1 for batches of records (a batch code for each batch of records). Once the process has finished on database 1 and all the batches of records have been processed, I want database 2 to see that database 1 has processed a number of batches (batch codes), either by querying an Oracle table or an Oracle Advanced Queue which sits on either database 1 or database 2.
Database 2 will then process the batches of records that are on database 1 through a database-linked view, using each batch code, and update the status of that batch to complete.
I want to be able to update the Oracle Advanced Queue or database table with the batch number, progress status ('S' = started, 'C' = completed) and status date.
Table name: batch_records
Columns: batch_no, status, status_date
Questions:
Can this be done by a simple database table rather than a complex Oracle Advanced Queue?
Can a table be updated over a database link?
Are there any examples of this?
To answer your questions first:
yes, I believe so
yes, it can. But, if there are many rows involved, it can be pretty slow
probably
A database link is the way to communicate between two databases. If those jobs run on database 1 (DB1), I'd suggest you keep everything there, in DB1. Doing stuff over a database link invites problems of different kinds: it might be slow, and you can't do everything over a database link (LOBs, for example). One option is to schedule a job (using DBMS_SCHEDULER, or DBMS_JOB, which is quite OK for simple things). Let the procedure maintain the job status in some table (that would be the "simple table" from your 1st question) in DB1, which will then be read by DB2.
How? Read it directly, or create a materialized view which is refreshed in a scheduled manner (e.g. every morning at 07:00), on demand (not that good an idea), or on commit (once the DB1 procedure does the job and commits its changes, the materialized view is refreshed).
If there aren't that many rows involved, I'd probably read the DB1 status table directly, and think of other options later (if necessary).
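A minimal sketch of the status-table approach (all object names are made up; db1_link stands for the existing database link defined on DB2):

-- on DB1: the status table your batch procedure maintains
CREATE TABLE batch_records (
    batch_no    NUMBER,
    status      VARCHAR2(1),   -- 'S' = started, 'C' = completed
    status_date DATE
);

-- on DB2: find the batches DB1 has made available
SELECT batch_no
FROM   batch_records@db1_link
WHERE  status = 'S';

-- on DB2: after processing a batch, mark it as completed over the link
UPDATE batch_records@db1_link
SET    status = 'C',
       status_date = SYSDATE
WHERE  batch_no = :batch_no;
COMMIT;

So yes, a plain table works, and yes, it can be updated over a database link; just keep an eye on performance if the row counts grow.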
I want to keep logs of all tables in one single log table. If any DML operation happens on any table inside the DB, it should be logged in that one single table.
But there should be a dynamic trigger which does not hard-code the column names for every table.
Is there any solution for this?
"Is there any solution for this"
No. This is not how databases work. Strongly enforced data structures are what they do, and that applies to audit tables just as much as to transaction tables.
The reason is quite clear: the time you save by not writing audit code specific to each transactional table is the time you will spend writing queries to retrieve the audit records. The difference is that when you're trying to get the audit records out, you will have your boss standing over your shoulder demanding to know when you can tell them what happened to the payroll records last month, or asking how long it will take you to produce that report for the regulators (are you trying to make the company look like a bunch of clowns?). You get the picture. This is not where you want to be.
Also, the performance of a single table to store all the changes to all the tables in the database? That is going to be so slow, you have no idea.
The point is, we can generate the auditing code. It is easy to write some SQL which interrogates the data dictionary and produces DDL for the target tables and triggers to populate those tables.
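For instance, a rough sketch of that generation idea, driven by the data dictionary (the _AUD suffix and the column choices are just illustrative; a real generator also has to handle data types, keys and name-length limits):

-- generate one shadow table per application table
SELECT 'CREATE TABLE ' || table_name || '_AUD AS SELECT * FROM '
       || table_name || ' WHERE 1 = 0;' AS audit_table_ddl
FROM   user_tables;

-- build the :NEW column list used inside each generated trigger body
SELECT table_name,
       LISTAGG(':NEW.' || column_name, ', ')
           WITHIN GROUP (ORDER BY column_id) AS new_column_list
FROM   user_tab_columns
GROUP  BY table_name;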
In fact it gets even easier in 11.2.0.4 and later, because we can use FLASHBACK DATA ARCHIVE (formerly Oracle Total Recall) to build and maintain such journalling functionality automatically, and query it with the AS OF syntax. Find out more.
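As a rough illustration of what that looks like (the archive name, tablespace, retention and table are placeholders, and the feature needs the appropriate privileges):

-- create an archive and attach a table to it
CREATE FLASHBACK ARCHIVE audit_fda TABLESPACE users RETENTION 1 YEAR;
ALTER TABLE payroll FLASHBACK ARCHIVE audit_fda;

-- query the historical state with the AS OF syntax
SELECT *
FROM   payroll AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);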
Okay, so technically there is a solution. You could have a trigger on each table which executes some dynamic PL/SQL to interrogate the data dictionary and assembles a piece of JSON which you stuff into your single table. The single table could be partitioned by day range and sub-partitioned by table name (assuming you have licensed the Partitioning option) to mitigate the performance of querying it.
But that is extremely complex. Running dynamic PL/SQL for every DML statement will have a bad effect on performance, which the users will notice. And this still doesn't solve the fundamental problem of retrieving the audit trail when you need it.
To audit DML actions on any table, just enable such auditing by using the following code:
audit insert table, update table, delete table;
All actions on tables will then be logged to the SYS.DBA_AUDIT_OBJECT view.
Auditing will only log the timestamp, user, host and other parameters, not exact copies of new or old rows.
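For completeness, a small sketch of turning it on and reading it back (this assumes traditional auditing with the AUDIT_TRAIL initialization parameter set to write audit records to the database):

-- enable statement auditing for DML on tables
AUDIT INSERT TABLE, UPDATE TABLE, DELETE TABLE;

-- later, inspect what was logged
SELECT username, userhost, obj_name, action_name, timestamp
FROM   dba_audit_object
ORDER  BY timestamp DESC;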
I have an Oracle 10g database in my application. How can I find out how many times a particular record has been accessed in a particular table?
In general, to do that, you need to access records in the table through a stored procedure, not through a SELECT statement.
But here is how it could be simplified:
you add a requirement that any SELECT on your table should go through a function call:
select yourtable.* from yourtable
where yourfunct('yourtable', yourtable.key) = 'done'
this could easily be done through a view, plus revoking permissions to read the table itself
in your function, you either save the table/key pair in a collection inside a package (you don't need to start a transaction to do that), or you start an autonomous transaction and write into a real table; a sketch of the second variant follows below
writing into a package variable is not thread safe, but it is much faster
starting an autonomous transaction is slower, but it guarantees the result is persisted
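A minimal sketch of the autonomous-transaction variant, reusing the yourtable/yourfunct names from above (the access_log table and the numeric key column are assumptions):

-- the real log table
CREATE TABLE access_log (
    table_name  VARCHAR2(30),
    key_value   NUMBER,
    accessed_at DATE
);

CREATE OR REPLACE FUNCTION yourfunct(p_table VARCHAR2, p_key NUMBER)
    RETURN VARCHAR2
IS
    PRAGMA AUTONOMOUS_TRANSACTION;   -- lets a call made from a SELECT do DML and commit on its own
BEGIN
    INSERT INTO access_log (table_name, key_value, accessed_at)
    VALUES (p_table, p_key, SYSDATE);
    COMMIT;
    RETURN 'done';
END;
/

-- expose the table only through a view that forces the logging call
CREATE OR REPLACE VIEW yourtable_v AS
SELECT yourtable.*
FROM   yourtable
WHERE  yourfunct('yourtable', yourtable.key) = 'done';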
My personal preference would be to question the original task. Maybe it would be enough to create a sort of 'log entry' table where requests for data are recorded.
What can I use instead of
db.Schedules.DeleteAllOnSubmit(db.Schedules);
db.SubmitChanges();
For a table with 1M records it takes ages.
Can I execute a stored procedure or any custom SQL somehow?
Thanks!
Stored procedures are not supported on the phone.
What you are trying will take a long time because you have a lot of records to delete.
There are a couple of things you could try instead:
- delete the file the table is in directly
- split (shard) the data across multiple tables so that you don't have to delete so many records at the same time.