Create many transactions for one table in GeneXus - genexus

I'm having trouble with a GeneXus Transaction object.
I want to create two Transactions over the same Informix table.
Does anybody have any idea how to do this?

If you want to define a Data View over an existing database, you can see how to do it here: http://wiki.genexus.com/commwiki/servlet/hwikibypageid?6627
If you want to create two transactions over the same table, you can see the parallel transaction concept here:
http://wiki.genexus.com/commwiki/servlet/hwikibypageid?20209

If two transactions share the same primary key, they are considered "parallel transactions". Put simply, the two transactions refer to the same database table.
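As an illustration, two parallel transaction structures might look like the sketch below (the transaction and attribute names are made up; what matters is that both structures share the key attribute, so GeneXus maps them to the same table):

```
Customer              // first transaction
{
   CustomerId*        // * = key attribute, shared by both transactions
   CustomerName
   CustomerAddress
}

CustomerBalance       // second, "parallel" transaction
{
   CustomerId*        // same key => same underlying CUSTOMER table
   CustomerBalanceAmount
}
```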

Related

Impact on Oracle DB of frequent creation and dropping of tables

I have a use case where we need to store some data for a specific period (let's say 10k rows for 5 minutes), and since the data is large it can't be held in Java memory. Please help me understand which of the following is the best approach, and why:
Create a permanent table and introduce a column that lets me fetch and drop the rows per session.
Create a temporary table for each session and drop it just after the process.
Thanks in advance.
It sounds like you just need to make use of GTTs (global temporary tables); please check out the documentation below. You don't need to worry about truncating or dropping the table, as the data is kept only for the duration of the session.
Documentation
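A minimal sketch of such a table (the table and column names here are hypothetical):

```sql
-- Session-scoped staging table: rows are private to each session and
-- disappear automatically when the session ends.
CREATE GLOBAL TEMPORARY TABLE session_staging (
  id          NUMBER,
  payload     VARCHAR2(4000),
  created_at  TIMESTAMP DEFAULT SYSTIMESTAMP
) ON COMMIT PRESERVE ROWS;  -- keep rows across commits, purge at session end
```

If you only need the rows until the next commit rather than for the whole session, use `ON COMMIT DELETE ROWS` instead.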

Can we persist two different table entity in DynamoDB under one single transaction

I have two tables in Amazon DynamoDB where I have to persist data in a single transaction using Spring Boot. If the persistence fails in the second table, it should roll back the first table as well.
I have tried looking into the AWSLabs Amazon DynamoDB transactions library, but it only helps with a single table.
Try using the built-in DynamoDB transactions capability. From the limited information you give, it should do what you are looking for across tables in the same region. Just keep in mind that there is no rollback per se: either all items in a transaction succeed or none of them do. The internal transaction coordinator handles that for you.
Now that this feature is out, you most likely should not be looking at the AWSLabs tool.
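A rough sketch of what this looks like with the AWS SDK for Java v2's `TransactWriteItems` API (the table names, keys, and values are hypothetical; real code needs configured AWS credentials and error handling for `TransactionCanceledException`):

```java
import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.Put;
import software.amazon.awssdk.services.dynamodb.model.TransactWriteItem;
import software.amazon.awssdk.services.dynamodb.model.TransactWriteItemsRequest;

public class TwoTableTxn {
    public static void main(String[] args) {
        DynamoDbClient ddb = DynamoDbClient.create();

        // One request carrying writes against two different tables:
        // either both puts are applied, or neither is.
        TransactWriteItemsRequest request = TransactWriteItemsRequest.builder()
            .transactItems(
                TransactWriteItem.builder().put(Put.builder()
                    .tableName("Orders")
                    .item(Map.of("orderId", AttributeValue.fromS("o-1")))
                    .build()).build(),
                TransactWriteItem.builder().put(Put.builder()
                    .tableName("OrderEvents")
                    .item(Map.of("eventId", AttributeValue.fromS("e-1")))
                    .build()).build())
            .build();

        ddb.transactWriteItems(request);  // atomic across both tables
    }
}
```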

Non-XA transaction for multiple schemas on the same instance

Currently I am using WebLogic with Oracle.
I have one instance of Oracle DB and two legacy schemas, so I use two datasources.
To keep transactionality I use XA, but from time to time HeuristicExceptions are thrown, causing some inconsistency at the data level.
Since both schemas are on the same instance, is it somehow possible not to use XA and instead define a single datasource that has access to both schemas?
That way I would no longer use XA and would avoid data inconsistency.
Thanks
Do not use a dblink; it is overkill, and this might not even be related to XA. The best solution is to use tables from both schemas through a single datasource: either prefix the tables in your queries with the schema name, or create synonyms in one schema pointing to the tables in the other schema.
It is only a matter of database privileges; there is no need to deal with XA or dblinks. One DB user needs grants to manipulate tables in both schemas.
PS: you can use distributed transactions on connections pointing into the same database if you insist on it, but in your case there is no need for that.
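A minimal sketch of the grant-plus-synonym setup, assuming a hypothetical APP_USER schema used by the datasource and a LEGACY_B schema owning the table:

```sql
-- Let APP_USER (the single datasource's user) touch LEGACY_B's table.
GRANT SELECT, INSERT, UPDATE, DELETE ON legacy_b.orders TO app_user;

-- Option 1: qualify the table by schema name in queries.
SELECT * FROM legacy_b.orders;

-- Option 2: hide the schema behind a synonym in APP_USER's schema.
CREATE SYNONYM orders FOR legacy_b.orders;
SELECT * FROM orders;
```

With this in place, both schemas are reached over one connection, so an ordinary local transaction covers writes to both.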
You can connect to one schema and create a DBLink to the other to get access to the second. I think transactions will work across both schemas.
http://docs.oracle.com/cd/B28359_01/server.111/b28310/ds_concepts004.htm

Rolling back multiple transactions with JDBC

Is it possible to rollback multiple already-commited transactions with JDBC?
According to this link here: http://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html savepoints are only active for the current transaction?
Thanks.
Rolling back already-committed transactions (unlike savepoints!) is not possible on any database as far as I know, and definitely not on Oracle. Yes, savepoints are relevant only to the current transaction.
I'm not sure what your problem is, but if you want to look at the old values of a recently committed table you could use SELECT ... AS OF, or similarly flash back the whole table or even the database.
If you think about it for a while, there are lots of constraints: rolling back individual transactions is sometimes logically impossible without violating a whole lot of data-integrity rules.
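For illustration, flashback on a hypothetical `orders` table might look like this (both forms depend on sufficient undo retention being configured):

```sql
-- Read the committed state of the table as it was five minutes ago.
SELECT *
FROM   orders AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE);

-- Or restore the whole table to that point in time
-- (requires row movement to be enabled on the table).
ALTER TABLE orders ENABLE ROW MOVEMENT;
FLASHBACK TABLE orders TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE);
```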

Auditing in Oracle

I need some help in auditing in Oracle. We have a database with many tables and we want to be able to audit every change made to any table in any field. So the things we want to have in this audit are:
user who made the change
time the change occurred
old value and new value
So we started creating the trigger that was supposed to perform the audit for any table, but then ran into issues.
As I mentioned, we have many tables and we cannot create a trigger for each one. So the idea is to create a master trigger that behaves dynamically for any table that fires it. I tried to do this but had no luck at all... it seems that Oracle restricts a trigger to the single table it is declared on, rather than allowing the dynamic behavior we want.
Do you have any idea on how to do this or any other advice for solving this issue?
If you have 10g Enterprise Edition, you should look at Oracle's Fine-Grained Auditing. It is definitely better than rolling your own.
But if you have a lesser version or for some reason FGA is not to your taste, here is how to do it. The key thing is: build a separate audit table for each application table.
I know this is not what you want to hear because it doesn't match the table structure you outlined above. But storing a row with OLD and NEW values for each column affected by an update is a really bad idea:
It doesn't scale (a single update touching ten columns spawns ten inserts)
What about when you insert a record?
It is a complete pain to assemble the state of a record at any given time
So, have an audit table for each application table, with an identical structure. That means including the CHANGED_TIMESTAMP and CHANGED_USER on the application table, but that is not a bad thing.
Finally, and you know where this is leading, have a trigger on each table which inserts a whole record with just the :NEW values into the audit table. The trigger should fire on INSERT and UPDATE. This gives the complete history, it is easy enough to diff two versions of the record. For a DELETE you will insert an audit record with just the primary key populated and all other columns empty.
Your objection will be that you have too many tables and too many columns to implement all these objects. But it is simple enough to generate the table and trigger DDL statements from the data dictionary (user_tables, user_tab_columns).
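A minimal sketch of one such audit pair, assuming a hypothetical EMPLOYEES table with columns EMPLOYEE_ID, NAME, and SALARY:

```sql
-- Mirror table: same columns plus the two audit columns, created empty.
CREATE TABLE employees_audit AS
  SELECT e.*,
         CAST(NULL AS TIMESTAMP)    changed_timestamp,
         CAST(NULL AS VARCHAR2(30)) changed_user
  FROM   employees e
  WHERE  1 = 0;

-- Trigger: snapshot the :NEW values on every insert or update.
CREATE OR REPLACE TRIGGER employees_audit_trg
AFTER INSERT OR UPDATE ON employees
FOR EACH ROW
BEGIN
  INSERT INTO employees_audit (employee_id, name, salary,
                               changed_timestamp, changed_user)
  VALUES (:NEW.employee_id, :NEW.name, :NEW.salary,
          SYSTIMESTAMP, USER);
END;
/
```

In practice you would generate this DDL for every application table from `user_tab_columns`, as the answer suggests, rather than writing each pair by hand.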
You don't need to write your own triggers.
Oracle ships with flexible and fine grained audit trail services. Have a look at this document (9i) as a starting point.
(Edit: Here's a link for 10g and 11g versions of the same document.)
You can audit so much that it can be like drinking from the firehose - and that can hurt the server performance at some point, or could leave you with so much audit information that you won't be able to extract meaningful information from it quickly, and/or you could end up eating up lots of disk space. Spend some time thinking about how much audit information you really need, and how long you might need to keep it around. To do so might require starting with a basic configuration, and then tailoring it down after you're able to get a sample of the kind of volume of audit trail data you're actually collecting.
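As a starting point, enabling the built-in audit trail for DML on a single table can be as small as this (assuming a hypothetical HR.EMPLOYEES table, and that the `AUDIT_TRAIL` initialization parameter is set, e.g. to `DB`; note that standard auditing records who changed what and when, but not old/new column values):

```sql
-- Record every insert, update, and delete on the table,
-- one audit record per statement execution.
AUDIT INSERT, UPDATE, DELETE ON hr.employees BY ACCESS;

-- Review what was collected.
SELECT username, action_name, timestamp
FROM   dba_audit_trail
WHERE  obj_name = 'EMPLOYEES';
```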
