I have set up my class to control access using ACLs. Only the creator of the object can view or edit the object. There are many thousands of these objects in use in a production environment.
I have a new requirement to, under a certain circumstance, allow a user to remove another user's objects. The simplest analogy is to imagine an admin feature that lets moderators remove all comments by a certain abusive user.
Since the client cannot do this, I am defining a cloud function to handle it, which will be able to use the master key. I pass the user ID from the client to the cloud function, and it should remove all comments by this user.
However, the cloud function is not able to find the comments since they are tied to the user only by ACL. As far as I can tell, it is not possible to query by ACL. Is this accurate?
What is the correct approach here? Do I need an additional column besides the ACL to identify the commenter a second time, simply so I can query by it? This seems duplicative. I will also need to update the many existing records, copying the user specified in the ACL into the new column. Is this even possible?
Or is there some way to build an ACL for use by the cloud function and use it instead of the master key so that the query searches as though it were the user in question?
My final (last resort) suggestion is to fetch all of the objects and then iterate over them, checking the ACLs. This is obviously a poor solution for scale and performance, since I will need to fetch potentially hundreds of thousands of items to check them all.
I need to insert a field in the middle of the existing fields of a database table. I'm currently doing this in VB6 but may get the green light to do it in .NET. Anyway, since Access gives you the ability to "insert" fields in a table, I'm wondering: is there a way to do this in ADOX? If I had to, I could step back and use DAO, but I'm not sure how to do it there either.
If you're wondering why I want to do this: the application's database has changed over time, and I'm being asked to create an upgrade program for some of the installations with older versions.
Any help would be great.
This should not be necessary. Use the correct list of fields in your queries to retrieve them in the required order.
BUT, if you really need to do that, the only way I know is to create a new table with the fields in the required order, read the data from the old table into the new one, delete the old table, and rename the new table as the old one.
I hear you: in Access the order of the fields is important.
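If it helps, here is a rough sketch of that rebuild in Access/Jet SQL. The table and field names (tblOrders, Discount, etc.) are invented for illustration, and each statement would be run separately (Jet SQL has no batches or comment syntax):

CREATE TABLE tblOrders_New (
    OrderID   LONG,
    OrderDate DATETIME,
    Discount  CURRENCY,
    Amount    CURRENCY
);

INSERT INTO tblOrders_New (OrderID, OrderDate, Amount)
SELECT OrderID, OrderDate, Amount
FROM tblOrders;

DROP TABLE tblOrders;

The final step, renaming tblOrders_New back to tblOrders, has to happen outside SQL - for example by changing the TableDef's Name property in DAO - because Jet SQL has no rename statement.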
If you need a comprehensive way to work with ADOX, your go-to place is Allen Browne's website. I used it to go from novice to pro in handling Access database changes. Here it is: www.AllenBrowne.com. Go to Access Tips, then scroll down to ADOX Code.
That is also where I normally refer people who have doubts about the capabilities of Access as a database :)
In your case, you will work through creating a new table with the new field in the right position, copying the data to the new table, applying properties to the fields, deleting the original table, and renaming the new table to the required (original) name.
That is the correct order. Do not apply field properties before copying the data. Some indexes and key properties may not be applied when the fields already have data.
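As a rough illustration of that ordering (names invented), the key and index are added only after the data has been copied into the new table:

ALTER TABLE tblCustomers_New
    ADD CONSTRAINT PK_Customers PRIMARY KEY (CustomerID);

CREATE INDEX idxLastName ON tblCustomers_New (LastName);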
Over time, I have automated this so I just run an application to detect and implement the required changes for me. But that took A LOT of work-weeks.
My task is to make a trigger which will fire when our programmers create, alter, replace, or drop triggers in the database. It must log their changes to two tables which I modelled on the SYS.TRIGGER$ table, with some extra info about the user who made the changes. To keep it simple, I copied the logging principles from the existing audit capability in an ERP system named Galaktika (Galaxy). However, I ran into the well-known problem ORA-04089 (cannot create triggers on objects owned by SYS) and am stuck on it.
Now I'm looking for a way to modify my trigger so that it complies with the database's rules. Here is the original code:
CREATE OR REPLACE TRIGGER MRK_AlTrigger$
BEFORE DELETE OR INSERT OR UPDATE
ON SYS.TRIGGER$
REFERENCING NEW AS New OLD AS Old
FOR EACH ROW
DECLARE
Log_Rec MRK_TRIGGERS_LOG_HEADER.NREC%TYPE;
BEGIN
INSERT INTO MRK_TRIGGERS_LOG_HEADER (DATEOFCHANGE,
USERCODE,
OPERATION,
OBJ#)
VALUES (
SYSDATE,
UID,
CASE
WHEN INSERTING THEN 0
WHEN UPDATING THEN 1
WHEN DELETING THEN 2
END,
CASE
WHEN INSERTING OR UPDATING THEN :new.OBJ#
ELSE :old.OBJ#
END)
RETURNING NRec
INTO Log_Rec;
IF INSERTING OR UPDATING
THEN
INSERT INTO MRK_TRIGGERS_LOG_SPECIF (LOGLINK,
OBJ#,
TYPE#,
UPDATE$,
INSERT$,
DELETE$,
BASEOBJECT,
REFOLDNAME,
REFNEWNAME,
DEFINITION,
WHENCLAUSE,
ACTION#,
ACTIONSIZE,
ENABLED,
PROPERTY,
SYS_EVTS,
NTTRIGCOL,
NTTRIGATT,
REFPRTNAME,
ACTIONLINENO)
VALUES (Log_Rec,
:new.OBJ#,
:new.TYPE#,
:new.UPDATE$,
:new.INSERT$,
:new.DELETE$,
:new.BASEOBJECT,
:new.REFOLDNAME,
:new.REFNEWNAME,
:new.DEFINITION,
:new.WHENCLAUSE,
:new.ACTION#,
:new.ACTIONSIZE,
:new.ENABLED,
:new.PROPERTY,
:new.SYS_EVTS,
:new.NTTRIGCOL,
:new.NTTRIGATT,
:new.REFPRTNAME,
:new.ACTIONLINENO);
END IF;
EXCEPTION
WHEN OTHERS
THEN
-- Consider logging the error and then re-raise
RAISE;
END MRK_AlTrigger$;
/
I can also provide the MRK_TRIGGERS_LOG_HEADER and MRK_TRIGGERS_LOG_SPECIF DDL, but I don't think it is necessary. To summarize, here are the questions I have:
How do I modify the above source to use the CREATE OR REPLACE TRIGGER ... ON DATABASE syntax?
Am I reinventing the wheel by doing this? Is there any common way to do such things? (I noticed that some tables have a logging option, but I believe that is for debugging purposes.)
Any help will be appreciated!
UPD: I came to the decision (thanks to APC) that it is better to keep different versions of the code in source control and record only the revision number in the DB, but I dream about doing this automatically.
"We despaired to appeal to our programmers' neatness so my boss
requires that there must be strong and automatic way to log changes.
And to revert them quickly if we need."
In other words, you want a technical fix for what is a political problem. This does not work. However, if you have your boss's support you can sort it out. But it will get messy.
I have been on both sides of this fence, having worked as developer and development DBA. I know from bitter experience how bad it can be if the development database - schemas, configuration parameters, reference data, etc. - is not kept under control. Your developers will feel like they are flying right now, but I guarantee you they are not tracking all the changes they make in script form. So their changes are not reversible or repeatable, and when the project reaches UAT the deployment will most likely be a fiasco (buy me a beer and I'll tell you some stories).
So what to do?
Privileged access
Revoke access to SYSDBA accounts and application schema accounts from developers. Apart from anything else you may find parts of the application start to rely on privileged accesses and/or hard-coded passwords, and those are Bad Things; you don't want to include those breaches in Production.
As your developers have got accustomed to having such access, this will be highly unpopular. Which is why you need your boss's support. You also must have a replacement approach in place, so leave this action until last. But make no mistake, this is the endgame.
Source control
Database schemas are software too. They are built out of programs, just like the rest of the application, only the source code is DDL and DML scripts not C# or Java. These scripts can be controlled in SVN as with any other source code.
How to organise it in source control? That can be tricky. So recognise that you have three categories of scripts:
Schema scripts which deploy objects
Configuration scripts which insert reference data, manage system parameters, etc
Build scripts which call the other scripts in the right order
Managing the schema scripts is the hardest thing to get right. I suggest you use separate scripts for each object. Also, have separate scripts for tables, indexes, and constraints. This means you can build all the tables without needing to arrange them in dependency order.
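For example (hypothetical tables, names invented), the bare table lives in one script and its keys in another, so the table scripts can run in any order:

-- tables/orders.sql
CREATE TABLE orders (
    order_id    NUMBER NOT NULL,
    customer_id NUMBER NOT NULL,
    order_date  DATE
);

-- constraints/orders.sql
ALTER TABLE orders ADD CONSTRAINT orders_pk PRIMARY KEY (order_id);
ALTER TABLE orders ADD CONSTRAINT orders_customers_fk
    FOREIGN KEY (customer_id) REFERENCES customers (customer_id);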
Handling change
The temptation will be to just control a CREATE TABLE statement (or whatever). This is a mistake. In actuality changes to the schema are just as likely to add, drop or modify columns as to introduce totally new objects. Store a CREATE TABLE statement as a baseline, then manage subsequent changes as ALTER TABLE statements.
One file for the CREATE TABLE and subsequent ALTER TABLE commands, or separate ones? I'm comfortable having one script: I don't mind if a CREATE TABLE statement fails when I'm expecting the table to already be there. But this can be confusing if others will be running the scripts in, say, Production. So have a baseline script, then separate scripts for applying changes. One alter script per object per time-box is a good compromise.
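Continuing the hypothetical orders table above, the baseline script keeps the original CREATE TABLE, and each later change gets its own ALTER script named for the time-box (the file name is just a suggestion):

-- changes/tb07_orders_add_status.sql
ALTER TABLE orders ADD (status VARCHAR2(10) DEFAULT 'NEW');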
Changes from developers consist of:
alter table script(s) to apply the change
mirrored alter table script(s) to reverse the change (see the sketch after this list)
other scripts, e.g. DML
a change reference number (which they will use in SVN)
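So for the hypothetical status column above, the pair a developer hands over might look like this:

-- apply: tb07_orders_add_status.sql
ALTER TABLE orders ADD (status VARCHAR2(10) DEFAULT 'NEW');

-- revert: tb07_orders_add_status_undo.sql
ALTER TABLE orders DROP COLUMN status;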
Because you're introducing this late in the day, you'll need to be diplomatic. So make the change process light and easy to use. Also make sure you check and run the scripts as soon as possible. If you're responsive and do things quickly enough the developers won't chafe under the restricted access.
Getting to there
First of all you need to establish a baseline. Something like DBMS_METADATA will give you CREATE statements for all current objects. You need to organise them in SVN and write the build scripts. Create a toy database and get this right.
This may take some time, so remember to refresh the DDL scripts so they reflect the latest state of the schema. If you have access to a schema comparison tool, that would be very handy right now.
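A minimal sketch of pulling that baseline with DBMS_METADATA, assuming SQL*Plus and the application schema (adjust the transform parameters and object types to your needs):

SET LONG 1000000
SET PAGESIZE 0

BEGIN
   DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
   DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SEGMENT_ATTRIBUTES', FALSE);
END;
/

SELECT DBMS_METADATA.GET_DDL(object_type, object_name)
FROM   user_objects
WHERE  object_type IN ('TABLE', 'INDEX', 'TRIGGER');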
Next, sort out the configuration. Hopefully you already know which tables contain reference data; otherwise, ask the developers.
In your toy database, practice zapping the database and building it from scratch. You can use something like Ant or Hudson to automate this if you're feeling adventurous, but at the very least you need some shell scripts to get a build out of SVN.
Making the transition
This is the big one. Announce the new regime to the developers. Get your boss to attend the meeting. Remind the developers to inform you of any changes they make to the database.
That night:
Take a full export with Data Pump
Drop all the application schemas.
Build the application from SVN
Reload the data - but not the data structures - with Data Pump
Hopefully you won't have any structural issues; but if a developer has made changes without telling you, you'll know - and they won't have any data in the table.
Make sure you revoke the SYSDBA access as soon as possible.
The developers will need access to a set of schemas so they can write the ALTER scripts. If the developers don't have local personal databases or private schemas to test things in, I suggest you let them have access to that toy database to test change scripts. Alternatively, you can let them keep the application owner access, because you'll be repeating the Trash'n'Rebuild exercise on a regular basis. Once they get used to the idea that they will lose any changes they don't tell you about, they will knuckle down and start Doing The Right Thing.
Last word
Obviously this is a lot of vague windbaggery, lacking in solid detail. But that's politics for you.
Postscript
I was at a UKOUG event yesterday and attended a session by a couple of smart chaps from Redgate. They have a product, Source Control for Oracle, which provides an interface between (say) SVN and the database. It takes a rather different approach from what I outlined above, but their approach is a sound one. Their tool automates a lot of things, and I think it might help you a lot in your current situation. I must stress that I haven't actually used this product, but I think you should check it out - there's a 28-day free trial. Of course, if you don't have any money to spend then this won't help you.
You can find the desired information in the following event attribute functions:
dictionary_obj_name
dictionary_obj_owner
ora_sysevent
Here is a simple ON DATABASE trigger:
CREATE OR REPLACE TRIGGER trigger_name
   AFTER CREATE OR DROP ON DATABASE
BEGIN
   IF dictionary_obj_type = 'TRIGGER'
   THEN
      INSERT INTO log_table (trg_name, trg_owner, trg_action)
           VALUES (dictionary_obj_name, dictionary_obj_owner, ora_sysevent);
   END IF;
END;
/
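To get closer to the original requirement (create, alter, and drop), a variant along these lines might work. Note that mrk_trigger_ddl_log and its columns are invented here, and the ora_dict_obj_* / ora_login_user event attribute functions are the newer equivalents of the ones listed above; CREATE OR REPLACE fires the CREATE event, so "replace" is covered as well:

CREATE OR REPLACE TRIGGER mrk_trigger_ddl_audit
   AFTER CREATE OR ALTER OR DROP ON DATABASE
BEGIN
   IF ora_dict_obj_type = 'TRIGGER'
   THEN
      INSERT INTO mrk_trigger_ddl_log
                  (changed_on, changed_by, operation, trg_owner, trg_name)
           VALUES (SYSDATE,
                   ora_login_user,
                   ora_sysevent,
                   ora_dict_obj_owner,
                   ora_dict_obj_name);
   END IF;
END;
/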
I'm trying to follow Zend Framework's conventions as much as possible.
In my application, is it recommended to write a DbTable, a Mapper, and a Model class for every single table in my DB? Even tables like user_permission? (The fields on that one are user_id and permission_id, which together form the PK.)
If the answer is no, then how would a situation like that be modelled?
I wouldn't create a model for each table but for each domain object and add a reference to the other domain. In your case that'd be User and Permission. You'd still have to create a DbTable for each table, but that shouldn't be too hard.
I think survithedeepend does a really good job of explaining this, even if it's a lot to read. It might be helpful for further reading on data modeling ;)
If we put aside the rights and wrongs of putting demo data into a live system for a minute (that's a whole separate discussion!), we are being asked to store some demo data in our live system so that it can be credibly demonstrated without the appearance of smoke and mirrors (we want to use the same login page, for example).
Since I'm sure this is a challenge many other people must have had, I'd be interested to know what approaches people have devised for separating this data so that it doesn't get in the way of day-to-day operations on their systems.
As I alluded to above, I'm aware that this probably isn't best practice. :-)
Can you instead segregate the data into a new database and just redirect your connection strings (they're not hard-coded, right? right?) to point to the demo database? This way, live data isn't tainted, and your code looks identical. We actually do a three-tier deployment system this way, where we do local development, deploy to QC environments that have snapshots of the live data every few months, and then deploy to live when testing is complete.
FWIW, we're looking at using Oracle's row level security / virtual private database feature to separate the demo data from the rest.
I've often seen it on certain types of live systems.
For example, point of sale systems in a supermarket: cashiers are trained on the production point of sale terminals.
The key is to carefully identify the test or training data. I wouldn't say that there's any explicit best practice for how to model this in a database - it's going to be application specific.
You really have to carefully define the scope of what is covered by the test/training scenarios. For example, you don't want the training/test transactions to appear in production reports (but you may want to be able to create reports with this data for training/test purposes).
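One rough shape for that (illustrative only - the table and column names are invented) is an explicit origin flag on the transactional tables, which the production reports then filter on:

ALTER TABLE pos_transactions ADD (record_origin VARCHAR2(8) DEFAULT 'PROD' NOT NULL);

-- production reporting ignores training rows
SELECT SUM(amount) FROM pos_transactions WHERE record_origin = 'PROD';

-- training reports can slice the same data the other way
SELECT COUNT(*) FROM pos_transactions WHERE record_origin = 'TRAIN';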
Completely disagree with Joe. Oracle has a tool to do this regardless of implementation. Before I read your answer I was going to say VPD... But that could have an impact on Production.
Remember: every table in a query changes from
SELECT * FROM tableA
to
SELECT * FROM (SELECT * FROM tableA WHERE Data_quality = 'PROD' <or however you do it>)
Every table with a policy that is...
So assuming your test data has to span EVERY table, every table will have to have a policy and every table will be filtered before a SQL can begin working.
You can even hide that column from the users. You'll need to write the policy with some deftness if you do. You'll have to create that value based on how the data is inserted and expose the column to certain admin accounts for maintenance.
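For concreteness, a minimal sketch of one such policy (all names invented; the real thing needs more care around the admin accounts mentioned above):

-- policy function: returns the predicate VPD appends to queries on the table
CREATE OR REPLACE FUNCTION demo_data_policy (
   p_schema IN VARCHAR2,
   p_object IN VARCHAR2)
   RETURN VARCHAR2
IS
BEGIN
   -- designated demo/admin accounts see everything
   IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'DEMO_ADMIN'
   THEN
      RETURN NULL;   -- no filtering
   END IF;
   RETURN 'data_quality = ''PROD''';
END demo_data_policy;
/

-- attach the policy to one of the application tables
BEGIN
   DBMS_RLS.ADD_POLICY(
      object_schema   => 'APP',
      object_name     => 'TABLEA',
      policy_name     => 'tablea_demo_filter',
      function_schema => 'APP',
      policy_function => 'DEMO_DATA_POLICY',
      statement_types => 'SELECT,UPDATE,DELETE');
END;
/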