I have a changefeed that works fine until I use the pluck() projection. If I use pluck, it doesn't pick up changes from inserts and deletes in my followers embedded collection.
r.table('users')
  .getAll(name, {index: 'followers'})
  //.without('password', 'phone')
  .pluck('name', 'online') // with pluck, inserts/deletes in followers are not picked up
  .changes({includeInitial: true});
I could use the without command instead, but that seems more error-prone, as I would have to keep updating that list any time I added fields to the user object.
Updates to a user's online property get picked up in the changefeed in either scenario.
Why does pluck not show changes to the followers set/collection property?
I'm not 100% sure, but I think this is because when you add .pluck('name', 'online') to the end and then update the followers array, the changefeed logic applies the pluck and then compares the old value to the new value. Since neither of the plucked fields changed, it decides that it's a "trivial" change and drops it. (In general, ignoring trivial changes is what you want, since one of the main goals of .pluck.changes is to be notified only when the specified fields change.)
I don't think this is the desired behavior, though: it would probably be more useful to drop trivial changes only if they don't cause the row to enter or exit the subscribed range. I opened https://github.com/rethinkdb/rethinkdb/issues/5205 to track that change.
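In the meantime, one workaround you could try (an untested sketch) is to subscribe to the changefeed on the unprojected rows and apply the projection to the change document afterwards, so that inserts and deletes in followers still produce events:
r.table('users')
  .getAll(name, {index: 'followers'})
  .changes({includeInitial: true}) // subscribe before projecting
  .pluck({new_val: ['name', 'online'], old_val: ['name', 'online']}); // project the change document itself
Note that this feed also fires when fields other than name and online change, so you lose the deduplication described above.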
This isn't supported right now. Check this ticket and this.
I know you can soft-delete in Doctrine (i.e. not actually delete a record but rather flag it as deleted). There's an extension for that.
Now I wonder if there's a way to "soft update" a record, meaning not actually update the record but rather create a new record and mark the old one invalid. The same extension library that provides soft-delete also has a Loggable behavior, but that one logs to a different table.
I could create a controller that, instead of updating, soft-deletes (and thus invalidates) the old record, and then creates a new one with the new values. But I'm unsure if this is a good practice. Maybe I should create this action on the object itself? But I'm unsure how to do this.
Edit
I've looked into Versionable and EntityAudit (as suggested by Tomas), but it seems these bundles do way too much. I merely want to check if a given field is different from the old one, and if so: soft-delete the old record (I'm using softDeleteable, so a simple remove() will do), then create a new one with the changed values.
So ideally it would lurk in the shadows until an update is performed, read from the mapping configuration which fields it needs to watch, and, if those fields are indeed different from what's persisted, execute the remove() and persist() calls.
This extension might suit your use case:
simplethings/EntityAudit
It records any changes you want to track, so it should be pretty easy to modify it to meet your needs.
There is a custom field "Lock Flag" on the Account BC, stored in the S_ORG_EXT_X table. This field is made available in the Opportunity BC using a join to the above table. The join specification is: Opportunity.Account Id = Account.Id. Account Id is always populated when creating a new opportunity. The requirement is that for newly created records in the Opportunity BC, if "Lock Flag" equals 'Y', we should not allow the record to be created and should show a custom error message.
My initial proposal was to use a Runtime Event that calls the Data Validation Manager business service, where the validation rule is evaluated and the error message shown. Since we have to decide whether to write the record or not, the logic should be placed in the PreWriteRecord event handler, because by WriteRecord the row has already been committed to the database.
The main problem was how to determine whether it is a new record or an updated one. We have the WriteRecordNew and WriteRecordUpdated runtime events, but they are fired after the record is actually written, so they don't prevent the user from saving the record. My next approach was to use eScript: write custom code in the BusComp_PreWriteRecord server script and call the BC's IsNewRecordPending method to determine if it is a new record, then check the flag and show the error message if needed.
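In outline, that approach looks something like this (a sketch; the exact return value of IsNewRecordPending should be verified on your version):
function BusComp_PreWriteRecord ()
{
    // IsNewRecordPending is invoked as a BC method; whether it returns
    // "Y"/"N" or "1"/"0" may differ, so treat this comparison as illustrative.
    if (this.InvokeMethod("IsNewRecordPending") == "Y")
    {
        // Joined field from S_ORG_EXT_X; this is the part that comes back
        // empty for records that have not been saved yet.
        if (this.GetFieldValue("Lock Flag") == "Y")
        {
            TheApplication().RaiseErrorText("The selected account is locked; the opportunity cannot be created.");
        }
    }
    return (ContinueOperation);
}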
But unfortunately I am faced with another problem. The joined field "Lock Flag" is not populated for newly created opportunity records. Remember, we are talking about the Opportunity BC, and the field lives in the S_ORG_EXT_X table. When we create a new opportunity, we pick the account it belongs to. The behavior is reproducible: OpportunityBC.GetFieldValue("Lock Flag") returns null for a newly created record and returns the correct value for records that were saved previously. For newly created opportunities we have to re-query the BC to see "Lock Flag" populated. I have found several documents, including Oracle's recommendation, suggesting the PreDefaultValue property if we want to display a joined field value immediately after record creation. The most suitable expression I found was Parent: BCName.FieldName, but it does not apply here, because the active BO is Opportunity and the Opportunity BC is the primary one.
Thanks for your patience if you have read this far; finally, here are my questions:
Is there any way to handle the PreWriteRecord event and determine whether it is a new record, without using eScript and the BC's IsNewRecordPending method?
How can I get the value of a joined field for a newly created record, specifically in the PreWriteRecord event handler?
This is Siebel 8.1.
UPDATE: I have found an answer to the first part of my question. Now it seems so simple that I wonder why I didn't do it initially. Here is the solution.
Create a Runtime Event triggered on PreWriteRecord and specify a call to the Data Validation Manager business service.
In DVM, create a ruleset and a rule whose condition is:
NOT(BCHasRows("Opportunity", "Opportunity", "[Id]='"+[Id]+"'", "AllView"))
That's it. We search for a record with the same Row Id. If it is a new record, there shouldn't be anything in the database yet (remember that we are in the PreWriteRecord handler) and the function returns FALSE; if we are updating an existing row, we get TRUE. Reversing the result with NOT makes DVM raise an error for new records.
As for the second part of my question, credit goes to #RanjithR, who proposed using a PickMap to populate the joined field (see below). I have checked that method and it works fine, at least when you have an appropriate PickMap.
We Siebel developers have used scripting to correctly determine whether a record is new. One non-scripting way you could try is to use Runtime Events to set a profile attribute during the BusComp NewRecord event, then check it in the PreWriteRecord event to see whether the record is new. However, there is always a chance that the user undoes the record; those scenarios are tricky.
Another option: try invoking the BC method IsNewRecordPending from a Runtime Event. I haven't tried this.
For the second part of the query, I think you could easily solve your problem using a PickMap.
On the Opportunity BC, when you pick the Account, just add one more pick map entry to copy the lock flag from Account into the corresponding field on the Opportunity BC. When the user picks the Account, he will also pick up the lock flag, and your script will work in PreWriteRecord.
May I suggest another solution? Again, I haven't tried it.
When new records are created, the ModificationNumber field is set to 0. Every time you modify the record, ModificationNumber is incremented by 1.
Set up a Data Validation Manager ruleset and trigger it from the PreSetFieldValue event of the Account field on the Opportunity BC. Check for Lock Flag = 'Y' AND (ModificationNumber IS NULL OR ModificationNumber = 0) and throw an error. DVM should then throw the error when new records are created.
Again, best practice says not to rely on modification numbers. You could set a profile attribute to signal NewRecord, then use that attribute in the DVM. But please remember to clear the profile attribute in WriteRecord and UndoRecord.
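If you do go the scripted route instead, the set/check/clear pattern would look roughly like this (an untested sketch; the attribute name is illustrative):
function BusComp_NewRecord ()
{
    // Flag that a brand-new record is being entered.
    TheApplication().SetProfileAttr("OpptyIsNewRecord", "Y");
}

function BusComp_PreWriteRecord ()
{
    if (TheApplication().GetProfileAttr("OpptyIsNewRecord") == "Y"
        && this.GetFieldValue("Lock Flag") == "Y")
    {
        TheApplication().RaiseErrorText("The selected account is locked; the opportunity cannot be created.");
    }
    return (ContinueOperation);
}

function BusComp_WriteRecord ()
{
    // Clear the flag after a successful save; do the equivalent for
    // UndoRecord (for example via a runtime event) so stale values don't leak.
    TheApplication().SetProfileAttr("OpptyIsNewRecord", "");
}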
Let us know how it went!
I'm writing a simple application using DataMapper. It is somewhat crucial that I maintain consistent IDs (serial property) in my database (which may change freely), so I wrote this simple script that goes through every record and fixes the IDs so that they stay consistent (e.g. 1, 2, 3...).
The problem is, every time I add a new record, it's added with a new ID that breaks the consistency - as if the previous records weren't fixed.
How can I prevent this behavior? Or rather, is there an easier way to maintain a logical progression of IDs? I have a distinct feeling I'm not supposed to alter the IDs by hand.
DataMapper usually creates sequential IDs, but this sequence can differ from your "logical order". Examples:
you create the strip objects in a different order than the one you want them displayed in
you create provisional strip objects (prototypes) and delete them again
...
I think you shouldn't try to force DataMapper to use your own IDs. Instead, I recommend an extra field like "nekkoru_number" which you can calculate according to your own rules. In your case, using a unique name instead of a number may be a good idea too.
Think also of use cases like
inserting an object later
reordering the objects
I'm curious what the difference is between overriding a table's modifiedField method versus overriding the update method.
In our case, we are working on switching a field's data type on a table. Since we cannot just change the data type of the field in place, we create a second field and copy the information from the first into the second. Eventually we update all the UI elements (namely forms and reports) to point to the new field and then remove the old one. To help with copying the information from one field to the other, we have been overriding the update method on the table to copy the value from the first field to the second.
I know this would probably be easier to maintain using the modifiedField method, but I'm curious if there are any significant differences (performance, missed updates, etc) by using the update method instead.
The main difference is that the code in the modifiedField method is executed without writing to the database. This way you can change the value of the second field, but if a user closes the form without saving the record, no updates reach the DB. With the update method, you always write the changes.
I have a custom entity which needs a case number for an XRM application. Can I generate a case number from Service -> Case?
If this is not possible, how can I do it with a plugin? I've looked at crmnumbering.codeplex.com, but it doesn't support 2011. Does anybody out there have a solution, or should I rewrite it myself?
Thanks
I've run into this same type of issue (needing a custom number for an entity). Here's how you can do it:
Create an Entity called "Counter"
Add a field called "new_customnumber", make it a string or a number depending on what you want
Create a new record for that entity with whatever you want in the new_customnumber field (let's say "10000")
Create a plugin (EntityNumberGenerator) that goes out and grabs that record (you'll probably want to set the security on this record/entity really tight so no one can mess with the numbers)
On create of the custom entity, fire the plugin: grab the value in new_customnumber, save it to the custom entity (let's say in a "case" field), increment new_customnumber, and save that back to the Counter entity.
Warning: I'm not sure how this behaves under concurrency, i.e., whether two custom entities being created at the same time can grab the same number (I haven't run into an issue yet). I haven't figured out a way to "lock" a field I've retrieved in a plugin (I'm not sure it's possible).
You will be unable to create a custom number for custom entities from the normal area where you set a case number.
Look at the CRM2011sdk\sdk\samplecode\cs\plug-ins\accountnumberplugin.cs plugin. It's really similar to what you want.
Ry
I haven't seen one for 2011 yet. It's probably easiest to write it yourself.
I've always created a database with a table that has a single IDENTITY column. Write a stored procedure that inserts a row, saves the IDENTITY value to a variable, and deletes the row, all within a transaction, then returns the variable. It makes for a quick and easy plug-in, and it takes care of any concurrency issues.
The performance is fast and the impact on your SQL server is minimal.