I'm trying to assign an account to another user using the assign button (standard Microsoft), but I get this error:
Sql error: The operation attempted to insert a duplicate value for an attribute with a unique constraint. CRM ErrorCode: -2147012606 Sql ErrorCode: -2146232060 Sql Number: 2627
Can anyone help me?
Thanks
Sounds like you have a duplicate or orphaned record in your PrincipalObjectAccess table. Are you on-premises with access to SQL Server? If so, check for an existing relationship in that table between the Account GUID and the User GUID, and delete that row if it exists. If a row is orphaned, you may also need to check whether your Deletion Service cleanup job is running to completion. Best practice is to schedule that job during low-usage times.
https://us.hitachi-solutions.com/blog/unmasking-the-crm-principalobjectaccess-table/
Deletion Service information: https://darrenliu.wordpress.com/2014/04/03/crm-2013-maintenance-jobs/
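If you do have SQL access, a quick check might look like the sketch below. This is only an assumption-laden illustration: the PrincipalObjectAccess column names follow the linked article, and the connection string and GUIDs are placeholders for your own values.

# Hedged sketch: look for an existing PrincipalObjectAccess row linking the
# Account to the target User (on-premises deployments only). Column names
# follow the linked article; connection string and GUIDs are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=crmsql;"
    "DATABASE=Org_MSCRM;Trusted_Connection=yes"
)
cur = conn.cursor()
cur.execute(
    """
    SELECT PrincipalObjectAccessId, AccessRightsMask
    FROM PrincipalObjectAccess
    WHERE ObjectId = ? AND PrincipalId = ?
    """,
    ("<account-guid>", "<target-user-guid>"),
)
for row in cur.fetchall():
    print(row)  # a returned row is a candidate for careful, backed-up deletion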
Does it happen if you try and assign the problem record to a different user?
Does the user you are assigning the record to have full permissions to that record and any records that are used on that form via quick views etc?
This error may also be caused by entity alternate keys, if in use.
https://learn.microsoft.com/en-us/dynamics365/customer-engagement/customize/define-alternate-keys-reference-records
Also check the plugin trace logs in case something shows up there that may help. Although I know you have said you don't believe any plugins or workflows are present, an out-of-the-box one could be causing issues.
I want to prevent users from logging in to Oracle BI 12c with the same "username" more than once.
I also checked many documents and saw parameters like "Max Session Limit", but they didn't solve my problem.
Thanks for any guidance toward a solution.
Just as a wrap-up: OBIEE is an analytical platform, and you have to think about connections in a different way. As cdb_dba said:
1.) take a step back
2.) think about what you want to do
3.) learn and comprehend how the tool works and does things
4.) decide on how you implement and control things by matching #2 and #3
You can configure this using Database Resource Manager, or by creating a customized profile for the group of users you want to limit sessions for.
Oracle's documentation on profiles can be found at the following link. You want to define the SESSIONS_PER_USER parameter as 1. https://docs.oracle.com/database/121/SQLRF/statements_6012.htm#SQLRF01310
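A minimal sketch, assuming python-oracledb, a DBA-privileged connection, and illustrative profile/user names:

# Hedged sketch: cap each user at one concurrent session via a profile.
import oracledb

conn = oracledb.connect(user="admin", password="...", dsn="dbhost/orclpdb1")
cur = conn.cursor()
cur.execute("CREATE PROFILE one_session LIMIT SESSIONS_PER_USER 1")
cur.execute("ALTER USER report_user PROFILE one_session")
# Profile resource limits are only enforced while RESOURCE_LIMIT is TRUE
# (the default in 12c; older releases defaulted to FALSE).
cur.execute("ALTER SYSTEM SET RESOURCE_LIMIT = TRUE")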
Edit based on the additional Requirements:
After giving it some thought, I'm not sure you could do something like this at the profile level. You'll probably have to do something like creating a trigger based on the v$session table, which has SCHEMANAME, OSUSER, and MACHINE columns. Since your users share the same schema, you may be able to create a trigger that throws an error like "ERROR: Only one Connection per User/Machine" based on either the MACHINE or the OSUSER column of v$session. This is less than ideal for a number of reasons, and your developers will probably hate you, but if you absolutely need to do something like this, it is possible.
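To make the trigger idea concrete, here is a hedged sketch. The trigger name and error text are illustrative, it assumes SELECT privilege on v_$session, and it keys on OSUSER; switch the predicate to MACHINE if that fits your situation better.

# Hedged sketch: a database-level logon trigger that rejects a second
# session from the same OS user in the shared schema.
import oracledb

conn = oracledb.connect(user="admin", password="...", dsn="dbhost/orclpdb1")
conn.cursor().execute("""
CREATE OR REPLACE TRIGGER one_session_per_osuser
AFTER LOGON ON DATABASE
DECLARE
    n PLS_INTEGER;
BEGIN
    SELECT COUNT(*) INTO n
    FROM v$session
    WHERE schemaname = SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA')
      AND osuser = SYS_CONTEXT('USERENV', 'OS_USER');
    IF n > 1 THEN  -- this session is already counted, so > 1 means a duplicate
        RAISE_APPLICATION_ERROR(-20001, 'ERROR: Only one Connection per User/Machine');
    END IF;
END;""")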
I have two tables:
ld_tbl - a partitioned table.
tgt_tabl - a non partitioned table.
In my program I'm executing
alter table ld_tbl exchange partition prt with table tgt_tabl;
and after the exchange has finished I'm executing a drop to the ld_tbl.
The problem is that if someone has fired a query against tgt_tabl, it throws an exception:
ORA-08103: object no longer exists
Even though I drop only ld_tbl and don't touch tgt_tabl. After several tests, I'm sure it's the drop that causes the exception. According to this information: Object no longer exists, the solution is to defer the drop.
My question is: how much time is needed between the exchange and the drop? How can I know that an operation like the drop will not hurt the other table?
Thanks.
"how much time need to be between the drop and the exchange?"
The pertinent question is: why is anybody running queries on TGT_TABL at all? If I understand your situation correctly, it is a transient table used for loading data through partition exchange, so no business user ought to be querying it (they should wait until the data goes live in the partitioned table).
If the queries are coming from non-business users (DBAs, support staff), my suggestion would be to just continue as you do now and send an email to those people explaining why they may occasionally get ORA-08103 errors.
If the queries are coming from business users then it's more difficult. There is no point in deferring the drop, because somebody could be running a query whenever you schedule it. You need to track down the users who are running these queries, discover why they are doing it and figure out whether there's some other way of satisfying the need.
But I don't think there's a technical fix you could apply. By using partition exchange you are already minimizing the window in which this error can occur.
I'm trying to create a database table for every single username. I see that for every username I can add more columns to its row, but I want to give each one a full table. How can I do that?
Thanks,
Eli
First let me say that what you are trying to do sounds like really, really bad database design, and you should rethink the idea of creating a table per user. To get a good answer, add far more detail about your reasoning to the question. As far as I know, there is also a maximum number of classes you can create on Parse, so sooner or later you will run into problems, either performance-wise or due to technical limitations of the platform.
That being said, you can use the Schema API to programmatically create/delete/update tables of your Parse app. It always requires the master key, so doing this from the client side is not recommended for security reasons. You could put this into a Cloud Code function for example and call this one from your app/admin tool to create a new table for a user on the fly or delete a table of a user.
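For example, a minimal sketch of the Schema API call over REST. The server URL, app keys, class name, and field are all placeholders, and on a current self-hosted Parse Server the base path may differ from the legacy api.parse.com one shown here.

# Hedged sketch: create a per-user class via the Parse Schema API.
# Requires the master key, so run it server-side (e.g., from Cloud Code
# or an admin tool), never from the client.
import requests

PARSE_SERVER = "https://api.parse.com/1"  # or your self-hosted Parse Server URL
headers = {
    "X-Parse-Application-Id": "YOUR_APP_ID",
    "X-Parse-Master-Key": "YOUR_MASTER_KEY",
    "Content-Type": "application/json",
}
body = {
    "className": "UserData_eli",              # illustrative per-user class name
    "fields": {"score": {"type": "Number"}},  # illustrative field
}
resp = requests.post(f"{PARSE_SERVER}/schemas/UserData_eli", headers=headers, json=body)
resp.raise_for_status()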
Again, better not to do it; think about a better way to design your database. It would be out of scope to discuss that here.
I'm trying to initialize the data in my Azure tables, but I only want this to happen once on the server at startup (i.e., via the WebRole RoleEntryPoint OnStart routine). The problem is that if multiple instances start up at the same time, any of them could add records to the same table simultaneously, duplicating the data at runtime.
Is there an overarching routine for all instances? An application object into which I can shove a value and check it from each instance to see whether the tables have been created? A singleton of some sort that Azure exposes?
Cheers
Rob
No, but you could use a Blob lease as a mutex. You could also use a table lock in SQL Azure, if you're using that.
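A minimal sketch of the blob-lease mutex, assuming the modern azure-storage-blob SDK (the SDK of the WebRole era had a different API) and a hypothetical seed_tables() helper:

# Hedged sketch: the instance that wins the lease seeds the tables;
# everyone else skips the work. Container/blob names are illustrative.
from azure.core.exceptions import HttpResponseError
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    conn_str="...", container_name="locks", blob_name="init-lock"
)
try:
    blob.upload_blob(b"", overwrite=False)  # ensure the lock blob exists
except HttpResponseError:
    pass  # already there: fine

try:
    lease = blob.acquire_lease(lease_duration=60)  # fails if another instance holds it
except HttpResponseError:
    lease = None  # someone else is initializing

if lease:
    try:
        seed_tables()  # hypothetical one-time initialization
    finally:
        lease.release()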
You could also use a Queue, and drop a message in there and then just one role would pick up the message and process it.
You could create a new single instance role that does this job on role start.
To be really paranoid and to handle a failure in the middle of writing the data, you can do something even more complex. A queue message is a great way to get transactional behavior, as long as the work you are doing is idempotent; a code sketch follows the list below.
Each instance adds a message to a queue.
Each instance polls the queue and, on receiving a message:
- Reads the locking row from the table.
- If the ‘create data state’ value is ‘unclaimed’:
  - Attempt to update the row with an ‘in process’ value and a timeout expiration timestamp based on the amount of time needed to create the data.
  - If the update is successful, the instance owns the task of creating the data, so it creates the data, updates the ‘create data state’ to ‘committed’, and deletes the message.
  - Else, if the update is unsuccessful, the instance does not own the task, so it just deletes the message.
- Else, if the ‘create data state’ value is ‘in process’, check whether the current time is past the expiration timestamp. That would imply the ‘in process’ attempt failed, so try all over again: set the state back to ‘in process’, delete the incompletely written rows, recreate the data, update the state, and delete the message.
- Else, if the ‘create data state’ value is ‘committed’, just delete the queue message, since the work has already been done.
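A minimal sketch of that locking-row state machine, assuming the modern azure-data-tables SDK (again, not the WebRole-era client), a pre-created lock entity whose State starts as 'unclaimed', and hypothetical seed_tables()/delete_partial_rows() helpers:

# Hedged sketch: the ETag-conditioned update makes "attempt to update the
# row" atomic, so exactly one instance wins the right to create the data.
from datetime import datetime, timedelta, timezone
from azure.core import MatchConditions
from azure.core.exceptions import HttpResponseError
from azure.data.tables import TableClient, UpdateMode

def try_claim(table: TableClient) -> bool:
    """Return True if this instance won the task of creating the data."""
    lock = table.get_entity(partition_key="init", row_key="lock")
    now = datetime.now(timezone.utc)
    expired = lock["State"] == "in process" and now > lock["Expires"]
    if lock["State"] == "committed":
        return False  # work already done
    if lock["State"] == "in process" and not expired:
        return False  # someone else is working and hasn't timed out
    lock["State"] = "in process"
    lock["Expires"] = now + timedelta(minutes=5)  # time budget for seeding
    try:
        table.update_entity(
            lock,
            mode=UpdateMode.REPLACE,
            etag=lock.metadata["etag"],
            match_condition=MatchConditions.IfNotModified,
        )
        return True  # our conditional update won the race
    except HttpResponseError:
        return False  # another instance updated the row first

def handle_message(table: TableClient) -> None:
    if try_claim(table):
        delete_partial_rows()  # hypothetical: clean up a half-written attempt
        seed_tables()          # hypothetical: idempotent data creation
        lock = table.get_entity(partition_key="init", row_key="lock")
        lock["State"] = "committed"
        table.update_entity(lock, mode=UpdateMode.REPLACE)
    # whichever branch was taken, the caller now deletes the queue message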
I have a fairly simple domain model involving a list of Facility aggregate roots. Given that I'm using CQRS and an event-bus to handle events raised from the domain, how could you handle validation on sets? For example, say I have the following requirement:
Facilities must have a unique name.
Since I'm using an eventually consistent database on the query side, the data in it is not guaranteed to be accurate at the time the event processor processes the event.
For example, suppose a FacilityCreatedEvent is sitting in the query database's event-processing queue, waiting to be processed and written to the database. A new CreateFacilityCommand is sent to the domain to be processed. The domain services query the read database to see whether any other Facility is already registered with that name, but this returns false because the earlier FacilityCreatedEvent has not yet been processed and written to the store. The new CreateFacilityCommand therefore succeeds and raises another FacilityCreatedEvent, which blows up when the event processor tries to write it to the database and finds that another Facility already exists with that name.
The solution I went with was to add a System aggregate root that could maintain a list of the current Facility names. When creating a new Facility, I use the System aggregate (only one System as a global object / singleton) as a factory for it. If the given facility name already exists, then it will throw a validation error.
This keeps the validation constraints within the domain and does not rely on the eventually consistent query store.
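In outline, that factory looks something like the sketch below; all class and member names are illustrative, not taken from the original design.

# Hedged sketch: the System aggregate guards the set of facility names,
# so uniqueness is enforced inside the domain, not in the query store.
class DuplicateFacilityName(Exception):
    pass

class Facility:
    def __init__(self, name: str) -> None:
        self.name = name  # a real aggregate would also raise FacilityCreatedEvent

class System:
    """Singleton aggregate root acting as a factory for Facility."""
    def __init__(self) -> None:
        self._names: set[str] = set()

    def create_facility(self, name: str) -> Facility:
        if name in self._names:
            raise DuplicateFacilityName(name)  # validation error stays in the domain
        self._names.add(name)
        return Facility(name)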
Three approaches are outlined in Eventual Consistency and Set Validation:
If the problem is rare or not important, deal with it administratively, possibly by sending a notification to an admin.
Dispatch a DuplicateFacilityNameDetected event, which could kick off an automated resolution process.
Maintain a Service that knows about used Facility names, maybe by listening to domain events and maintaining a persistent list of names. Before creating any new Facility, check with this service first.
Also see this related question: Uniqueness validation when using CQRS and Event sourcing
In this case, you may implement a simple CRUD-style service that basically does an insert into a SQL table with a primary-key constraint.
The insert will only happen once. When a duplicate command carrying a value that should exist only once hits the aggregate, the aggregate calls the service, the service's insert fails with a primary-key violation and throws an error, and the whole process fails: no events are generated and nothing is reported to the query side, except perhaps a record of the failure in a table used for eventual-consistency checking, where the user can query the status of command processing. To check that, just poll the Command Status view model with the command GUID.
Obviously, when the command holds a value that does not exist in the primary-key table, the operation is a success.
The primary-key table should only be used through the service, but because you implemented event sourcing, you can replay the events to rebuild it.
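A minimal sketch of such a service, with sqlite3 standing in for whichever RDBMS you use; the table and function names are illustrative:

# Hedged sketch: the primary-key constraint is the uniqueness check.
import sqlite3

conn = sqlite3.connect("uniqueness.db")
conn.execute("CREATE TABLE IF NOT EXISTS facility_name (name TEXT PRIMARY KEY)")

def reserve_facility_name(name: str) -> bool:
    """True if the name was free; False if another aggregate already owns it."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute("INSERT INTO facility_name (name) VALUES (?)", (name,))
        return True
    except sqlite3.IntegrityError:
        return False  # primary-key violation: duplicate name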
Because the uniqueness check is done before the data is written, a better method is to build an event-tracking service that sends a notification when the process finishes or terminates.