Something is wrong with my Activity log - data table

To Whom It May Concern,
My "Activity" log is something wrong. Would you please tell me what happend and how to resolvve this problem.
Please confirm the attaced file.
The activity logs show "Failed:Create table", but in the text showed "has created".
What did it mean? Is this sucsessfully?
Confirming the data table, I thought data table is created sucsessfully.
Regards,
Yuka
BigQuery Activity log

Unfortunately, this is a known problem when Firebase exports data to BigQuery: it first tries to create a table (which may already exist) and records this "Failed:create table" entry when the table actually exists. You may want to file a bug with Firebase.
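If you want to confirm that the table really exists despite the log entry, you can query the dataset's INFORMATION_SCHEMA. A minimal sketch, assuming a dataset named mydataset and a daily export table named events_20240101 (both placeholders for your own names):

    SELECT table_name, creation_time
    FROM mydataset.INFORMATION_SCHEMA.TABLES
    WHERE table_name = 'events_20240101';

If the row comes back, the table was created, and the "Failed" entry is just the no-op create attempt.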

Related

Assign account to other user

I am trying to assign an account to another user using the Assign button (standard Microsoft), but I get this error:
Sql error: The operation attempted to insert a duplicate value for an attribute with a unique constraint. CRM ErrorCode: -2147012606 Sql ErrorCode: -2146232060 Sql Number: 2627
Can anyone help me?
Thanks
Sounds like you have a duplicate or orphaned record in your PrincipalObjectAccess table. Are you on-prem with access to SQL Server? If so, check for an existing relationship in that table between the Account GUID and the User GUID, and delete that row if you find one. If a row is orphaned, you may also need to check whether your Deletion Service cleanup job is running to completion. Best practice is to schedule that job during low-usage times.
https://us.hitachi-solutions.com/blog/unmasking-the-crm-principalobjectaccess-table/
Deletion Service information: https://darrenliu.wordpress.com/2014/04/03/crm-2013-maintenance-jobs/
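If you are on-prem, here is a minimal sketch of the check described above (the GUIDs are placeholders; substitute the account and user IDs from your organization database):

    -- Look for an existing POA row linking the account to the target user.
    SELECT *
    FROM PrincipalObjectAccess
    WHERE ObjectId = '00000000-0000-0000-0000-000000000000'      -- account GUID (placeholder)
      AND PrincipalId = '11111111-1111-1111-1111-111111111111';  -- user GUID (placeholder)

If that returns an orphaned row, deleting it (after a backup) typically clears the duplicate-insert error on assign.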
Does it happen if you try and assign the problem record to a different user?
Does the user you are assigning the record to have full permissions to that record and any records that are used on that form via quick views etc?
This error may also be caused by entity alternate keys, if in use.
https://learn.microsoft.com/en-us/dynamics365/customer-engagement/customize/define-alternate-keys-reference-records
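If alternate keys are in use, the offending duplicate is usually easy to spot. A hedged sketch for on-prem SQL, where new_accountnumber is a hypothetical stand-in for whatever column your alternate key is defined on:

    SELECT new_accountnumber, COUNT(*) AS duplicates
    FROM AccountBase
    GROUP BY new_accountnumber
    HAVING COUNT(*) > 1;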
Also check the plugin trace logs in case something useful shows up there; although you have said you don't believe any plugins or workflows are present, it could be an out-of-the-box one causing issues.
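A quick way to peek at recent trace entries on-prem, assuming plug-in trace logging is enabled (PluginTraceLogBase is the backing table; the column list here is a sketch):

    SELECT TOP (20) CreatedOn, TypeName, MessageName, ExceptionDetails
    FROM PluginTraceLogBase
    ORDER BY CreatedOn DESC;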

Hive managed table drop doesn't delete files on HDFS. Any solutions?

When deleting managed tables from Hive, their associated files on HDFS are not removed (on Azure Databricks). I am getting the following error:
[Simba]SparkJDBCDriver ERROR processing query/statement. Error Code: 0, SQL state: org.apache.spark.sql.AnalysisException: Can not create the managed table('`schema`.`XXXXX`'). The associated location('dbfs:/user/hive/warehouse/schema.db/XXXXX) already exists
This issue is occurring intermittently. Looking for a solution to this.
I've started hitting this too. It was fine for the last year, but now I think something is going on with the storage attachment. Perhaps enhancements going on in the background are causing issues (PaaS!). As a safeguard I'm manually deleting the directory path as well as dropping the table, until I can get a decent explanation of what's going on or get a support call answered.
Use
dbutils.fs.rm("dbfs:/user/hive/warehouse/schema.db/XXXXX", true)
Be careful with that though! Get the path wrong and it could be tragic!
Sometimes the metadata (the schema info of the Hive table) itself gets corrupted. Then whenever we try to delete/drop the table we get errors, because Spark checks for the existence of the table before deleting it.
We can avoid that by using the Hive client to drop the table, as it skips the existence check, as sketched below.
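A minimal sketch, assuming you can open a Hive CLI or beeline session against the same metastore (schema.XXXXX follows the placeholder name from the error above):

    -- Dropped via the Hive client rather than Spark, so the managed-location
    -- check that fails above is not performed.
    DROP TABLE IF EXISTS schema.XXXXX;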
Please refer to the Databricks documentation on this.

Deferring drop table after exchange partition

I have two tables:
ld_tbl - a partitioned table.
tgt_tabl - a non-partitioned table.
In my program I'm executing
alter table ld_tbl exchange partition prt with table tgt_table;
and after the exchange has finished, I drop ld_tbl.
The problem is that if someone fires a query against tgt_tabl at that moment, it throws an exception:
ORA-08103: object no longer exists
This happens even though I drop only ld_tbl and don't touch tgt_tabl. After several tests, I'm sure it's the drop that causes the exception. According to this information: Object no longer exists, the solution is to defer the drop.
My question is: how much time is needed between the exchange and the drop? How can I know that an operation like drop will not hurt the other table?
Thanks.
"how much time need to be between the drop and the exchange?"
The pertinent question is: why is anybody running queries on TGT_TABL at all? If I understand your situation correctly, that is a transient table used for loading data through partition exchange, so no business user ought to be querying it (they should wait until the data goes live in the partitioned table).
If the queries are coming from non-business users (DBAs, support staff), my suggestion would be to just continue as you do now, and send an email to those people explaining why they may occasionally get ORA-08103 errors.
If the queries are coming from business users then it's more difficult. There is no point in deferring the drop, because somebody could be running a query whenever you schedule it. You need to track down the users who are running these queries, discover why they are doing it and figure out whether there's some other way of satisfying the need.
But I don't think there's a technical fix you could apply. By using partition exchange you are already minimizing the window in which this error can occur.
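If you want a best-effort check before issuing the drop, you could look for sessions whose currently executing SQL references the table. This is a sketch only, not a guarantee, since a new query can start at any moment after the check:

    -- Sessions currently running SQL that mentions TGT_TABL (best effort).
    SELECT DISTINCT s.sid, s.username, q.sql_text
    FROM v$session s
    JOIN v$sql q ON q.sql_id = s.sql_id
    WHERE UPPER(q.sql_text) LIKE '%TGT_TABL%';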

Is there a quick way to refresh Oracle table structure in Crystal Report

I have a Crystal Report using multiple Oracle tables with complex links. I have to change the structure of one table (add a field, actually), but the new structure is not reflected in the report. I have tried refreshing and updating the location, but the newly added field cannot be seen. I know there is a not-so-clever way to "solve" the problem: delete the table and add it back. But by doing that, I would have to recreate the links, rearrange the report, recreate the calculated fields... basically rewrite the whole report. Any advice to help me quickly update the structure of my Oracle table would be highly appreciated.
Thanks,
What you actually need to do is Verify the Database (Database > Verify Database).
Well, I found that out... accidentally. I updated the database location and refreshed it, but the new schema was not picked up. I decided to run the report anyway... and at execution time, CR realized there was a change in the table structure and updated it! So I think the solution is to update the location and then execute the report; CR will update the structure automatically. There is no need to delete the table and add it back, since doing so would delete all the links and fields the table was referenced by.

View the number of pending requests to access a locked row or table?

I understand that, in Oracle, you can check which rows or tables are locked and who is locking them, but is there a way to see how many pending requests are queued waiting to access that locked row or table at any given time?
I know that anything requiring active verification of this is probably bad practice. I'm just trying to demonstrate something to someone, and this would greatly help me get my point across.
Does DBA_WAITERS show what you're looking for? You can join it to V$SESSION to see who is holding the lock and who is waiting for the resource, or to other views for more detail. I'm not sure if that's quite what you're after, though.
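A minimal sketch, assuming you have privileges to read the DBA and V$ views (one row per waiting session, grouped to count pending requests per lock holder):

    -- Count sessions queued behind each lock holder.
    SELECT w.holding_session,
           s.username AS holder,
           COUNT(*)   AS pending_requests
    FROM dba_waiters w
    JOIN v$session s ON s.sid = w.holding_session
    GROUP BY w.holding_session, s.username;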
