How can I find which rule/process is deleting records in ServiceNow?

I have an instance of ServiceNow where SLA tasks are disappearing from associated incidents, even after an SLA repair has been run. According to the deleted records table, the deleted records show as updated by 'system', which makes me think a rule or script is responsible. How do I determine the cause?

Related

CRM 2015 - Delete takes 35 to 40 seconds if I select multiple records from Project-Product, Quote-Product, Order-Product (not a bulk delete)

In CRM 2015, it takes 35 to 40 seconds to delete more than 5 records from the Project Product, Quote Product, or Order Product list. It's not a bulk delete.
Sometimes an unresponsive/wait screen appears partway through the delete process.
How can I fix this issue or reduce the time?
Due to internal processes including recalculation of the parent Order values, deleting child records like an Order Product can take a while.
When you delete a single Order Product, the system recalculates the parent Order's totals and related values. This happens for each record, so deleting multiple records naturally takes longer.
There may also be other processes happening - either custom or system processes. You can check if there are any custom ones, but the system processes are largely a black box.
I have seen situations where a client occasionally needed to create an invoice with over 10,000 lines. Since creating each line triggers a recalculation, normal automation options were timing out. I wound up creating a console app to add the lines to the monster invoices in batches.

Oracle track changes in database

Good morning,
We have an application that is used to create daily reports.
When a daily report is finished, it is locked.
But sometimes a daily report must be changed after locking.
In that case the report is unlocked.
After the changes are made, the report is locked again.
What we would like to do:
When the report is unlocked, we would like to take a snapshot.
When the report is locked again, we would like to take another snapshot of the records and compare it with the previous snapshot (taken at the moment of unlocking) to see what changes were made. We would like to see the before and after values of each field.
The daily report spans around 40 tables and several hundred fields, so a single change to a daily report can touch a few hundred fields across roughly 40 tables.
We only want to compare the state at the moment of unlocking with the state at the moment of locking again. (In other words, we are not interested in the intermediate changes made between unlocking and re-locking.)
What is the best/recommended way to do this?
Thanks in advance for the answers.
One way to do that is to use FDA (Flashback Data Archive).
The links below explain how you can do that:
https://oracle-base.com/articles/11g/flashback-and-logminer-enhancements-11gr1#flashback_data_archive
https://oracle-base.com/articles/12c/flashback-data-archive-fda-enhancements-12cr1
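For reference, here is a minimal sketch of what the FDA approach from those articles can look like; the archive, tablespace, and table names are illustrative, and you would enable the archive on each of the ~40 report tables and record the timestamps of the unlock and re-lock events:

-- Create an archive and enable it on one of the report tables.
CREATE FLASHBACK ARCHIVE report_fba TABLESPACE fba_ts RETENTION 1 MONTH;
ALTER TABLE daily_report_detail FLASHBACK ARCHIVE report_fba;

-- Later, diff the snapshot at unlock time against the snapshot at re-lock
-- time (run the MINUS in both directions to see before and after values).
SELECT * FROM daily_report_detail
  AS OF TIMESTAMP TO_TIMESTAMP('2024-01-15 08:00', 'YYYY-MM-DD HH24:MI')
MINUS
SELECT * FROM daily_report_detail
  AS OF TIMESTAMP TO_TIMESTAMP('2024-01-15 17:00', 'YYYY-MM-DD HH24:MI');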

Mixpanel delete user does not delete events

I'm trying to clean up my Mixpanel data. I had test users being tracked, so I removed them in order to better understand retention. However, the users are still showing up in the retention report's aggregated data.
For example, I only have 138 users in my explore tab but significantly more in the sum of people in the retention report.
Has the event data not been deleted? Or am I missing something? Should deleting the profile cascade delete the associated event data?

Deferring drop table after exchange partition

I have two tables:
ld_tbl - a partitioned table.
tgt_tabl - a non-partitioned table.
In my program I'm executing
alter table ld_tbl exchange partition prt with table tgt_tabl;
and after the exchange has finished, I drop ld_tbl.
The problem is that if someone has fired a query against tgt_tabl, it throws an exception:
ORA-08103: object no longer exists
This happens even though I drop only ld_tbl and don't touch tgt_tabl. After several tests, I'm sure that it's the drop which causes the exception. According to this information (Object no longer exists), the solution is to defer the drop.
My question is: how much time is needed between the exchange and the drop? How can I know that an operation like a drop will not hurt the other table?
Thanks.
"how much time need to be between the drop and the exchange?"
The pertinent question is, why is anybody running queries on the TGT_TABL. If I understand your situation correctly that is a transient table, used for loading data through Partition Exchange. So no business user ought to be querying it (they should wait until the data goes live in the partitioned table).
If the queries are coming from non-business users (DBAs, support staff) my suggestion would be to just continue as you do now, and send an email to those people explaining why they may occasionally get ORA-00813 errors.
If the queries are coming from business users then it's more difficult. There is no point in deferring the drop, because somebody could be running a query whenever you schedule it. You need to track down the users who are running these queries, discover why they are doing it and figure out whether there's some other way of satisfying the need.
But I don't thinks there's s technical fix you could apply. By using partition exchange you are already minimizing the window in which this error can occur.
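If you do still want to defer the drop despite those caveats, here is one sketch using DBMS_SCHEDULER; the 30-minute delay and the job name are arbitrary choices, and no interval is guaranteed safe against a sufficiently long-running query:

ALTER TABLE ld_tbl EXCHANGE PARTITION prt WITH TABLE tgt_tabl;

BEGIN
  -- Schedule the drop for later, so queries that started against the
  -- exchanged segment before now have time to finish.
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'drop_ld_tbl_job',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN EXECUTE IMMEDIATE ''DROP TABLE ld_tbl PURGE''; END;',
    start_date => SYSTIMESTAMP + INTERVAL '30' MINUTE,
    enabled    => TRUE);
END;
/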

Database Project Insists on "Rebuilding" Table on Deployment for Dropped Columns

So I have a VS2010 Database Project that I am deploying, with a few schema changes. I have one table in particular that VSDBCMD insists on "rebuilding", i.e. rename -> create -> copy -> drop.
The only changes for this table are dropping some columns, which could be handled by, I dunno, simply dropping the columns. Normally I wouldn't mind, except this particular table is called "Attachments" and weighs in at 15 gigs or so. The rebuild takes a long time, locks up the database, fails locally (I don't have 15+ gigs free), and times out remotely in our testing environment.
Can anyone direct me to the rules VSDBCMD follows for changing the schema when it deploys?
Or perhaps you have experienced similar issues and have a suggestion?
Thanks!
VSDBCMD just 'likes' rebuilding tables too often, and unfortunately I don't have the 'magic VSDBCMD manual' for when it chooses to rebuild a table, but I don't trust the output of VSDBCMD on a production database without manual checking first anyway.
There's a setting, 'IgnoreColumnOrder', in the 'dbname.sqldeployment' file that might help prevent rebuilding the table (maybe the rebuild is triggered because the column order changes once columns are dropped).
In your case I would just run a manually created script on your DB.
Heck, writing 'alter table Attachments drop column uselessData' would've probably cost you 10% of the time you put into asking this question in the first place :)
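Along those lines, a minimal hand-written script; 'uselessData' is the placeholder column from above, and the constraint name is likewise hypothetical:

ALTER TABLE dbo.Attachments DROP COLUMN uselessData;

-- If the column is bound to a constraint (e.g. a default), SQL Server
-- requires dropping the constraint first:
-- ALTER TABLE dbo.Attachments DROP CONSTRAINT DF_Attachments_uselessData;
-- ALTER TABLE dbo.Attachments DROP COLUMN uselessData;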
