What are the changes, other than config changes, that will not be captured in update sets? - ServiceNow

I see that updates to scheduled script executions are not captured in the update set.
What are the criteria for a change to be captured?
Can we manually configure the list of items to be (and not to be) captured in update sets?

Tables with the attribute update_synch set to true are captured in update sets. This attribute is set on the collection entry in sys_dictionary.
Scheduled script execution definitions (sysauto_script) should indeed be captured in update sets, but the sys_trigger record that actually causes the scheduled script to run on its schedule is NOT update_synch'd, and that's by design. The sys_trigger table is modified heavily by the scheduler service itself (e.g. the next action is reset on every execution, and run-once jobs are created and destroyed for things like workflow timers).
Technically, you could add the update_synch attribute to a sys_dictionary collection entry so that its records are captured by update sets, but that is highly ill-advised unless you really know what you're doing.
You can manually add non-update_synch'd records to your update set ad hoc by way of a script described on the ServiceNow Guru website.

Related

Automatically reloading configuration

I am trying to implement automatic reloading of configuration whenever a config parameter is changed after the modules are up.
My approach: I have set up triggers on the table where the configurations are maintained, and I maintain a second table to keep track of the changes. Whenever an insert/update/delete is performed on the config table, the trigger updates the counter and the current time in the second table for that particular row.
For example, my second table (the tracker table) has this schema:
tablename | counter | time
configtab | 2       | 0001-01-01 00:00:00 (dummy values)
So for every update to configtab, the counter for that row in the tracker table is incremented.
In my Go code, I have written two methods:
Method 1: returns the counter and time values from the tracker table.
Method 2: compares the passed-in counter and time with the values present in the DB; if they match it returns false (no changes), otherwise it returns true (the configs were modified).
The configuration is loaded into a map (key: string, value: slice of strings) and accessed from various packages. For example, I initialize some log configuration by fetching values from this map.
So when the configuration changes I update the map I am maintaining, but I don't see how to signal the packages that use the map so they pick up the new configuration.
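For illustration, here is a minimal Go sketch of the two methods described above. It is not the asker's actual code: the tracker table name, the column types, the already-open *sql.DB, and the ?-style SQL placeholders are all assumptions (adjust for your schema and driver).

// A sketch of the polling check described in the question, assuming a
// tracker table named "tracker" with columns (tablename, counter, time).
package config

import (
	"database/sql"
	"time"
)

// TrackerState mirrors one row of the hypothetical tracker table.
type TrackerState struct {
	Counter int64
	Time    time.Time
}

// LoadState is "Method 1": fetch the current counter and time for a table.
func LoadState(db *sql.DB, table string) (TrackerState, error) {
	var s TrackerState
	err := db.QueryRow(
		"SELECT counter, time FROM tracker WHERE tablename = ?", table,
	).Scan(&s.Counter, &s.Time)
	return s, err
}

// Changed is "Method 2": compare a previously seen state with the database.
// It returns true when the configuration table has been modified since.
func Changed(db *sql.DB, table string, last TrackerState) (bool, error) {
	cur, err := LoadState(db, table)
	if err != nil {
		return false, err
	}
	return cur.Counter != last.Counter || !cur.Time.Equal(last.Time), nil
}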
This all seems rather complicated. Can you keep the config in memory? If so, just do this:
Config is one map in memory, guarded by a mutex
Call sites always ask for config values on every use
Because it is in memory it is fast; because there is one copy it is always up to date; and because callers always fetch fresh values before use, you don't need to tell consumers when it changes. The only good reason not to keep it in memory is if it has to be shared across processes.
Be aware, though, that your config is essentially a set of global variables, so you should limit it to things which users need to change after build time; keep values which only programmers change as constants in their packages.
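A minimal Go sketch of that pattern (the names here are illustrative, not taken from the original code): one package-level map guarded by a sync.RWMutex, a Get that call sites use on every read, and a Reload that swaps in fresh values, e.g. after the tracker table reports a change.

// One in-memory config map guarded by a mutex, read on every use.
package config

import "sync"

var (
	mu     sync.RWMutex
	values = map[string][]string{} // key -> slice of strings, as in the question
)

// Get returns the current value for a key. Call sites use this on every
// read, so they always see the latest configuration.
func Get(key string) ([]string, bool) {
	mu.RLock()
	defer mu.RUnlock()
	v, ok := values[key]
	return v, ok
}

// Reload replaces the whole map under the write lock. Consumers don't
// need to be notified; their next Get returns the fresh values.
func Reload(fresh map[string][]string) {
	mu.Lock()
	values = fresh
	mu.Unlock()
}

Because Reload swaps the map wholesale while holding the write lock, readers never observe a partially updated configuration.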

Spring Boot Task Scheduling - rerun a task conditionally

I have an application where users can schedule reports to run at a certain time. The data for the reports is populated by an overnight batch process which might occasionally get delayed due to upstream issues. When that happens, the reports need to be retried every few minutes until we have data. I have a DB table that records the status of the batch; a row will appear in that table when the data becomes available for this report.
I am thinking along these lines: when the data is not available, the task will cancel itself, but before it does that it will dynamically schedule another copy of itself to execute every 5 minutes. Once the data becomes available, the repeating task will run once and then cancel all future runs.
Is this possible? Am I on the right track?
Any help is much appreciated.
Thanks.

How to order ETL tasks in Sql Server Data Tools (Integration Services)?

I'm a newbie to ETL processing. I am trying to populate a data mart through ETL and have hit a bump. I have four ETL tasks (each filling a particular table in the mart), and the problem is that I need to perform them in a particular order to avoid constraint violations such as foreign key violations. How can I achieve this? Any help is really appreciated.
This is a snap of my current ETL:
Create a separate Data Flow Task for each table you're populating in the Control Flow, and then simply connect them together in the order you need them to run in. You should be able to just copy/paste the components from your current Data Flow to the new ones you create.
The connections between Tasks in the Control Flow are called Precedence Constraints, and if you double-click on one you'll see that they give you a number of options for controlling the flow of your ETL package. For now, though, you'll probably be fine leaving them on the defaults - this means that each Data Flow Task will wait for the previous one to finish successfully. If one fails, the next one won't start and the package will fail.
If you want some tables to load in parallel, but then have some later tables wait for all of those to be finished, I would suggest adding a Sequence Container and putting the ones that need to load in parallel into it. Then connect from the Sequence Container to your next Data Flow Task(s) - or even from one Sequence Container to another. For instance, you might want one Sequence Container holding all of your Dimension loading processes, followed by another Sequence Container holding all of your Fact loading processes.
A common pattern goes a step further than using separate Data Flow Tasks. If you create a separate package for every table you're populating, you can then create a parent package, and use the Execute Package Task to call each of the child packages in the correct order. This is fantastic for reusability, and makes it easy for you to manually populate a single table when needed. It's also really nice when you're testing, as you don't need to keep disabling some Tasks or re-running the entire load when you want to test a single table. I'd suggest adopting this pattern early on so you don't have a lot of re-work to do later.

Parse Cloud Code touch all records in database

I'm wondering whether it is possible to touch/update all records in some class so they trigger their beforeSave and afterSave hooks. I have a lot of records in the database and it takes too long to update them all manually via the Parse control panel.
You could write a cloud job which iterates through everything, but it would need to make an actual change to each object or it won't save (because the objects won't be dirty). You're also limited on runtime, so you should sort by updated date and run the job repeatedly until nothing is left to do.

Count inserts, deletes and updates in a PowerCenter session

Is there a way in PowerCenter 9.1 to get the number of inserts, deletes and updates after a session runs? I can see the data in the log, but I would like to see it in a more ordered fashion, in a table.
The only way I know requires building the mapping appropriately. You need to have 3 separate instances of the target and use a router to redirect the rows to either TARGET_insert or TARGET_update or TARGET_delete. Workflow Monitor will then show a separate row for the inserted, updated and deleted rows.
There are a few ways:
1. You can use $TgtSuccessRows / $TgtFailedRows and assign them to workflow variables.
2. An Expression transformation can be used with a variable port to keep track of inserts/updates/deletes.
3. You can even query OPB_SESSLOG in a second stream to get the row count inside the same session.
Not sure if PowerCenter 9.1 offers a solution to this problem.
You can design your mapping to populate an audit table to track the number of inserts/updates/deletes.
You can download a sample implementation from the Informatica Marketplace block titled "PC Mapping : Custom Audit Table":
https://community.informatica.com/solutions/mapping_custom_audit_table
There are multiple ways. For example, you can create an Assignment task and attach it just after your session; once the session completes its run, the Assignment task passes the session statistics (e.g. $session.status, $session.rowcount) into workflow variables defined at the workflow level. Then create a worklet that includes a mapping, pass the session stats captured at the workflow level down to the worklet and from the worklet into the mapping. Once the stats are available at the mapping level, read them (using a SQL or Expression transformation) and write them to the audit table. Attach the Assignment task plus worklet combination after each session, and it will capture each session's stats after that session completes its run.
