How to run only failed sessions in a workflow - informatica-powercenter

In a workflow there are sessions connected in parallel and in sequence. Suppose some of the sessions, both parallel and sequential, have failed. How do I restart the workflow running only the failed sessions? How can I design this in Informatica?

Turn on 'Suspend on Error' for the workflow.
Set the recovery strategy to 'Restart task' for each session in the workflow.
Now, if any session fails, the workflow is suspended until you fix the problem and hit Recover on the workflow in the Monitor. When you do so, only the failed sessions are restarted.
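If you prefer the command line to the Monitor, recovery can also be triggered with pmcmd. A minimal sketch; the service, domain, folder, and workflow names are all hypothetical:

    pmcmd recoverworkflow -sv IS_name -d Domain_name -u Administrator -p password \
        -f MyFolder wf_MyWorkflow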

A large publishing client asked us to implement something similar to what you are asking. We created a database table to keep track of successful sessions within a workflow. Each session has a mapping at the end that adds an entry to the database saying "I passed" or "I failed". When we run in recovery mode, we query the table at the beginning of each session to find out whether that session needs to run or not.
We also provided a web interface to this table so business users can manually choose which sessions to run or skip based on their needs.
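A minimal sketch of what such an audit table and the per-session check could look like; every name here is hypothetical, since the original schema isn't shown:

    -- Hypothetical audit table: one row per session run.
    CREATE TABLE wf_session_audit (
        workflow_name VARCHAR(100),
        session_name  VARCHAR(100),
        run_id        INTEGER,
        status        CHAR(1),      -- 'P' = passed, 'F' = failed
        updated_at    TIMESTAMP
    );

    -- Queried at the beginning of each session in recovery mode:
    -- if the last run already passed, the session is skipped.
    SELECT status
      FROM wf_session_audit
     WHERE workflow_name = ? AND session_name = ? AND run_id = ?;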

The Recovery option will work only if you have workflow recovery turned on in the repository. If you don't, you can check the "fail workflow if task fails" option at the individual session level and create conditions on the links that connect the sessions to each other. The disadvantage of this method is that your workflow will appear failed and won't execute the next sessions until the failed ones are fixed.
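A link condition of roughly this shape (the session name is hypothetical) only lets the next task start once the previous session has succeeded:

    $s_LoadCustomers.Status = SUCCEEDED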

Related

How to get all session details from PMCMD in informatica when the workflow is a concurrent workflow

I have configured the workflow as a concurrent workflow with the same instance name and tried to retrieve the details using getsessionstatistics/gettaskdetails. Since it is a concurrent workflow, I am not able to get the details of all sessions using PMCMD in a shell script. I saw a command in the Informatica documentation called "getTaskDetailsEx", but if I run it in PuTTY, it shows ERROR: Unknown command [gettaskdetailex]. I have tried it in all lower case as well.
Can someone please suggest how to get details using "getTaskDetailsEx", or any other way to get all session details of a concurrent workflow?
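For reference, the pmcmd commands that do exist accept a -rin (run instance name) switch for addressing one instance of a concurrent workflow. A sketch with hypothetical service, folder, instance, and task names; I can't confirm this returns details for every instance at once:

    pmcmd gettaskdetails -sv IS_name -d Domain_name -u user -p password \
        -f MyFolder -rin InstanceName wf_MyWorkflow.s_MySession

    pmcmd getsessionstatistics -sv IS_name -d Domain_name -u user -p password \
        -f MyFolder -rin InstanceName wf_MyWorkflow.s_MySession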

Terraform and OCI : "The existing Db System with ID <OCID> has a conflicting state of UPDATING" when creating multiple databases

I am trying to create 30 databases (oci_database_database resource) under 5 existing db_homes. All of these resources are under a single DB System.
When applying my code, a first database is successfully created; then, when Terraform attempts to create the second one, I get the following error message: "Error: Service error:IncorrectState. The existing Db System with ID has a conflicting state of UPDATING", which causes the execution to stop.
If I re-apply my code, the second database is created, and then I get the same error when Terraform attempts to create the third one.
I am assuming I get this message because Terraform starts creating the next database as soon as the previous one is created, while the DB System status is not up to date yet (still 'UPDATING' instead of 'AVAILABLE').
A good way for the OCI provider to avoid this issue would be to consider a database creation complete only when the creation itself is finished AND the associated db_home and DB System are back to 'AVAILABLE'.
Any suggestion on how to address the issue I am encountering?
Feel free to ask if you need any additional information.
Thank you.
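For context, the layout described above might look roughly like the following; every name and value is hypothetical, since the original configuration isn't shown:

    # Hypothetical sketch: 30 databases spread across 5 existing DB Homes,
    # all under one DB System.
    variable "db_home_ocids" {
      type = list(string) # OCIDs of the 5 existing db_homes
    }

    variable "admin_password" {
      type      = string
      sensitive = true
    }

    resource "oci_database_database" "extra" {
      count      = 30
      db_home_id = var.db_home_ocids[count.index % 5]
      source     = "NONE"

      database {
        db_name        = "DB${count.index}"
        admin_password = var.admin_password
      }
    }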
As mentioned above, it looks like you have opened a ticket regarding this via GitHub. What you are experiencing should not happen, as Terraform should retry after seeing the error. As per your GitHub post, the person helping you needs your log with timestamps so they can troubleshoot further. At this stage I would recommend following up there and sharing the requested info.

Step Failure not reported by Composed Task Runner or reflected in Spring Cloud Dataflow Tables

Currently we are using Spring Cloud Dataflow to run a sequence of apps we have created, based on a definition. Each of the apps we have made is a Spring Batch job with individual steps. The current issue we are having is that when one of the steps inside an app's batch job fails, it is reflected as expected in the step_execution, job_execution, and task_execution tables in the SCDF database. However, we are not able to rerun any SCDF job that has failed in an app from the top SCDF level, because the row in the step_execution table for SCDF's step for the overall app never propagates to FAILED in the status column; it is always COMPLETED no matter what happens.
Below I have included a picture which illustrates this. test-simple8-test-app is the app we have created, while check-step, sleep-step, and should-error-step are steps inside the job for that app. You can see that should-error-step has FAILED for both ExitCode and Status, while the entry for the app itself has COMPLETED for Status and FAILED for ExitCode.
(screenshot: Relevant Table)
We have tried altering what we report in the task_execution table, since we saw CTR is looking for certain fields there, but it still seems it does not affect the Status column in step_execution. If we manually change that value in the db to FAILED, the rerun proceeds as we would expect, as is normal for Spring Batch: it resumes the job from that app and re-executes it.
Is there a good way to resolve this problem, or is it a problem with the way we are approaching it?
Edit: Added Flow Diagram for better clarity
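One setting worth checking, though it isn't mentioned in the thread: if the apps use Spring Cloud Task's batch integration, a property exists that makes the task's exit code non-zero when the embedded batch job fails, which is what the calling level keys off. A hedged suggestion, assuming that integration is in place:

    # application.properties of each batch app (assumes the
    # spring-cloud-task batch integration is on the classpath)
    spring.cloud.task.batch.fail-on-job-failure=true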

Realtime Workflow in Dynamics 365 Not Triggering when Record is Deleted

I'm creating a Realtime Workflow in Dynamics 365 that is set to trigger on both "Before Record Status Changes" and "Before Record Is Deleted". I can confirm that it is firing and working well for the Record Status Change case, but for some reason it's not firing when I delete the record in question.
Would anyone have any ideas why this could be happening? I've even looked at the Process Session history and can see that only Record Status Change instances have fired. None of the Delete instances have a log entry.
I should also add that the workflow is extremely simple and doesn't do anything different for a delete vs a status change, so any record that works properly for the Status Change should have the same result for the Delete.
Any help would be greatly appreciated.
This could be due to missing security privileges for the user running the real-time workflow. The privileges below are quoted from the Microsoft CRM/Dynamics 365 documentation, under "Required security privileges for real-time workflows":
"A security privilege named Activate Real-time Processes
(prvActivateSynchronousWorkflow) is required to activate real-time
workflows so that they can be executed. The Execute Workflow Job
(prvWorkflowExecution) privilege is required to start the workflow.
Note that when opening a security role (Settings - Security - Security Roles, these privileges will be listed as "Activate Real-time processes" and "Execute Workflow job" when looking on the Customization tab of the security role. "
You can check the scope of the workflow. There is no platform bug: I just created a real-time workflow on account delete and status change, both set to "Before", and it worked fine.
Do any of the steps skip some logic?
Try creating an Expense record on Delete; if the record gets deleted, the Expense entry will be created (just an example of what I tried).
Keep "Workflow Log Retention" checked in order to track errors.

Open a JDBC connection in a specific AS400 subsystem

I have a web service that calls some stored procedures on an AS400 via JTOpen.
What I would like is for the connections used to call the stored procedures to be opened in a specific subsystem with a specific user, instead of qusrwrk/quser as now (the default).
I think I should be able to clone the qusrwrk subsystem to make it start with a specific user, but what I cannot figure out is the mechanism to open the connection in the specific subsystem.
I guess there should be a property at the connection level to say subsystem=MySubsystem.
But unfortunately I haven't found that property.
Any hint would be appreciated.
Flavio
Let the system take care of the subsystem the database server job is started in.
You should just focus on the application (which is what IBM i excels in).
If need be, you can tweak subsystem parameters for QUSRWRK to improve performance by allocating memory, etc.
The system uses a pool of prestarted jobs as described in the FAQ: When I do WRKACTJOB, why is the host server job running under QUSER instead of the profile specified on the AS400 object?
To improve performance, the host server jobs are prestarted jobs running under QUSER. When the Toolbox connects to a host server job in order to perform an API call, run a command, etc., a request is sent from the Toolbox to an available prestarted job. This request includes the user profile specified on the AS400 object that represents the connection. The host server job receives the request and swaps to the specified user profile before it runs the request. The host server itself originally runs under the QUSER profile, so output from the WRKACTJOB command will show the job as being owned by QUSER. However, the job is in fact running under the profile specified on the request. To determine what profile is being used for any given host server job, you can do one of three things:
1. Display the job log for that job and find the message indicating which user profile is used as a result of the swap.
2. Work with the job and display job status attributes to view the current user profile.
3. Use Navigator for i to view all of the server jobs, which will list the current user of each job. You can also use Navigator for i to look at the server jobs being used by a particular user.
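To make the profile-swap mechanism above concrete, here is a minimal JTOpen (jt400) sketch; the host name and credentials are hypothetical. Note there is no documented subsystem=... JDBC property: the prestarted server job simply swaps to the profile supplied on the connection.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class As400ConnectionDemo {
        public static void main(String[] args) throws Exception {
            // Register the JTOpen JDBC driver (older jt400 versions need this).
            Class.forName("com.ibm.as400.access.AS400JDBCDriver");

            Properties props = new Properties();
            props.put("user", "MYUSER");     // hypothetical profile the server job swaps to
            props.put("password", "secret"); // hypothetical password
            props.put("prompt", "false");    // never pop up a GUI sign-on prompt

            // Hypothetical host: the server job starts under QUSER in QUSRWRK,
            // then swaps to MYUSER before running any request.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:as400://myibmi.example.com", props)) {
                System.out.println("Connected as "
                        + conn.getMetaData().getUserName());
            }
        }
    }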
