Kogito returns "process instance not found" after restarting the service - Quarkus

I need some advice and an explanation for my case. Here is my Kogito setup:
Kogito service --> Data Index (PostgreSQL) --> Kogito Management Console --> Kogito Task Console.
I created a simple BPMN process; it is just a User Task.
Test scenario:
The Kogito service, Management Console, and Task Console are running. I submit the workflow and complete all of its phases in the Task Console.
The Kogito service, Management Console, and Task Console are running. I submit the workflow and the task is successfully waiting in the Task Console. I then stop the Kogito service and start it again. When I try to post the task from the Task Console, it returns the error "process instance with id 2493dndnxxx not found".
I don't understand why. I would really appreciate it if someone could explain this case: is it normal or not?
Thank you.
I expect someone can explain whether this is a normal situation or not.
In my understanding, the task for that process instance ID should still be able to be submitted even though I stopped the Kogito service, because we have the Data Index with PostgreSQL.

A Kogito service is ephemeral by default, which means any process started will be lost if you restart the service. To maintain the state, you must add one of the persistence add-ons to your Kogito runtime project. See the docs here for more information about the supported persistence types https://docs.kogito.kie.org/latest/html_single/#con-persistence_kogito-developing-process-services.
In this other section, there are also some more details about how that can be combined with other services like the Data Index, which also supports different persistence types: https://docs.kogito.kie.org/latest/html_single/#con-data-index-service_kogito-configuring
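As a rough sketch of what enabling persistence could look like in a Quarkus-based Kogito project (the add-on artifact ID and the Kogito property names below are assumptions that vary by Kogito version, so verify them against the linked docs), you would add a JDBC/PostgreSQL persistence add-on and point the Quarkus datasource at your database:

<!-- pom.xml: assumed add-on artifact; the exact artifactId depends on your Kogito version -->
<dependency>
  <groupId>org.kie.kogito</groupId>
  <artifactId>kogito-addons-quarkus-persistence-jdbc</artifactId>
</dependency>

# application.properties: kogito.persistence.type is an assumed Kogito key; the quarkus.datasource.* keys are standard Quarkus
kogito.persistence.type=jdbc
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=kogito
quarkus.datasource.password=kogito
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/kogito

With something like this in place, process instance state is stored in PostgreSQL and survives a restart of the Kogito service, which the Data Index alone does not give you (the Data Index only indexes events for querying and the consoles).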

Related

SingleInstance() not working only on cluster

I am building a solution that implements a RESTful service for interacting with metadata related to federated identity.
I have a class that is registered with Autofac like this:
builder.RegisterType<ExternalIdpStore>()
.As<IExternalIdpStore>()
.As<IStartable>()
.SingleInstance();
I have a service class (FedApiExtIdpSvc) that implements a service that is a dependency of an ASP.NET controller class. That service class has this IExternalIdpStore as a dependency. When I build and run my application from Visual Studio (in Debug mode), I get one instance of ExternalIdpStore injected; its constructor executes only once. When I initiate a controller action that ends up calling a particular method of my ExternalIdpStore class, it works just fine.
When my application is built via Azure DevOps (in Release mode) and deployed to a Kubernetes cluster running under Linux, I initially see one call to the ExternalIdpStore class's constructor right at application startup. When I initiate the same controller action as above, I see another call to the ExternalIdpStore constructor, and when the same method of the class is called, it fails because the data store hasn't been initialized (it's initialized from the class's Start method, which implements IStartable).
I have added a field to the class that gets initialized in the constructor to a GUID so I can confirm that I have two different instances when on cluster. I log this value in the constructor, in the Startup code, and in the method eventually called when the controller action is initiated. Logging is confirming that when I run from Visual Studio under Windows, there is just one instance, and the same GUID is logged in all three places. When it runs on cluster under Linux, logging confirms that the first two log entries reference the same GUID, but the log entry from the method called when the controller action is initiated shows a different GUID, and that a key object reference needed to access the data store is null.
One of my colleagues thought that I might have more than one registration. So I removed the explicit registration I showed above. The dependency failed to resolve when tested.
I am at a loss as to what to try next, or how I might add some additional logging to diagnose what is going on.
So here's what was going on:
The reason for getting two sets of log entries was that we have two Kubernetes clusters sending log entries to Splunk. This service was deployed to both. The sets of log entries were coming from pods in different clusters.
My code was creating a Cosmos DB account client and was not setting the connection mode, so it was defaulting to Direct.
The log entries that showed successful execution were for the cluster running in Azure - in Azure Kubernetes Service (AKS). Accessing the Cosmos DB account from AKS in direct connection mode was succeeding.
The log entries that were failing were running in our on-prem Kubernetes cluster. Attempting to connect to the Cosmos DB account was failing because it's on our corporate network which has security restrictions that were preventing direct connection mode from working.
The exception thrown when attempting to connect from our on-prem cluster was essentially "lost" because it was from a process running on a background thread.
Modifying the logic to add a try-catch around the attempt to connect, and passing the exception back to the caller, allowed logging the exception related to direct connection mode failing.
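As a rough sketch of the kind of change described above (the variable, database, and container names are illustrative, not the actual code), the client can be created with an explicit connection mode and any failure surfaced to the caller:

// Sketch: create the Cosmos client with an explicit connection mode.
// Gateway mode goes over HTTPS/443, which is more likely to get through a
// restrictive corporate network than Direct mode's TCP ports.
try
{
    var client = new CosmosClient(endpoint, key, new CosmosClientOptions
    {
        ConnectionMode = ConnectionMode.Gateway
    });
    _container = client.GetContainer("idp-metadata-db", "external-idps"); // hypothetical names
}
catch (Exception ex)
{
    // Pass the failure back to the caller instead of losing it on a background thread.
    _logger.LogError(ex, "Failed to initialize the Cosmos DB client");
    throw;
}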
Biggest lesson learned: When something "strange" or "odd" or "mysterious" or "unusual" is happening, start looking at your code from the perspective of where it could be throwing an exception that isn't caught - especially if you have background processes!

Create multiple MarkLogic scheduled tasks for the same module through ml-gradle

I am trying to create multiple instances of an application in the same MarkLogic environment. I am able to create all the configurations (users, roles, databases, forests, app servers...), but I am not able to schedule individual tasks for separate databases with the same module path.
When I try to run ml-gradle mlDeployApps, it fails at task creation.
My whole application configuration depends on a property file; for any APP-NAME, a separate instance needs to be created.
I tried deploying through ml-gradle.
mlDeployTasks fails because a task already exists for that module path. When I try to run it a second time for the new instance, it fails because it does not recognize the task database.
JSON:
{
  "task-enabled": true,
  "task-path": "/ext/schedules/monitor.xqy",
  "task-root": "/",
  "task-type": "daily",
  "task-period": 1,
  "task-start-time": "10:00:00",
  "task-database": "%%DATABASE%%",
  "task-modules": "%%MODULES_DATABASE%%",
  "task-user": "admin",
  "task-priority": "normal"
}
ERROR:
Logging HTTP response body to assist with debugging: {"errorResponse":{"statusCode":"500", "status":"Internal Server Error", "messageCode":"MANAGE-INVALID", "message":"MANAGE-INVALID (err:FOER0000): task-database"}}
Error occurred while sending PUT request to /manage/v2/tasks/5389046897270663947/properties?group-id=Default; logging request body to assist with debugging: {
Expectation:
I want to deploy and undeploy the whole application, including scheduled tasks, based on APPLICATION-NAME as a separate instance.
Actual:
With mlDeployTasks, each task is identified by its module path, so it resolves to the old existing database and fails to create a new task.
Please suggest the right way to achieve this.
MarkLogic's Management API is seeing your request as an attempt to change the task-database, but it only allows one property for a scheduled task to change (task-enabled). I think what you'll need to do here is have different task-path values for your different databases. That's not ideal, but if the implementation logic is all in a library that's imported by the task, the different modules themselves will be very lightweight.
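For example (a sketch with hypothetical file and database names), each instance would get its own small task module and its own task JSON, differing only in the task-path and the target databases:

tasks/monitor-app1.json
{
  "task-enabled": true,
  "task-path": "/ext/schedules/monitor-app1.xqy",
  "task-root": "/",
  "task-type": "daily",
  "task-period": 1,
  "task-start-time": "10:00:00",
  "task-database": "app1-content",
  "task-modules": "app1-modules",
  "task-user": "admin",
  "task-priority": "normal"
}

A second file, tasks/monitor-app2.json, would be identical except for its task-path (/ext/schedules/monitor-app2.xqy) and the app2 database names, and each monitor-appN.xqy would simply import and invoke the shared library that holds the real logic.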
Try ml-gradle 3.10.0 - support for this now exists - see the release notes for ml-app-deployer 3.10.0 (which provides most of the functionality in ml-gradle) - https://github.com/marklogic-community/ml-app-deployer/releases/tag/3.10.0

Is it possible to trigger a script execution when stopping a Windows service from services.msc?

I want to know if it is possible to configure a service to call a batch/powershell script when I stop it from services.msc.
While in Linux init.d services are fully programmable, and even systemd services can have additional procedures, I've yet to find a way to accomplish this on Windows.
Thanks in advance
You can configure services to run a program on failure, but if you are stopping the service via services.msc then that likely wouldn't count as a failure.
The only other option I can think of would be to set up a PowerShell script running as a scheduled task that either periodically checks the services running status, or (for a more foolproof option) looks at the event log for events indicating that the service has been stopped (since the last time the script checked) and then performs whatever actions you require.
Per the comment from montonero, you wouldn't need to run the scheduled task periodically as it could be configured to run when the event itself occurs. This is described here: https://blogs.technet.microsoft.com/wincat/2011/08/25/trigger-a-powershell-script-from-a-windows-event/
Use the Event Viewer "Attach Task to This Event…" feature to create the task.
Launch "Event Viewer" and find the event. Once found, right-click on the event and select "Attach Task to This Event...".
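If you prefer to script this rather than click through Event Viewer, the same kind of event-triggered task can be created with schtasks (a sketch: the task name and script path are placeholders; event 7036 from the Service Control Manager is the "service entered the stopped state" event, and you can tighten the XPath query further to match only your service's event text):

schtasks /Create /TN "OnMyServiceStop" /SC ONEVENT /EC System /MO "*[System[Provider[@Name='Service Control Manager'] and EventID=7036]]" /TR "powershell.exe -NoProfile -File C:\scripts\on-service-stop.ps1"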

Logging for Talend job running within spring-boot

We have Talend jobs triggered within a Spring Boot application. Is there any way to route the output of the Talend jobs to the application log files?
One workaround we found is to write logs directly to an external file (filePath passed as a context param), but we wanted to find out if there is a better way to configure this seamlessly.
Not sure if I understood the question correctly, but I guess your concern might be what has happened to the triggered jobs.
Logging
With respect to logging for Talend, you could configure it using Log4j:
https://help.talend.com/reader/5DC~TBhDsBie5JTXyVLW4g/QSGCZJKXo~uhKvZDq1DxUg
Monitoring
Regarding the status of the executed job, you could retrieve the execution details using a REST call (Talend MetaServlet API):
getTaskExecutionStatus
https://help.talend.com/reader/oYf9gKhmYrkWCiSua4qLeg/SLiAyHyDTjuznLR_F~MiQQ
By modifying the existing Talend job, you could also design something like a feedback loop, i.e. trigger a REST call back to your application with the execution details from the Talend job.

PCF Scheduling jobs

I have been trying to schedule a Spring Cloud Task via the PCF Scheduler, but I can't create a job from the app/task (following this documentation - http://docs.pivotal.io/pcf-scheduler/1-1/using-jobs.html).
$ cf apps
name         requested state   instances   memory   disk   urls
cloud-task   stopped           0/1         750M     1G
$ cf services
name           service             plan       bound apps   last operation
my-scheduler   scheduler-for-pcf   standard   cloud-task   create succeeded
$ cf create-job cloud-task my-task-job ".java-buildpack/open_jdk_jre/bin/java org.springframework.boot.loader.JarLauncher"
Creating job my-task-job for cloud-task with command .java-buildpack/open_jdk_jre/bin/java org.springframework.boot.loader.JarLauncher in org global-sales-marketing-customer-experience / space 141349-dev as zzh1bb
FAILED
The requested resource was not found.
Not Found
You must create an instance of the scheduler service in this space to use the scheduler service.
Not sure why the job creation command is not able to find the instance of the scheduler service - am I missing something here?
Also, I'm wondering if there is anything in spring-cloud-data-flow that can schedule tasks.
From the output you should be able to create a job in that org/space.
Does the user (zzh1bb?) have SpaceDeveloper privileges? SpaceAdmin should also be sufficient.
Does a cf task execute successfully using:
cf run-task cloud-task ".java-buildpack/open_jdk_jre/bin/java org.springframework.boot.loader.JarLauncher"
And seeing results with:
cf tasks cloud-task
Another diagnostic step might be to check the output of the API calls described here:
http://docs.pivotal.io/pcf-scheduler/1-1/api/#create-job
What version of PCF are you using, and what version of the Scheduler for PCF are you using? There were significant changes in the Cloud Controller API between 1.10, 1.11 and 1.12 that prevent the scheduler service from working across all of those versions.
As far as scheduling SCDF goes, the Scheduler for PCF service can be used in conjunction with SCDF to allow you to call the task execution endpoint from a Scheduler for PCF call (https://docs.pivotal.io/pcf-scheduler/1-1/using-calls.html).
Call SCDF using the execution endpoint
http://...scdf server.../tasks/executions?name=taskA
doc'ed here:
https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_launching_a_task_2
This is very useful and convenient especially when creating the SCDF service and the Scheduler for PCF service in the same space.
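For example (a sketch: the app name, call name, SCDF server URL, and cron expression are placeholders, and the exact plugin syntax may differ between Scheduler for PCF versions), you would register the SCDF task-launch URL as a call and then put it on a schedule:

cf create-call scdf-server launch-taskA "http://my-scdf-server.example.com/tasks/executions?name=taskA"
cf schedule-call launch-taskA "0 2 ? * *"

The scheduler then hits that endpoint on the given schedule, and SCDF launches the task just as if you had called its executions endpoint yourself.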
