I have been trying to schedule a Spring Cloud Task via the PCF Scheduler, but I can't create a job from the app/task (I am following this documentation: http://docs.pivotal.io/pcf-scheduler/1-1/using-jobs.html).
$ cf apps
name         requested state   instances   memory   disk   urls
cloud-task   stopped           0/1         750M     1G

$ cf services
name           service             plan       bound apps   last operation
my-scheduler   scheduler-for-pcf   standard   cloud-task   create succeeded
$ cf create-job cloud-task my-task-job ".java-buildpack/open_jdk_jre/bin/java org.springframework.boot.loader.JarLauncher"
Creating job my-task-job for cloud-task with command .java-buildpack/open_jdk_jre/bin/java org.springframework.boot.loader.JarLauncher in org global-sales-marketing-customer-experience / space 141349-dev as zzh1bb
FAILED
The requested resource was not found.
Not Found
You must create an instance of the scheduler service in this space to use the scheduler service.
I am not sure why the job creation command cannot find the instance of the scheduler service. Am I missing something here?
Also, I am wondering if there is anything in Spring Cloud Data Flow that can schedule tasks.
From the output you should be able to create a job in that org/space.
Does the user (zzh1bb?) have SpaceDeveloper privileges? SpaceAdmin should also be sufficient.
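You can check which roles the user has with the cf CLI, for example (using the org and space shown in your output above):

$ cf space-users global-sales-marketing-customer-experience 141349-dev

The user should be listed under SPACE DEVELOPER.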
Does a cf task execute successfully using:
cf run-task cloud-task ".java-buildpack/open_jdk_jre/bin/java org.springframework.boot.loader.JarLauncher"
And do you see the results with:
cf tasks cloud-task
Another diagnostic step might be to check the output of the API calls described here:
http://docs.pivotal.io/pcf-scheduler/1-1/api/#create-job
What version of PCF are you using, and what version of Scheduler for PCF? There were significant changes in the Cloud Controller API between PCF 1.10, 1.11, and 1.12 that prevent a single version of the scheduler service from working across all of those versions.
As for scheduling tasks with SCDF, the Scheduler for PCF service can be used in conjunction with SCDF: you can invoke the task execution endpoint from a Scheduler for PCF call (https://docs.pivotal.io/pcf-scheduler/1-1/using-calls.html).
Call SCDF using the task execution endpoint:
http://...scdf server.../tasks/executions?name=taskA
documented here:
https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_launching_a_task_2
This is very useful and convenient especially when creating the SCDF service and the Scheduler for PCF service in the same space.
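As a rough sketch of what that wiring could look like with the Scheduler for PCF CLI plugin (the call name, cron expression, and SCDF server host below are placeholders; verify the exact syntax against the using-calls documentation above):

$ cf create-call scdf-server launch-taskA "http://<scdf-server-host>/tasks/executions?name=taskA"
$ cf schedule-call launch-taskA "0 5 * * *"

The scheduled call then POSTs to the SCDF execution endpoint on the given cron schedule, which launches taskA.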
I need some advice and an explanation for my case. Here is my Kogito setup:
Kogito service --> Data Index (PostgreSQL) --> Kogito Management Console --> Kogito Task Console.
I created a simple BPMN process; it contains just a single User Task.
Test scenario:
1) With the Kogito service, Management Console, and Task Console running, I submit the workflow and complete all of its phases in the Task Console. This works.
2) With the same services running, I submit the workflow so the task sits waiting in the Task Console. I then stop the Kogito service and start it again. Now, when I try to post the task from the Task Console, it returns the error "process instance with id 2493dndnxxx not found".
I don't understand why. I would really appreciate it if someone could explain this case: is it normal or not?
Thank you.
I expect someone can explain whether this is a normal situation or not.
My understanding was that the task could still be submitted against that process instance ID even after I stop the Kogito service, because we have the Data Index with PostgreSQL.
A Kogito service is ephemeral by default, which means any started process will be lost if you restart the service. To maintain the state, you must add one of the persistence add-ons to your Kogito runtime project. See the docs here for more information about the supported persistence types: https://docs.kogito.kie.org/latest/html_single/#con-persistence_kogito-developing-process-services.
In this other section, there are also more details about how that can be combined with other services such as the Data Index, which also supports different persistence types: https://docs.kogito.kie.org/latest/html_single/#con-data-index-service_kogito-configuring
Note that the Data Index only keeps a queryable copy of the data for the consoles; it does not restore runtime state to the Kogito service, which is why the restarted service no longer knows the process instance.
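As a heavily hedged sketch (the add-on artifact ID and exact property names vary across Kogito versions, so verify them against the persistence documentation linked above), enabling PostgreSQL persistence in a Quarkus-based Kogito project looks roughly like adding the add-on dependency and pointing it at your database:

<!-- pom.xml: runtime persistence add-on (artifact ID is version-dependent) -->
<dependency>
  <groupId>org.kie.kogito</groupId>
  <artifactId>kogito-addons-quarkus-persistence-postgresql</artifactId>
</dependency>

# application.properties: datasource the add-on persists process state into
quarkus.datasource.db-kind=postgresql
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/kogito
quarkus.datasource.username=kogito
quarkus.datasource.password=kogito

With runtime persistence in place, a restarted Kogito service can reload the waiting process instance instead of reporting it as not found.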
I'm migrating my Laravel 8 app to Cloud Run, but I have a problem with my scheduled tasks. My app uses Laravel Scheduling, and I have 5 tasks:
protected function schedule(Schedule $schedule)
{
    $schedule->command(Commands\CmdOne::class)->monthlyOn(1, '02:10');
    $schedule->command(Commands\CmdTwo::class)->dailyAt('04:00');
    $schedule->command(Commands\CmdThree::class)->dailyAt('04:00');
    $schedule->command(Commands\CmdFour::class)->dailyAt('05:00');
    $schedule->command('activations:clean')->daily();
}
But I think it's risky to run the cron inside the container, because Cloud Run can run multiple instances of my app and I fear the tasks would run multiple times. My tasks send email to my customers, and I want each to run exactly once.
E.g.: if Cloud Run creates 5 instances of my container at 05:00 AM, the command $schedule->command(Commands\CmdFour::class)->dailyAt('05:00'); would be executed 5 times, and I don't want that.
So I looked at Google Cloud Scheduler, and I could expose a web endpoint to run my tasks. But I don't know whether that's the right way, whether there is another way to execute my tasks, or whether removing the Laravel Scheduler is the right approach at all.
If I use Cloud Scheduler, I have to create 5 cron jobs in it. That's manageable for one application, but if I have 10 apps (same code base, different Cloud Run services), it becomes hard to manage: 5 crons per app, so 50 crons in total.
Do you have a better way to manage this?
If you have the right cache setup (shared by all servers) then you can use the onOneServer() method.
See https://laravel.com/docs/9.x/scheduling#running-tasks-on-one-server
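A minimal sketch, assuming your cache driver is one of the stores that supports atomic locks (redis, memcached, database, or dynamodb) and is shared by every instance:

protected function schedule(Schedule $schedule)
{
    // onOneServer() takes an atomic lock in the shared cache, so only the
    // first instance to acquire the lock runs the task for that time slot.
    $schedule->command(Commands\CmdFour::class)
             ->dailyAt('05:00')
             ->onOneServer();
}

You still need something to invoke "php artisan schedule:run" every minute; on Cloud Run that is typically a single Cloud Scheduler job hitting one endpoint per app, which also eases the "50 crons" concern: one cron per app rather than one per task.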
I am trying to call a Laravel API endpoint every minute.
The endpoint's method is GET; however, I could not find a way to specify the HTTP method in the cron.yaml file. Could I use the DELETE method here, and how? The code is to be deployed on Google Cloud.
I have created a cron.yaml file with the following format:

cron:
- description: "every minutes job"
  url: /deletestories
  schedule: every 1 mins
  retry_parameters:
    min_backoff_seconds: 2.5
    max_doublings: 5
I also created the deletestories endpoint, which deletes rows under specific conditions.
However, this isn't working: when I open the Google Cloud console I cannot find any errors or any executed cron jobs.
This cron.yaml file appears to be a Google App Engine cron configuration. If that is correct, then only the GET method is supported; you cannot use DELETE.
The GAE cron service itself consists simply of scheduled GET requests that your app needs to handle. From Scheduling Tasks With Cron for Python (the same applies to other languages and to the flexible environment cron as well):
A cron job makes an HTTP GET request to a URL as scheduled. The
handler for that URL executes the logic when it is called.
You also need to deploy your cron.yaml file for it to be effective. You should be able to see the deployed cron configuration in the developer console's Cron Jobs tab under the Task Queues Menu (where you can also manually trigger any of the cron jobs). The performed GET requests for the respective cron jobs should appear in your app's request logs as well, when executed.
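A minimal sketch of handling the cron request in Laravel (the route matches the cron.yaml above; the Story model and deletion criteria are hypothetical placeholders, and X-Appengine-Cron is the header GAE sets on genuine cron requests so you can reject outside callers):

// routes/web.php
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;
use App\Models\Story;  // hypothetical model

Route::get('/deletestories', function (Request $request) {
    // GAE strips this header from external traffic and sets it only on
    // requests coming from the cron service.
    if ($request->header('X-Appengine-Cron') !== 'true') {
        abort(403);
    }
    // Hypothetical condition: delete stories older than one day.
    Story::where('created_at', '<', now()->subDay())->delete();
    return response('ok');
});

Note that the deletion itself happens inside your handler, so a GET-triggered cron can still perform DELETE-like work; you do not need the HTTP DELETE method.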
I have a SysV-style init file for a service on CentOS 7.1.
When the system boots up, systemd generates a service file from it, and the service seems to be enabled for both run level 2 and run level 3.
I have the following questions:
1) Can the service be started twice, once at each run level? How can I prevent that if it can?
2) How can I check which run level the currently executing service was started at?
Thanks
Arvind
This depends on your service. If your service is already active, then starting it again will not do anything. You can check whether it is active by running "systemctl status yourservice.service". If your service is a short-lived process that exits, you can tell systemd to keep treating it as active after it quits; the directive for this is RemainAfterExit= (https://www.freedesktop.org/software/systemd/man/systemd.service.html#RemainAfterExit=).
To find out which run level your service was started by, look at the "systemctl show yourservice.service" output and check what is listed in the WantedBy= and RequiredBy= fields.
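For example (the service name is a placeholder):

# Is the service currently active? Starting an active service is a no-op.
systemctl status yourservice.service

# Which targets (systemd's replacement for run levels) pull the service in?
systemctl show yourservice.service -p WantedBy -p RequiredBy

# Which target the system actually booted into.
systemctl get-default

On CentOS 7, run levels 2 and 3 both map to multi-user.target, which is why the generated unit appears enabled for both; the service is still only started once per boot.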
I am involved in a project which requires me to create a job scheduler using Quartz Scheduler to schedule various jobs, which in turn trigger Pentaho Kettle transformation(s). The Kettle transformations are essentially ETL scripts performing some mundane activities in our case. I am facing a critical issue while running the scheduler:
We have around 10 jobs scheduled through the scheduler. For some 3 to 4 specific jobs it throws the following exception:
Unable to load the job from XML file [/home /transformations/jobs/TestJob.kjb] Unable to read file [file:///home /transformations/jobs/ TestJob.kjb] Could not read from "file:///home /transformations/jobs/TestJob.kjb" because it is a not a file.
org.pentaho.di.job.JobMeta.<init>(JobMeta.java:715)
org.pentaho.di.job.JobMeta.<init>(JobMeta.java:679)
com.XYZ.transformation.jobs.impl.JobBootstrapImpl.executeJob(JobBootstrapImpl.java:115)
com.XYZ.transformation.jobs.impl.JobBootstrapImpl.startJobsExecution(JobBootstrapImpl.java:100)
com.XYZ.transformation.jobs.impl.QuartzJobsScheduler.executeInternal(QuartzJobsScheduler.java:25)
org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:86)
org.quartz.core.JobRunShell.run(JobRunShell.java:223)
org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:549)
The weird thing is that, upon verifying the specified path, i.e. "/home /transformations/jobs/TestJob.kjb", the file is present and I am able to read it. Moreover, the job runs successfully and does everything it is supposed to, yet it throws the exception detailed above.
After observing closely, I strongly suspect that Quartz is internally caching jobs and/or their parameters. We load certain parameters required for the job to execute after it is triggered. Would it be possible to delete/purge the cache used by Quartz? I also tried killing all the Java processes running on the box (thinking that this would kill Quartz itself, as Quartz runs within a Java process) and restarting Quartz and its jobs afresh, but that didn't work as expected. It still picks up the old parameters from somewhere, perhaps some cache.
Versions used:
Spring Framework (spring-core & spring-beans) – 3.0.6.RELEASE
Quartz Scheduler – 1.8.6
Platform – Red Hat Linux – 2.6.18-308.el5
Pentaho Kettle – Spoon Stable Release – 4.3.0
I would do it this way:
Ensure first that the Pentaho job can run standalone, via a shell script, a Java service wrapper, or whatever; see the sketch below.
In the Quartz job, then use Quartz's NativeJob to call that same standalone script.
Just my two cents.
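A rough sketch against the Quartz 1.8 API (the script path, job and trigger names, and cron expression are placeholders; the script itself would wrap Kettle's kitchen.sh so the job is testable standalone):

import org.quartz.CronTrigger;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;
import org.quartz.jobs.NativeJob;

public class KettleJobScheduler {
    public static void main(String[] args) throws Exception {
        Scheduler scheduler = new StdSchedulerFactory().getScheduler();

        // NativeJob executes an external command; point it at the wrapper
        // script that launches the Kettle job via kitchen.sh.
        JobDetail job = new JobDetail("testJob", "kettle", NativeJob.class);
        job.getJobDataMap().put(NativeJob.PROP_COMMAND, "/opt/scripts/run-test-job.sh");
        job.getJobDataMap().put(NativeJob.PROP_WAIT_FOR_PROCESS, true);

        // Fire every 15 minutes (Quartz cron expression).
        CronTrigger trigger = new CronTrigger("testTrigger", "kettle", "0 0/15 * * * ?");

        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}

Because each firing spawns a fresh process for the Kettle job, nothing can linger in an in-process cache between runs, which side-steps the stale-parameter symptom described above.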
It looks to me like you have an extra space in the path:
/home /transformations/jobs/TestJob.kjb
between the "e" of "home" and the "/".
Remove that space; I can't possibly believe you actually have a home directory called "home "!