Good morning everyone. I hope you're all keeping safe and staying at home. So my problem is:
I have a NiFi project.
InvokeHTTP performs the "POST" method and the GenerateFlowFile processor contains the body of the POST request.
All I need to know is how to make this project run one time, i.e. the REST API needs to create one user and I need it to stop once that user is created! It's like running this project one and only one time!
Is it possible? How can we do it?
Set the processor's Run Schedule to a very large interval:
1111111110 sec - Run Schedule
This will make sure the processor runs only once: it fires as soon as you start it and is then not scheduled again for roughly 35 years.
We need to deploy our 4 applications (3 Spring Boot apps and 1 ZooKeeper) with docker stack. As our DevOps guy told us, there is no way to define in docker stack which application depends on another, like you can in docker compose, so we as developers need to solve it in code.
Can you tell me how to do that, or what the best way is? One of our applications has to be started first because that app manages the database (migrations and so on). The other applications can start once the database is prepared. Any ideas? Thanks.
If you want to run all 4 applications in one Docker container, you can refer to this post: Run multiple services in a container.
If you want to docker compose the 4 applications, you can refer to this post on startup order; it uses depends_on between your app images.
No matter which way you choose, you must write a script that checks whether your first app has finished preparing the database; you can refer to wait-for-postgres.sh to learn how to use sleep in a shell loop to repeatedly check the first app's status.
The more precise way I can suggest is, for example:
Put a shared static variable, set to false:
public static boolean is_app_start = false;
When you finish preparing your database, change this value to true.
Write a @RequestMapping("/is_app_start") in your controller to return this value.
Use curl in your shell script to check the value (see the sketch below).
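For illustration, here is a minimal sketch of that controller, assuming a Spring Boot app; the class name and the way the flag gets flipped are assumptions, only the is_app_start flag and the /is_app_start path come from the steps above:

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class StartupStatusController {

    // shared flag; the migration code sets it to true once the database is ready
    public static boolean is_app_start = false;

    // readiness endpoint polled by the other containers before they start their work
    @RequestMapping("/is_app_start")
    public boolean isAppStart() {
        return is_app_start;
    }
}

The other apps' startup scripts can then poll it with something like until curl -sf http://first-app:8080/is_app_start | grep -q true; do sleep 5; done (the host name, port and plain true/false response body are assumptions).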
I am using Hangfire hosted by IIS with an app pool set to "AlwaysRunning". I am using the Autofac extension for DI. Currently, when running background jobs with Hangfire, they are executing sequentially. Both jobs are similar in nature and involve file I/O. The first job executes and starts generating the requisite file. The second job starts executing, then stops until the first job is complete, at which point the second job resumes.

I am not sure if this is an issue related to DI and the lifetime scope. I tend to think not, as I create everything with instance-per-dependency scope. I am using OWIN to bootstrap Hangfire, and I am not passing any BackgroundJobServer options, nor am I applying any hints via attributes. What would be causing the jobs to execute sequentially? I am using the default configuration for workers.

I am sending a POST request to Web API and adding jobs to the queue with the following:
BackgroundJob.Enqueue<ExecutionWrapperContext>(c => c.ExecuteJob(job.SearchId, $"{request.User} : {request.SearchName}"));
Thanks In Advance
I was looking for that exact behavior recently and I managed to get it by using this attribute:
[DisableConcurrentExecution(<timeout>)]
Might it be that you have this attribute applied, either on the job or globally?
Is this what you were looking for?
var hangfireJobId = BackgroundJob.Enqueue<ExecutionWrapperContext>(x => x.ExecuteJob1(arguments));
hangfireJobId = BackgroundJob.ContinueWith<ExecutionWrapperContext>(hangfireJobId, x => x.ExecuteJob2(arguments));
This will basically execute the first part, and when that is finished it will start the second part.
I have an existing spark-job. Its functionality is to connect to the Kafka server, get the data, and then store the data into Cassandra tables. Currently this spark-job is run on the server from inside spark-2.1.1-bin-hadoop2.7/bin, but whenever I try to run this spark-job from another location, it does not run. This spark-job contains some JavaRDD-related code.
Is there any chance I can run this spark-job from outside as well, by adding some dependency in the pom or something else?
whenever I try to run this spark-job from another location, it does not run
spark-job is a custom launcher script for a Spark application, perhaps with some additional command-line options and packages. Open it, review the content and fix the issue.
If it's too hard to figure out what spark-job does and there's no one nearby to help you out, it's likely time to throw it away and replace it with the good ol' spark-submit.
Why don't you use it in the first place?!
Read up on spark-submit in Submitting Applications.
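The answer above recommends the spark-submit script; if you specifically want to trigger the job from other Java code by adding a dependency in the pom (as the question asks), Spark also ships a launcher API that wraps spark-submit. A minimal sketch, assuming the org.apache.spark:spark-launcher_2.11 dependency matching your Spark version, with placeholder paths and class names:

import org.apache.spark.launcher.SparkLauncher;

public class SubmitSparkJob {
    public static void main(String[] args) throws Exception {
        // builds and runs the equivalent spark-submit command programmatically
        Process spark = new SparkLauncher()
                .setSparkHome("/opt/spark-2.1.1-bin-hadoop2.7")   // assumption: Spark install location
                .setAppResource("/path/to/your-spark-job.jar")    // assumption: your application jar
                .setMainClass("com.example.YourSparkJobMain")     // assumption: your main class
                .setMaster("local[*]")                            // or your cluster's master URL
                .launch();
        int exitCode = spark.waitFor();
        System.out.println("spark job finished with exit code " + exitCode);
    }
}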
I have two apps running on the same server.
Now it seems that when adding withoutOverlapping() to the scheduled job and managing the base cron job via cron itself, these two apps are blocking each other's execution.
Could that be?
Yes, withoutOverlapping only works per application.
Laravel creates a file in the storage folder with a hash of the job. This way, if the file exists, Laravel knows the job is still running. The one application cannot possibly know if the other one is currently running a job because it does not have access to the storage folder of the other application.
If your code looks like the following:
$schedule->command('process:queue 0')->everyMinute()->withoutOverlapping();
$schedule->command('process:queue 1')->everyMinute()->withoutOverlapping();
It is because the same command with different parameters may be considered overlapping.
I.e. the hash of the job considers only the command signature.
How do I make sure that a script runs automatically every 24 hours? It should compare the current time with the time stored in the database, and if the current time is greater, I need to make changes to the database. I use CodeIgniter.
A cron job is the solution, but it is not a script; it is a server-side scheduler that runs a given command at the times you specify. You can set it up from the hosting panel, but not all hosting providers support it :(
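For illustration, a typical crontab entry for this (the path and the cron/compare_time controller and method names are placeholders, not from the question) calls the CodeIgniter controller through the PHP CLI once a day at midnight:
0 0 * * * php /var/www/yourapp/index.php cron compare_time
The controller method then does the time comparison and updates the database.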