I am completely new to Spring and Activiti, and made myself a little project which works just fine. There are 4 service tasks, a REST controller, 1 process, 1 service and 4 methods in this service.
When I call the server endpoint, I start my process and it just goes step by step through my service tasks, calling service.method as defined in the expression ${service.myMethod()}.
But what I really need is a workflow that stops after a service call and waits until another request is sent, similar to a user task waiting for input; the whole process should pause until I send a request to another endpoint.
For example myurl:8080/startprocess, and maybe the next day myurl:8080/continueprocess. Maybe even save some data for continued use.
Is there a simple, predefined way to do this?
Best regards
You can use human tasks for that, or use a "signal intermediate catching event" (see Activiti's user guide) after each activity.
When you do that, the first REST call will start a new process instance that will execute your flow's activities until it reaches the signal element. When this happens, the engine saves its current state and returns control to the caller.
In order to make your flow progress, you have to send it a "signal", something you can do with a Java API call (RuntimeService#signalEventReceived) or through the REST API (see item 15.6.2 in the guide).
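For illustration, in recent Activiti 5 releases the REST call to throw a signal looks roughly like the request below; the exact path and payload are version-dependent, so check the guide, and "continueSignal" is just an example name that must match the signal referenced by your catching event.
POST /activiti-rest/service/runtime/signals
Content-Type: application/json
{"signalName": "continueSignal"}
Your continueprocess endpoint would simply make this call (or invoke signalEventReceived on the RuntimeService directly), and the engine resumes the waiting process instance.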
I am working with the Activiti 5 REST API interface, integrated with the Spring Boot Activiti Starter, and I am trying to complete a process instance. I was able to instantiate a process definition, walk through the process instance's tasks and complete each of them. It works correctly until the end of the process, when there are no pending tasks left. I would expect the process instance to be completed - i.e. completed: true - as I have an end event (terminateEventDefinition), but it is not.
I could not find the REST API to complete the process instance. So, what's the correct way of managing process instance completion?
Thanks.
Perhaps I am missing something, but after the last task is completed the process will end normally and will no longer be visible in the /runtime/process-instances list.
Now, you mention that you complete the instance with a terminate end event. Terminate end events will complete the instance but will not set the "completed" flag; terminate is typically used for cancelling a running process.
Instead of terminate, you should use a regular end event (a plain endEvent with no terminateEventDefinition inside it); this should set the "completed" flag.
Again, maybe I am missing something in your description.
Thanks
Greg
I have a C# console app that reads a queue (in a database) and processes user-uploaded content.
It is running as a scheduled task.
If some user drops some content, the process won't pick it up until it is time for it to run.
If the job runs once every 5 minutes, content can wait up to 5 minutes before it is processed.
I want to process user content right away.
If user1 drops content and then user2 drops content after 30 seconds, I want 2 instances of my job running.
Is it possible to trigger the job/task to run from the C# code (MVC controller)?
Essentially it sounds like you're just looking to perform an asynchronous operation. Depending on the version of .NET, there are a number of options. For example, the Task Parallel Library is a simple way to invoke an asynchronous operation. So if the task is encapsulated into some object (for the sake of example, let's say a method called Process on an object called content) then it might look like this:
var processContent = new Task(() => content.Process());
processContent.Start();
The task would then continue asynchronously and flow control would return to the application (which is particularly ideal in a web application where you don't want a user looking at an unresponsive browser, naturally).
If you're using .NET 4.5, you can also make use of the async and await keywords to perhaps make it a little more clear. At its heart, you're talking about running a process in a separate thread.
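For example, a minimal sketch reusing the hypothetical content.Process() from above (Content standing in for whatever type that object is):
using System.Threading.Tasks;
// Fire and forget on the thread pool; Task.Run is the .NET 4.5 idiom:
Task.Run(() => content.Process());
// Or, inside an async method, await it so the calling thread is freed
// while the work runs but exceptions still propagate when it finishes:
public async Task ProcessContentAsync(Content content)
{
    await Task.Run(() => content.Process());
    // anything here runs after Process() has completed
}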
This all assumes that the web application and the console application share the same back-end business logic code. Which, of course, is probably what they should be doing. If for some reason that's not an option (and I recommend looking into making it an option), you can start a process from code. It could be something as simple as:
Process.Start("C:\Path\To\ConsoleApplication.exe");
Though forking all of these processes can look pretty messy and might be very difficult to manage in terms of error handling/logging/etc. That's another reason why it's best to keep the logic in the same process as the application, just in a separate thread.
A web application I am developing needs to perform tasks that take too long to be executed during the HTTP request/response cycle. Typically, the user will perform the request, and the server will take this request and, among other things, run some scripts to generate data (for example, render images with povray).
Of course, these tasks can take a long time, so the server should not wait for the scripts to finish before sending the response to the client. I therefore need to execute the scripts asynchronously, give the client a "the resource is here, but not ready" response, and probably tell it an AJAX endpoint to poll, so it can retrieve and display the resource when ready.
Now, my question is not about the design (although I would very much enjoy any hints in this regard as well). My question is: does a system to solve this issue already exist, so I do not reinvent the square wheel? If I had to, I would use a process queue manager to submit the task and expose an HTTP endpoint that reports the status - something like "pending", "aborted", "completed" - to the AJAX client, but if something similar already exists specifically for this task, I would gladly use it.
I am working in python+django.
Edit: Please note that the main issue here is not how the server and the client must negotiate and exchange information about the status of the task.
The issue is how the server handles the submission and enqueue of very long tasks. In other words, I need a better system than having my server submit scripts on LSF. Not that it would not work, but I think it's a bit too much...
Edit 2: I added a bounty to see if I can get some other answer. I checked pyprocessing, but I could not find a way to submit a job and reconnect to the queue at a later stage.
You should avoid re-inventing the wheel here.
Check out gearman. It has libraries in a lot of languages (including Python) and is fairly popular. Not sure if anyone has any out-of-the-box ways to easily connect up Django to gearman and AJAX calls, but it shouldn't be too complicated to do that part yourself.
The basic idea is that you run the gearman job server (or multiple job servers), have your web request queue up a job (like 'resize_photo') with some arguments (like '{photo_id: 1234}'). You queue this as a background task. You get a handle back. Your ajax request is then going to poll on that handle value until it's marked as complete.
Then you have a worker (or probably many), a separate Python process, that connects to this job server, registers itself for 'resize_photo' jobs, does the work and then marks the job as complete.
I also found this blog post that does a pretty good job of summarizing its usage.
You can try two approaches:
Call the web server every n seconds, passing a job id; the server processes the request and returns some information about the current execution of that task.
Implement a long-running page that sends data every n seconds; for the client, that HTTP request will "always" be "loading", and it collects new information every time a piece of data arrives.
To learn more about the second option, read about Comet; using ASP.NET, you can do something similar by implementing the System.Web.IHttpAsyncHandler interface.
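The first approach needs nothing special server-side: a plain controller action the client calls every n seconds will do. A sketch (JobStore.GetStatus is a hypothetical lookup backed by your own storage):
using System.Web.Mvc;
public class JobsController : Controller
{
    // The client polls GET /Jobs/Status?jobId=... every n seconds.
    [HttpGet]
    public ActionResult Status(string jobId)
    {
        // Hypothetical store; returns "pending", "aborted" or "completed".
        var status = JobStore.GetStatus(jobId);
        return Json(new { jobId, status }, JsonRequestBehavior.AllowGet);
    }
}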
I don't know of a system that does it, but it would be fairly easy to implement one's own system:
create a database table with jobid, jobparameters, jobresult
jobresult is a string that will hold a pickle of the result
jobparameters is a pickled list of input arguments
when the server starts working on a job, it creates a new row in the table and spawns a new process to handle it, passing that process the jobid
the task handler process updates the jobresult in the table when it has finished
a webpage (xmlrpc or whatever you are using) contains a method 'getResult(jobid)' that will check the table for a jobresult
if it finds a result, it returns the result, and deletes the row from the table
otherwise it returns an empty list, or None, or your preferred return value to signal that the job is not finished yet
There are a few edge-cases to take care of so an existing framework would clearly be better as you say.
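If you did roll your own, the core is small. Here is the shape of it sketched in C# (this thread mixes stacks; the answer above assumes Python + pickle, and Worker.exe and the jobs table here are made-up names):
using System;
using System.Data.SqlClient;
using System.Diagnostics;
public static class JobTable
{
    const string ConnStr = "...";  // your connection string
    // Web tier: insert a row, then spawn a worker process for the job.
    public static Guid Submit(string parameters)
    {
        var jobId = Guid.NewGuid();
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "INSERT INTO jobs (jobid, jobparameters, jobresult) VALUES (@id, @p, NULL)", conn))
        {
            cmd.Parameters.AddWithValue("@id", jobId);
            cmd.Parameters.AddWithValue("@p", parameters);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
        Process.Start("Worker.exe", jobId.ToString());  // hypothetical worker
        return jobId;
    }
    // Polling endpoint: null means "not finished yet".
    public static string GetResult(Guid jobId)
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT jobresult FROM jobs WHERE jobid = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", jobId);
            conn.Open();
            var value = cmd.ExecuteScalar();
            return value == null || value == DBNull.Value ? null : (string)value;
        }
    }
}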
First, you need a separate "worker" service, started at power-up, that communicates with the HTTP request handlers through some local IPC such as a UNIX socket (fast) or a database (simple).
While handling a request, the CGI process asks the worker for its state or other data and relays it to the client.
You can signal that a resource is being "worked on" by replying with a 202 HTTP code: the client side will have to retry later to get the completed resource. Depending on the case, you might have to issue a "request id" in order to match a request with a response.
Alternatively, you could have a look at existing COMET libraries which might fill your needs more "out of the box". I am not sure if there are any that match your current Django design though.
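A sketch of the 202 idea, shown in ASP.NET MVC terms since other answers in this thread use that stack (JobQueue.Enqueue and RenderRequest are hypothetical; the idea itself is plain HTTP and applies equally from Django):
[HttpPost]
public ActionResult Render(RenderRequest req)
{
    // Hypothetical queue call that hands back an id for later polling.
    var requestId = JobQueue.Enqueue(req);
    // 202 Accepted: the request was taken but the resource is not ready;
    // the client retries later, quoting the request id.
    Response.StatusCode = 202;
    return Json(new { requestId });
}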
Probably not a great answer for the python/django solution you are working with, but we use Microsoft Message Queue (MSMQ) for things just like this. It basically runs like this:
Website updates a database row somewhere with a "Processing" status
Website sends a message to the MSMQ (this is a non blocking call so it returns control back to the website right away)
Windows service (could be any program really) is "watching" the MSMQ and gets the message
Windows service updates the database row with a "Finished" status.
That's the gist of it anyway. It's been quite reliable for us and really straightforward to scale and manage.
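For concreteness, both ends of that might look roughly like this (the queue path and message body are just examples):
using System.Messaging;
// Website side: non-blocking send, control returns immediately.
var queue = new MessageQueue(@".\Private$\uploadedContent");
queue.Send("content-id-1234");
// Windows-service side: block until a message arrives, then process it.
var incoming = new MessageQueue(@".\Private$\uploadedContent");
incoming.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
var message = incoming.Receive();     // blocks until something is queued
var contentId = (string)message.Body; // process, then mark the row "Finished"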
-al
Another good option for python and django is Celery.
And if you think that Celery is too heavy for your needs then you might want to look at simple distributed taskqueue.
I have noticed that some of my AJAX-heavy sites (ones I visit, not ones I have built) have certain auto-refresh features. For example, in GMail, if I get a new message, I see the new message without a page reload. It's the same with the Facebook browser-based IM client. From what I can tell, there aren't any Java applets handling the server-browser binding, so I'm left to assume it's being done by AJAX and perhaps some element I'm unaware of. So by my best guess, it's done in one of two ways:
The JavaScript does a steady "ping" to a server-side script, checking for any updates that might be available (which would explain why some of these pages bring other heavy-duty pages to a crawl). Or
The JavaScript sits idly by and a server-side script actually "pushes" any updates to the browser. But I'm not sure if this is possible. I'd imagine there is some kind of AJAX function that still pings, but all it asks is "any updates?" and the server script has a simple boolean that says "nope" or "I'm glad you asked." But if this is the case, any data changes would need to call the script directly so that it has the data changes ready and flips that boolean.
So is that possible/feasible/how it works? I imagine something like:
Someone sends an email/IM/DB update to the server, the server calls the script using the script's URL plus some relevant GET variable, the script notes the change and updates the "updates available" variable, the AJAX gets the response that there are in fact updates, the AJAX runs its normal "update page" functions, which executes the normal update scripts and outputs them to the browser.
I ask because it seems really inefficient that the js is just doing a constant check which requires a) the server to do work every 1.5 seconds, and b) my browser to do work every 1.5 seconds just so that on my end I can say "Oh boy, I got an IM! just like a real IM client!"
Read about Comet
I've actually been working on a small .NET web app that uses the AJAX long-polling technique described.
Depending on what technology you're using, you could use thread signaling mechanisms to hold your request until an update is retrieved.
With ASP.NET I'm running my server on a single machine, so I store a reference to my Producer object (which contains a thread that processes the data). To initiate the data pull, my service's Subscribe method is called, which creates a Consumer object that's registered with the Producer. If the Consumer is in long-polling mode, it has an AutoResetEvent which is signaled whenever it receives new data; whenever the web client makes a request for data, the Consumer first waits on the reset event and then returns the data.
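Stripped down, the Consumer looks something like this (illustrative names, not from any real library):
using System;
using System.Threading;
public class Consumer
{
    private readonly AutoResetEvent _newData = new AutoResetEvent(false);
    private volatile string _latest;
    // Called by the Producer's worker thread when fresh data arrives.
    public void Publish(string data)
    {
        _latest = data;
        _newData.Set();   // wake up a waiting long-poll request
    }
    // Called by the long-polling web request; blocks (with a timeout so
    // the HTTP request can't hang forever) until data is published.
    public string WaitForData(TimeSpan timeout)
    {
        return _newData.WaitOne(timeout) ? _latest : null;
    }
}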
But you're mentioning something about PHP - as far as I know persistence is maintained through serialization, not actually keeping the object in memory, so I don't know how you could reference a Producer object using $_CACHE[] or $_SESSION[]. When I developed in PHP I never really knew anything about multithreading so I didn't play around with it, but I guess you can look into that.
Using infinite loops is going to consume a lot of your processing power - I would exhaust all other options first.
The workflow is being published as a WCF service, and I need to guarantee that workflows execute sequentially. Is there a way--in code or in the config--to guarantee the runtime doesn't launch two workflows concurrently?
There is no way to configure the runtime to limit the number of workflows in progress.
Consider, though, that it's the responsibility of the workflow itself to control flow. Hence the workflow itself should have the means to determine whether another instance of itself is currently in progress.
I would consider creating an Activity that would transactionally attempt to update a DB record to the effect that an instance of this workflow is in progress. If it finds that another is currently in progress it could take the appropriate action. It could fail or it could queue itself using an EventActivity to be alerted when the previous workflow has completed.
You probably will need to check at workflow start for another running instance.
If found, cancel it.
I don't agree that this needs to be handled at the WorkflowRuntime level. I like the idea of a custom Activity, sort of a MutexActivity that would be a CompositeActivity that has a DB backend. The first execution would log to the database it has a hold of the mutex. Subsequent calls would queue up their workflow IDs and then go idle. When the MutexActivity completes, it would release the Mutex, load up the next workflow in the queue and invoke the contained child activities.
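The acquire step of such a MutexActivity could be as small as a transactional UPDATE against a mutex table; a sketch (not a full Activity, and the table/names are made up):
using System;
using System.Data.SqlClient;
// Assumed table: WorkflowMutex(Name PRIMARY KEY, OwnerWorkflowId NULL).
public static bool TryAcquireMutex(string name, Guid workflowId, string connStr)
{
    using (var conn = new SqlConnection(connStr))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        using (var cmd = new SqlCommand(
            "UPDATE WorkflowMutex SET OwnerWorkflowId = @wf " +
            "WHERE Name = @name AND OwnerWorkflowId IS NULL", conn, tx))
        {
            cmd.Parameters.AddWithValue("@wf", workflowId);
            cmd.Parameters.AddWithValue("@name", name);
            // One row updated means we took the mutex; zero means another
            // workflow instance holds it, so we queue up and go idle.
            bool acquired = cmd.ExecuteNonQuery() == 1;
            tx.Commit();
            return acquired;
        }
    }
}
Releasing is the mirror-image UPDATE that sets OwnerWorkflowId back to NULL and wakes the next queued workflow.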