Handling user tasks in Camunda/Zeebe - Spring Boot

I am new to Camunda/Zeebe BPM. I am using the send-email workflow provided by Zeebe to test and play around. I am able to start the process from a Spring Boot application and to get the instance ID information. I want to pass the message content, which is defined in the user task, using a REST API call to my custom Spring Boot application. It would be great if anyone could point me in the right direction on the following:
1. How do I pass variable input to the user task from a Spring Boot application?
2. Is there an API to get the list of active tasks for a given instance ID (for Zeebe)?
3. Is it possible to make a REST API call to Zeebe directly to provide the input for the human task with the help of the instance ID?
4. Can you also point me to some details on how to do this with the Camunda server instead of Zeebe?

You can define a service task with the implementation set to a delegate expression and set the required variable in the Java class. The variable's scope is the whole process instance, so it is accessible from any user task. If you want, you can map variables explicitly on the Input/Output tab.
GET http://localhost:8080/engine-rest/task?processInstanceId=100001920009
You have to list the tasks with the API in point 2, then claim the task using the task ID, which is in the response of point 2:
POST http://localhost:8080/engine-rest/task/7676a188-e0f4-11eb-bd9c-22a3117ba565/claim
Then you can complete the task with:
POST http://localhost:8080/engine-rest/task/7676a188-e0f4-11eb-bd9c-22a3117ba565/complete
If variables are to be passed, use the template below as the body:
{
  "variables": {
    "aVariable": {
      "value": "aStringValue"
    },
    "anotherVariable": {
      "value": 42
    },
    "aThirdVariable": {
      "value": true
    }
  },
  "withVariablesInReturn": true
}
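Assuming a Camunda engine running locally behind the `engine-rest` endpoints shown above, the complete call could be issued from plain Java with the built-in `HttpClient`. This is only a sketch: the task ID is the example one from the answer, and the variable name `messageContent` is made up for illustration.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class CompleteTaskExample {
    static final String BASE = "http://localhost:8080/engine-rest";

    // Build the JSON body for the "complete" call with one string variable,
    // matching the template from the answer above.
    static String completeBody(String name, String value) {
        return "{\"variables\":{\"" + name + "\":{\"value\":\"" + value + "\"}},"
                + "\"withVariablesInReturn\":true}";
    }

    public static void main(String[] args) {
        String taskId = "7676a188-e0f4-11eb-bd9c-22a3117ba565"; // taken from the GET /task response
        String body = completeBody("messageContent", "Hello from Spring Boot");
        HttpRequest complete = HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/task/" + taskId + "/complete"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        System.out.println(complete.method() + " " + complete.uri());
        // With an engine actually running, send it with:
        // HttpClient.newHttpClient().send(complete, HttpResponse.BodyHandlers.ofString());
    }
}
```

The claim call works the same way, just with the `/claim` path and a `{"userId": "..."}` body.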

Related

How to allow clients to hook into preconfigured hooks in application

I have a pretty standard application with a frontend, a backend and some options in the frontend for modifying data. My backend fires events when data is modified (eg. record created, record updated, user logged in, etc.).
Now what I want to do is for my customers to be able to code their own functions and "hook" them into these events.
So far the approaches I have thought of are:
Allowing users in the frontend to write some code in a code editor like CodeMirror, but this whole business of storing code and executing it with some eval() seems risky and unstable.
My second approach is illustrated below (to the best of my ability at least). The point is that the CRUD API calls a different "hook" web service that has these (recordUpdated, recordCreated, userLoggedIn,...) hook methods exposed. Then the client library needs to extend some predefined interfaces for the different hooks I expose. This still seems doable, but my issue is I can't figure out how my customers would deploy their library into the running "hook" service.
So it's kind of like webhooks, except I already know the exact hooks to be created which I figured could allow for an easier setup than customers having to create their own web services from scratch, but instead just create a library that is then deployed into an existing API (or something like that...). Preferably the infrastructure details should be hidden from the customers so they can focus solely on making business logic inside their custom hooks.
It's kind of hard to explain, but hopefully someone will get it and can tell me if I'm on the right track, or if there is a more standard way of doing hooks like these?
Currently the entire backend is written in C# but that is not a requirement.
I'll just draft out the main framework, then wait for your feedback to fill in anything unclear.
Disclaimer: I don't really have expertise with security and sandboxing. I just know it's an important thing, but really, it's beyond me. You go figure it out 😂
Suppose we're now in a safe sandbox where all malicious behaviors are magically taken care of; let's write some Node.js code for that "hook engine".
How users deploy their plugin code.
Let's assume we use file-based deployment. The interface you need to implement is a PluginRegistry.
class PluginRegistry {
  constructor() {
    /**
     * The plugin registry holds records of plugin info:
     * type IPluginInfo = {
     *   userId: string,
     *   hash: string,
     *   filePath: string,
     *   module: null | object,
     * }
     */
    this.records = new Map()
  }

  register(userId, info) {
    this.records.set(userId, info)
  }

  query(userId) {
    return this.records.get(userId)
  }
}
const fs = require('fs')
const http = require('http')

// the plugin registry should be a singleton in your app
const pluginRegistrySingleton = new PluginRegistry()

// the app opens an HTTP endpoint that accepts plugin registration;
// this is how you receive user-provided code
const server = http.createServer((req, res) => {
  if (isPluginRegistration(req)) {
    let { userId, fileBytes, hash } = processRequest(req)
    let filePath = pluginDir + '/' + hash + '.js'
    let pluginInfo = {
      userId,
      // you should use some kind of hash
      // to uniquely identify the plugin
      hash,
      filePath,
      // the "module" field is left empty;
      // it will be lazy-loaded when the
      // plugin code is actually needed
      module: null,
    }
    let existingPluginInfo = pluginRegistrySingleton.query(userId)
    if (existingPluginInfo && existingPluginInfo.hash === hash) {
      // already exists, skip
      res.writeHead(200, { 'Content-Type': 'text/plain' })
      res.end('ok')
    } else {
      // plugin code written down somewhere
      fs.writeFile(filePath, fileBytes, (err) => {
        if (err) {
          res.writeHead(500, { 'Content-Type': 'text/plain' })
          res.end('failed to store plugin')
          return
        }
        pluginRegistrySingleton.register(userId, pluginInfo)
        res.writeHead(200, { 'Content-Type': 'text/plain' })
        res.end('ok')
      })
    }
  }
})
server.listen(port)
From the perspective of the hook engine, it simply opens an HTTP endpoint to accept plugin registration, agnostic to the source.
Whether it comes from a CI/CD pipeline or a plain web-interface upload doesn't matter. If you have a CI/CD setup for your users, it is just a dedicated build machine that runs bash scripts, isn't it? So just fire a curl call to this endpoint to upload whatever you need. The same applies to a web interface.
How we would execute plugin code
User plugin code is just normal Node.js module code. You might instruct users to expose certain APIs and conform to your protocol.
class HookEngine {
  constructor(pluginRegistry) {
    // dependency injection
    this.pluginRegistry = pluginRegistry
  }

  // hook: the call payload should identify the user
  oncreate(payload) {
    const pluginInfo = this.pluginRegistry.query(payload.user.id)
    if (!pluginInfo) return
    // lazy-load the plugin module when needed
    if (!pluginInfo.module) {
      pluginInfo.module = require(pluginInfo.filePath)
    }
    // a user plugin module is just a normal Node.js module:
    // you load it with `require`, and you call whatever function you need
    pluginInfo.module.oncreate(payload)
  }
}

Spring & Activiti workflow: Get all information from a Task

I am working on a simple project using the Activiti utility. I just started using it and don't know much about it yet, but I want to retrieve all the information from a Task. I tried an example, but it returns just the ID and the name, while I need more information to be returned.
This is what I tried:
public List<Task> getTasks(String assignee) {
    return taskService.createTaskQuery().taskAssignee(assignee).list();
}
This piece of code returns this result:
But I need more information, like the assignee, task properties, etc. Any help?
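The other fields are usually still present on the returned `Task` objects; they just don't show up in the default string representation. One common approach is to map each task to a small view object via the `Task` getters. A sketch, assuming the standard Activiti getters (`getId()`, `getName()`, `getAssignee()`, `getDescription()`); the `TaskInfoView` name is made up, and the local `TaskLike` interface only mirrors those getters so the sketch is self-contained (in a real project you would use `org.activiti.engine.task.Task` directly):

```java
import java.util.ArrayList;
import java.util.List;

public class TaskInfoView {
    // Mirrors the Activiti Task getters used below; stand-in for
    // org.activiti.engine.task.Task so the sketch compiles on its own.
    interface TaskLike {
        String getId();
        String getName();
        String getAssignee();
        String getDescription();
    }

    public final String id, name, assignee, description;

    public TaskInfoView(TaskLike t) {
        this.id = t.getId();
        this.name = t.getName();
        this.assignee = t.getAssignee();
        this.description = t.getDescription();
    }

    // Map a task-query result list to view objects for the API response.
    public static List<TaskInfoView> fromTasks(List<? extends TaskLike> tasks) {
        List<TaskInfoView> out = new ArrayList<>();
        for (TaskLike t : tasks) {
            out.add(new TaskInfoView(t));
        }
        return out;
    }
}
```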

Provision custom-named SQS Queue with PCF Service Broker

I'm trying to create a new queue, but when using
cf create-service aws-sqs standard my-q
the name of the queue in AWS is automatically assigned and is just an id composed of random letters and numbers.
This is fine when using the normal Java client. However, we want to use spring-cloud-aws-messaging (the @SqsListener annotation), because it offers us deletion policies out of the box and a way to extend visibility, so that we can implement retries easily.
@SqsListener(value = "my-q", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void listen(TestItem item, Visibility visibility) {
    log.info("received message: " + item);
    // do business logic
    // if the call fails
    visibility.extend(1000);
    // throw exception
    // if no failure, the message will be dropped
}
The queue name on the annotation is declared statically, so we can't change it dynamically after reading the VCAP_SERVICES environment variable injected by PCF into the application.
The only alternative we can think of is to use reflection to set accessibility on the annotation's value and set it to the name from VCAP_SERVICES, but that's just nasty, and we'd like to avoid it if possible.
Is there any way to change the name of the queue to something specific on creation? This suggests that it's possible, as seen below:
cf create-service aws-sqs standard my-q -c '{ "CreateQueue": { "QueueName": “my-q”, "Attributes": { "MaximumMessageSize": "1024"} } }'
However, this doesn't work. It returns:
Incorrect Usage: Invalid configuration provided for -c flag. Please
provide a valid JSON object or path to a file containing a valid JSON
object.
How do I set the name on creation of the queue? Or the only way to achieve my end goal is to use reflection?
EDIT: As pointed out by Daniel Mikusa, the double quotes were not real double quotes, and that was causing the error. The command is successful now, but it doesn't create the queue with the intended name. I'm now wondering if this name needs to be set on bind-service instead. That command has a -c option too, but I cannot find any documentation on which parameters are available for an aws-sqs service.
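One escape hatch that avoids reflection (a sketch, not verified against this broker): `@SqsListener` resolves `${...}` property placeholders, so if the generated queue name can be extracted from `VCAP_SERVICES` at startup and exposed as a property, the annotation can stay static, e.g. `@SqsListener("${app.sqs.queue-name}")`. A minimal stdlib-only sketch of the extraction step; the `queue_url` key and the JSON shape are assumptions, so check your actual `VCAP_SERVICES` payload, and a real application should use a JSON parser rather than a regex:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QueueNameResolver {
    // Pull the first SQS queue URL out of a VCAP_SERVICES-style JSON string
    // and return its last path segment (the queue name).
    static String queueNameFrom(String vcapServices) {
        Matcher m = Pattern.compile("\"queue_url\"\\s*:\\s*\"([^\"]+)\"").matcher(vcapServices);
        if (!m.find()) {
            return null;
        }
        String url = m.group(1);
        return url.substring(url.lastIndexOf('/') + 1);
    }

    public static void main(String[] args) {
        String vcap = "{\"aws-sqs\":[{\"credentials\":{"
                + "\"queue_url\":\"https://sqs.us-east-1.amazonaws.com/123456789012/abc123\"}}]}";
        // Expose the name as a property before the listener container starts,
        // e.g. from a Spring EnvironmentPostProcessor:
        System.setProperty("app.sqs.queue-name", queueNameFrom(vcap));
        System.out.println(System.getProperty("app.sqs.queue-name"));
    }
}
```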

How to create APIs in Spring that use up-to-date environment values

I want to create an API which takes part of its input from the environment (URLs etc.). Basically a jar file.
I also want the values to be auto-updated when application.properties changes. When there is a change, this is called:
org.springframework.cloud.endpoint.RefreshEndpoint#refresh
However, I consider it bad practice to have magic environment-variable keys like 'server.x.url' in the API contract between the client application and the jar. (Problem A)
That's why I'd like to use an API like the one below, but then there's the problem of stale values:
public class MyC {
    TheAPI theApi = null;

    MyC() {
        theApi = new TheAPI();
        theApi.setUrl(env.get("server.x.url"));
    }

    void doStuff() {
        theApi.doStuff(); // fails, as theApi has an obsolete value of server.x.url (Problem B)
    }
}
So either I have an ugly API contract or I get obsolete values in the API calls.
I'm sure there must be a Spring way of solving this, but I can't get my head around it just now.
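One pattern that sidesteps both problems is to have the jar accept a `Supplier<String>` instead of a fixed URL, so the value is re-read on every call. In Spring the supplier can simply delegate to `Environment::getProperty` (and `@RefreshScope` on the wrapping bean is another option). A sketch; the `TheApi` class is the hypothetical API from the question, and a plain `Map` stands in for Spring's `Environment`:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class LazyUrlApi {
    // Hypothetical API object: resolves the URL at call time, not at construction time.
    static class TheApi {
        private final Supplier<String> urlSupplier;

        TheApi(Supplier<String> urlSupplier) {
            this.urlSupplier = urlSupplier;
        }

        String doStuff() {
            // every call sees the current value, so a refresh is picked up automatically
            return "calling " + urlSupplier.get();
        }
    }

    public static void main(String[] args) {
        Map<String, String> env = new HashMap<>(); // stands in for Spring's Environment
        env.put("server.x.url", "http://old-host");
        TheApi api = new TheApi(() -> env.get("server.x.url"));
        System.out.println(api.doStuff());
        env.put("server.x.url", "http://new-host"); // simulated property refresh
        System.out.println(api.doStuff());
    }
}
```

This keeps the magic key out of the API contract too: the client passes a supplier, and only the client decides which property backs it.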

Is it Possible to have more than one messages file in Play framework

We have a site which will be used by two different clients. During the first request the user is asked to choose a client; based on that choice, the text, labels and site content should be displayed.
Is it possible to have two messages files in the Play framework and decide at session startup which messages file is used?
From my research, we can have more than one file per locale; the messages are fetched based on the locale in the request.
No, it is not supported at the moment.
You can easily do that either in a plugin (look at MessagesPlugin) or even using a bootstrap job with the @OnApplicationStart annotation:
// From MessagesPlugin.java
// default language messages
VirtualFile appDM = Play.getVirtualFile("conf/messages");
if (appDM != null && appDM.exists()) {
    Messages.defaults.putAll(read(appDM));
}

static Properties read(VirtualFile vf) {
    if (vf != null) {
        return IO.readUtf8Properties(vf.inputstream());
    }
    return null;
}
You can write your own PlayPlugin and implement play.PlayPlugin.getMessage(String, Object, Object...). Then you can choose the right file. The class play.i18n.Messages can be used as inspiration for how to implement the method.
Solved this problem with the solution below:
1. Created a class MessagesPlugIn which extends play.i18n.MessagesPlugin
2. Created a class Messages like play.i18n.Messages
3. Added a static Map messagesByClientId in Messages.java
4. Overrode onApplicationStart() in MessagesPlugIn
5. Loaded the properties into messagesByClientId the same way the locales are loaded in play.i18n.MessagesPlugin
6. Added a get() method in Messages.java that retrieves the property from messagesByClientId based on the client ID in the session; if the property is not available, it calls get() in play.i18n.Messages
7. Created a custom tag i18nTag, used in the HTML templates, which invokes Messages.get()
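The per-client lookup in steps 3-6 can be sketched with plain `Properties` bundles. A sketch only: the names follow the steps above, and the `defaults` bundle stands in for the fallback to play.i18n.Messages.get():

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class ClientMessages {
    // One Properties bundle per client, loaded at application start.
    static final Map<String, Properties> messagesByClientId = new HashMap<>();
    // Stand-in for the default play.i18n.Messages bundle.
    static final Properties defaults = new Properties();

    static String get(String clientId, String key) {
        Properties clientProps = messagesByClientId.get(clientId);
        if (clientProps != null && clientProps.getProperty(key) != null) {
            return clientProps.getProperty(key);
        }
        // fall back to the default messages file
        return defaults.getProperty(key);
    }
}
```

A template tag would then call `ClientMessages.get(session.clientId, key)` instead of the built-in messages lookup.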
Create your own Module based on play.api.i18n.I18nModule, but bound to your own implementation of MessagesApi, based on Play's DefaultMessagesApi (here is the part defining the files to load)
Then in your application.conf, disable Play's play.api.i18n.I18nModule and enable your own module.
