What causes the version of a NiFi component to update? - apache-nifi

While building a NiFi flow I've noticed the versions of the components changing.
I understand that the version changes each time the component updates - but what is considered an update of a component?
For example, what causes an update in a connection's version?
I've tried to find a pattern, but without much luck.
Thanks in advance!

The official documentation states that you can have multiple versions of your flow at the same time:
You have access to information about the version of your Processors, Controller Services, and Reporting Tasks. This is especially useful when you are working within a clustered environment with multiple NiFi instances running different versions of a component or if you have upgraded to a newer version of a processor.
You can also opt out of versioning altogether.
Ways to disable versioning:
NiFi UI: right-click the versioned process group and select Version → Stop version control (link).
REST API: send an HTTP DELETE request to /versions/process-groups/{id} with the appropriate ID (see the example after this list).
Toolkit CLI: you can also view available versions by executing ./bin/cli.sh registry diff-flow-versions (link).
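For illustration, a hedged sketch of the REST call above (the host, port, process group ID, and revision number are placeholders; the version query parameter is assumed to carry the group's current revision):

    # Stops version control on a process group; ID and revision are hypothetical.
    curl -X DELETE \
      "http://localhost:8080/nifi-api/versions/process-groups/a1b2c3d4?version=1"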

Related

Why does the version of existing Camunda workflows get updated when adding a new workflow?

I'm facing a situation where adding a new BPMN file to the resources folder is making the version of existing workflows increase by 1. Is this the expected behaviour of Camunda?
E.g.:
I had three workflows initially: Workflow_1.bpmn, Workflow_2.bpmn and Workflow_3.bpmn (all three deployed in one go). When I added another workflow file, Workflow_4.bpmn, the version of each of workflows 1, 2 and 3 increased by 1, i.e. after deployment the existing workflows became version 2 while Workflow_4.bpmn was deployed with version 1. What should be done to prevent re-versioning of the existing workflows?
If there is no change in an existing workflow, on what grounds did Camunda identify it as a new version, and is there any way to find that out?
I'm already aware that we can download different versions of a workflow file from the deployment page of Cockpit itself, and also compare workflow files for changes using the API below.
http://localhost:8080/engine-rest/process-definition/{processDefinitionId}/xml
So I've already downloaded and compared the existing workflow files with the new versions and found no change; why, then, did Camunda update the version number of the existing workflows?
Framework: Spring Boot
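As a hedged sketch of the comparison workflow described above (the process-definition IDs are hypothetical placeholders, and jq is used only to pull the bpmn20Xml field out of the JSON response):

    # Diff the BPMN XML of version 1 and version 2 of the same definition.
    # The IDs after /process-definition/ are hypothetical placeholders.
    diff \
      <(curl -s http://localhost:8080/engine-rest/process-definition/Workflow_1:1:aaa/xml | jq -r .bpmn20Xml) \
      <(curl -s http://localhost:8080/engine-rest/process-definition/Workflow_1:2:bbb/xml | jq -r .bpmn20Xml)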

Node (maven) to deploy the application to several environments

On Jelastic, I created a node for building an application (Maven). There are several identical environments (NGINX + Spring Boot); the differences are the database each one binds to and its configured SSL.
The task is to ensure that after the application (*.jar) is built, it is deployed to all of these environments at the same time. How can this be implemented?
When editing a project, it is possible to specify only one environment; multi-selection is not provided.
Indeed, it is only possible to specify one environment per project.
We suggest creating the environments from one repository branch and running updates through the API (https://docs.jelastic.com/api/#!/api/environment.Vcs-method-Update), pushing the whole code to VCS.
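A rough, hedged sketch of driving that update from a script (the base URL pattern follows Jelastic's public API convention, but the exact parameter names are assumptions to verify against the linked docs):

    # Assumed pattern: https://app.jelastic.com/1.0/{namespace}/rest/{method}
    # session and envName are assumed parameter names; check the linked docs.
    for ENV in env-one env-two env-three; do
      curl -s "https://app.jelastic.com/1.0/environment/vcs/rest/update" \
        --data-urlencode "session=${JELASTIC_SESSION}" \
        --data-urlencode "envName=${ENV}"
    done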
It's also possible to use CloudScripting to attach custom logic to the onAfterBuildProject event and deploy the project to additional environments after the build completes. Please check this JPS as an example of the code syntax. Most likely you will need the DeployProject API method.

Please migrate off JSON-RPC and Global HTTP Batch Endpoints - Dataflow Template

I received an email with the title above as the subject. Says it all. I'm not directly using the specified endpoint (storage#v1). The project in question is a postback catcher that funnels data into BigQuery:
App Engine > Pub Sub > Dataflow > Cloud Storage > BigQuery
A related question here indicates that Dataflow might be using it indirectly. I'm only using the Cloud PubSub to GCS Text template.
What is the recommended course of action if I'm relying on a template?
I think the warning may come from a Dataflow job that uses an old version of the storage API. Please upgrade the Dataflow/Beam SDK version beyond 2.5.
Since you're using our PubsubToText template, the easiest way to do it would be:
Stop your pipeline. Be sure to select "Drain" when asked.
Relaunch the pipeline from the same subscription using the newest template version (this happens automatically if you're using the UI).
Check the SDK version. It should be at least 2.7.
After that you should not see any more warnings.
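For example, with the gcloud CLI the drain-and-relaunch could look roughly like this (the job ID, job name, region, topic, and bucket are placeholders; the template path assumes the Google-provided Cloud_PubSub_to_GCS_Text template):

    # Drain the running job so in-flight data is flushed before it stops.
    gcloud dataflow jobs drain JOB_ID --region=us-central1

    # Relaunch from the latest published template; all names are placeholders.
    gcloud dataflow jobs run pubsub-to-gcs-text \
      --gcs-location=gs://dataflow-templates/latest/Cloud_PubSub_to_GCS_Text \
      --region=us-central1 \
      --parameters=inputTopic=projects/MY_PROJECT/topics/MY_TOPIC,outputDirectory=gs://MY_BUCKET/output/,outputFilenamePrefix=postback-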

Multitenancy in Apache NiFi

I am working on a cloud-based application using Apache NiFi, and for this we need to support multitenancy. But the current NiFi implementation only supports role-based access for users, for a single flow.
I understand that the flow state is saved as a single compressed XML file per NiFi instance, so whoever logs into that instance sees the same flow. Our requirement is to create a unique flow for each user login. I tried replicating the state-saving gzipped XML file per user, but couldn't succeed, as the FlowService/FlowController that loads the XML file is instantiated at application startup and is a singleton. Please correct me if I am wrong with this approach, or is there any other solution for adding multitenant support to NiFi? I also wonder about the reason behind NiFi being a single-user application.
Multi-tenant support will be introduced in Apache NiFi 1.0.0; there is a BETA release available [1]. This will support assigning permissions on a per-component basis. However, the different tenants still share a canvas. There have been discussions of introducing a workspace concept that could provide visually separate dataflows.
[1] https://nifi.apache.org/download.html

Managing custom module releases

I am looking for any best practices and/or recommendations around how best to manage releases for custom modules in a production environment running on the Spring-XD platform.
Specifically, suppose I have a custom module foo-1.0.0 deployed into a farm of XD containers and I wish to rev it to version foo-1.1.0. What are my alternatives? I gather the following might work (from looking at other questions and docs):
Assuming a shared filesystem/directory for each server/container, the custom module jar can be replaced and the container will pick up the new version without needing a server restart. Will this work? Does the jar name need to stay the same, or will it work with version-named jars?
Maintain duplicate/mirrored container environments so that one set of containers can be updated by properly removing the streams/jobs/modules and then bringing the environment back up with the updated module version (though this is expensive from a hardware perspective), basically doing a rolling upgrade of sorts (see the sketch after this list).
Any other ways?
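As a hedged sketch of option 2's remove-and-redeploy cycle using the xd-shell (the stream and module names are made up, and the exact command flags should be checked against the shell's built-in help):

    // "mystream" and "foo" are hypothetical names; flags may differ per XD version.
    xd:> stream undeploy --name mystream
    xd:> module delete --name processor:foo
    xd:> module upload --type processor --name foo --file /path/to/foo-1.1.0.jar
    xd:> stream deploy --name mystream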
An ancillary question: how easy is it to expose the version of the custom module being used by a given container?
Any thoughts would be appreciated.
Thanks,
Mark
