Policy changes to specific processor - apache-nifi

Good afternoon. I'm able to change the global policies for NiFi through the REST API; however, I'm trying to edit the access policies for an ARBITRARY processor and have no idea how to do so. Everything in the NiFi REST API documentation seems to call everything a "component" (or maybe I'm misinterpreting...).
Anyway, I appreciate all the help/guidance!

The NiFi UI uses the API behind the scenes to perform every action. You can set policies on process groups, remote process groups, processors, funnels, input & output ports, queues, controller services, and reporting tasks. Collectively, these resources are called "components".
If a policy is not set on a specific component, it inherits the policies set on its parent object (i.e., the process group containing it). You can override these inherited policies directly at a granular level.
To set the policy for a specific component, use the POST /policies API. The easiest way to observe the exact API invocation required is to use your browser's developer tools to record the calls the UI makes while you perform the action manually, and then replay those calls yourself.
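For illustration, a minimal Python sketch of that call might look like the following. The host, token, and IDs are placeholders, and the exact entity shape should be verified against the REST API docs for your NiFi version:

import requests

NIFI_API = "https://nifi.example.com:8443/nifi-api"  # hypothetical host
TOKEN = "access-token"                               # hypothetical bearer token
PROCESSOR_ID = "processor-uuid"                      # hypothetical processor ID
USER_ID = "user-uuid"                                # hypothetical user ID

# Create a policy granting one user read access to one specific processor.
policy = {
    "revision": {"version": 0},
    "component": {
        "action": "read",  # "read" to view the component, "write" to modify it
        "resource": f"/processors/{PROCESSOR_ID}",
        "users": [{"revision": {"version": 0}, "id": USER_ID}],
        "userGroups": [],
    },
}
resp = requests.post(f"{NIFI_API}/policies", json=policy,
                     headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print(resp.json()["id"])  # ID of the newly created policy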
There are also other tools that make this process easier, such as the official NiFi Toolkit CLI and the (unofficial but very good) NiPyAPI.

Related

System API in MuleSoft

I have a requirement to persist some data in a table (single table). The data is coming from the UI. Do I need to write just the system API and persist the data, or do I need to write both a process and a system API? I don't see a use for a process API in this case. Please suggest. Is it always necessary to access a system API through a process API, or can a system API be invoked without a process API as well?
I would recommend a fine-grained approach to this. Requests should still flow through the experience layer even though we do not have much customization of the data.
In short: an experience-layer API calling the system-layer API directly (if there is no orchestration, data conversion, or formatting needed).
Why do we need a system API and an experience API? A couple of points.
First, the system API should be closely tied to the underlying system, so that if the system changes in the future, the change does not impact any of the clients.
Second, adding an upper layer gives us the flexibility to apply different SLAs, policies, logging, and much more to different clients. Even if you have a single client right now, it's better to architect for the future. Reuse is the key advantage of these APIs.
Please check Pattern 2 in this document
That is a question for the enterprise architect in your organisation. In this case, the process API would probably be a simple proxy for the system API, but that might not always be the case in future. Also, it is sometimes useful to follow a standard architectural pattern even if it creates some spurious complexity in the implementation. As always, there are design trade-offs and the answer will depend on factors that cannot be known by people outside of your organisation.

NiFi: Modify flow without disruption or downtime using Java API

Is there a way to modify a NiFi flow dynamically using a Java API? The use case is to add a processor to an active dataflow (data is flowing through it). The new processor should be added at the beginning of the flow without application disruption or downtime.
If a Java API is not available, please feel free to suggest alternatives. I have already looked at change-nifi-flow-using-rest-api-part-1. Thanks.
Any action you can perform from the UI can also be performed through the REST API; the UI is just making calls to the REST API behind the scenes.
I would suggest opening Chrome's Dev Tools, performing the action you are interested in, and then seeing what requests were made to perform it. You can then script these operations however you need (see the sketch below).
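As a rough sketch of what such a scripted call can look like, this Python snippet creates a new processor in a running process group over the REST API. The host and process group ID are placeholders, and a secured instance would also need authentication headers:

import requests

NIFI_API = "http://localhost:8080/nifi-api"  # hypothetical unsecured dev instance
PG_ID = "process-group-uuid"                 # hypothetical process group ID

# Create a new processor inside the (running) process group.
new_processor = {
    "revision": {"version": 0},
    "component": {
        "type": "org.apache.nifi.processors.standard.LogAttribute",
        "position": {"x": 0.0, "y": 0.0},
    },
}
resp = requests.post(f"{NIFI_API}/process-groups/{PG_ID}/processors",
                     json=new_processor)
resp.raise_for_status()
print(resp.json()["id"])  # NiFi assigns the new processor's ID

# Wiring it into the existing flow would be a follow-up call to
# POST /process-groups/{PG_ID}/connections; the rest of the flow
# keeps running while you do this.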
In addition, if you are trying to deploy flows, you should take advantage of NiFi Registry, which lets you place a flow under version control. You can then make changes in your local or dev instance and upgrade the flow in production in place, without stopping your whole NiFi instance.

NiFi: Production usage without web UI

Here are some commonly suggested approaches for using NiFi without the web UI, along with their respective limitations. Is there a better way to use NiFi in production without the web UI while still being able to make changes to the dataflow design dynamically?
REST API approach: the REST APIs require prior knowledge of component IDs and do not work with component NAMEs (though names can be resolved to IDs at runtime; see the sketch after this list).
MiNiFi approach: MiNiFi is more focused on collecting data at the source. Additionally, MiNiFi configuration is likewise tied to prior knowledge of component IDs rather than NAMEs.
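On the first limitation: component names can be resolved to IDs at runtime through NiFi's search endpoint. A hedged Python sketch, assuming an unsecured instance and the usual searchResultsDTO response shape:

import requests

NIFI_API = "http://localhost:8080/nifi-api"  # hypothetical dev instance

def find_processor_id(name):
    # Search the flow; the response groups matches by component type.
    resp = requests.get(f"{NIFI_API}/flow/search-results", params={"q": name})
    resp.raise_for_status()
    matches = resp.json()["searchResultsDTO"]["processorResults"]
    # Names are not unique in NiFi, so insist on exactly one exact match.
    exact = [m for m in matches if m["name"] == name]
    if len(exact) != 1:
        raise ValueError(f"expected one processor named {name!r}, found {len(exact)}")
    return exact[0]["id"]

print(find_processor_id("MyProcessor"))  # hypothetical processor name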
A typical NiFi dataflow goes through the following environment lifecycle.
You build your flow in a development NiFi setup. You run it, test it, debug it, fix it.
Once you are sure that the flow runs as expected, promote it to the QA setup and perform similar actions.
Finally, when your flow passes QA, promote it to the production setup. Have stringent policies set so that no one except the support team or the admin has access to make changes to the flow(s).
In other words, you don't have to rely on the REST API (even the UI changes are done through internal REST API calls) or disable the web UI, if you follow a proper dev-qa-prod promotion process.
On a side note, you can leverage NiFi Registry to manage the dev-qa-prod lifecycle.

Is there a way to capture NiFi API calls that the UI makes?

Since the NiFi GUI is really making API calls under the hood, is there any way to capture those requests or logs? I've been using Chrome Dev Tools. Just wondering if there is a way to capture this within NiFi for governance purposes.
Chrome Dev Tools is the best bet for seeing the actual API calls.
For auditing purposes there is something a little different: from the menu in the top right there is "Flow Configuration History", which shows every change that has been made to the flow and who made it (when running a secure instance).
The flow configuration history is also available through the ReportingTask API if you wanted to implement a custom reporting task to push these events somewhere.
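If you would rather pull the same audit trail programmatically without writing a reporting task, the history is also exposed over REST. A minimal Python sketch, assuming field names from recent NiFi versions (verify against your version's docs), with host and token as placeholders:

import requests

NIFI_API = "https://nifi.example.com:8443/nifi-api"  # hypothetical host
TOKEN = "access-token"                               # hypothetical bearer token

resp = requests.get(f"{NIFI_API}/flow/history",
                    params={"offset": 0, "count": 100},  # page through actions
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
for entity in resp.json()["history"]["actions"]:
    a = entity["action"]  # who did what, to which component, and when
    print(a["timestamp"], a["userIdentity"], a["operation"], a["sourceName"])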

Do Different CRM Orgs Running On The Same Box Share The Same App Domain?

I'm doing some in memory Caching for some Plugins in Microsoft CRM. I'm attempting to figure out if I need to be concerned about different orgs populating the same cache:
// In Some Plugin
var settings = Singleton.GetCache["MyOrgSpecificSetting"];
// Use Org specific cached Setting:
or do I need to do something like this to be sure I don't cross-contaminate settings:
// In Some Plugin
var settings = Singleton.GetCache[GetOrgId() + "MyOrgSpecificSetting"];
// Use Org specific cached Setting:
I'm guessing this would need to be factored in for Custom Activities in the Async Workflow Service as well?
Great question. As far as I understand, you would run into the issue you describe if you stored static data and your assemblies were not registered in Sandbox Mode, so you would have to create some way to uniquely qualify the reference (as your second example does).
However, this goes against Microsoft's best practices for Plugin/Workflow Activity development: a plugin should not rely on state beyond what is passed into it. Here is what it says on MSDN, found HERE:
The plug-in's Execute method should be written to be stateless because the constructor is not called for every invocation of the plug-in. Also, multiple system threads could execute the plug-in at the same time. All per invocation state information is stored in the context, so you should not use global variables or attempt to store any data in member variables for use during the next plug-in invocation unless that data was obtained from the configuration parameter provided to the constructor.
So the ideal way to manage caching would be to use either one or more CRM records (likely custom) or a separate service to cache this data.
Synchronous plugins for all organizations on a CRM front-end server run in the same AppDomain, so your second approach will work. Unfortunately, the async service runs in a separate process, from which it is not possible to access your in-proc cache.
I think it's technically impossible for Microsoft NOT to give each CRM organization at least its own AppDomain, let alone an AppDomain per loaded assembly. I'm trying to imagine how multiple versions of a plugin assembly could be deployed to multiple organizations yet loaded and executed in the same AppDomain, and I can't think of a realistic way. But that may be my lack of imagination.
I think your problem lies more in concurrency (multi-threading) than in sharing the same plugin across organizations. @BlueSam quotes Microsoft, which seems to say that multiple instances of the same plugin can live in one AppDomain. Make sure multiple threads can concurrently read/write your in-memory cache and you'll be fine. And if you really, really want to be sure, prepend the cache key with the OrgId, as in your second example.
I figure you'll be able to implement a concurrent cache, so I won't go into detail there.
