What APIs can be used inside a transaction processor? - hyperledger-composer

I have questions about which APIs are available inside Hyperledger Composer's transaction processor. The sample only shows obtaining an assetRegistry and then calling registry.update(). I expect that the transaction processor is what we would call a smart contract. Suppose the transaction is supposed to change an asset's owner: I want to validate the update so that the new owner actually exists. Can I use a participantRegistry.get() operation inside the transaction processor? I checked that the resolve() function is not available, so I suspect the transaction processor exposes the APIs of the Composer runtime. But there is no documentation about what kind of APIs are available to transaction processors.

The runtime APIs are in the runtime module. However, the runtime module appears to have been removed from the API documentation published on the website; please create an issue for that.
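For reference, the runtime API exposed to transaction processor functions includes helpers such as getAssetRegistry(), getParticipantRegistry(), getFactory() and emit(), and the registries they return support operations such as get(), exists(), add(), update() and remove(). Below is a minimal sketch of the owner-change check described above; the org.example.trading namespace and the Commodity/Trader types are hypothetical placeholders, so adjust them to your own model (recent Composer releases accept async functions; older ones use promise chains).

    /**
     * A sketch of a transaction processor that refuses to change ownership
     * unless the new owner participant actually exists.
     * The namespace and types below are illustrative only.
     * @param {org.example.trading.Trade} tx - the incoming transaction
     * @transaction
     */
    async function trade(tx) {
        const NS = 'org.example.trading';

        // getParticipantRegistry() is part of the runtime API available
        // inside transaction processor functions.
        const traderRegistry = await getParticipantRegistry(NS + '.Trader');
        const newOwnerId = tx.newOwner.getIdentifier();
        const ownerExists = await traderRegistry.exists(newOwnerId);
        if (!ownerExists) {
            throw new Error('New owner ' + newOwnerId + ' does not exist');
        }

        // Change the owner and persist the asset, as in the sample.
        tx.commodity.owner = tx.newOwner;
        const assetRegistry = await getAssetRegistry(NS + '.Commodity');
        await assetRegistry.update(tx.commodity);
    }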

Related

Apache Nifi Custom Processor dependency on another processor

As per my requirements, I need to create a NiFi custom processor that will structure a message and then write it to Splunk.
I am following the link below for creating a custom processor, but I am not clear on how to make use of the invokeHttpProcessor/putSplunk processors within my custom processor code. Any suggestions are appreciated:
https://help.syncfusion.com/data-integration/how-to/create-a-custom-processor
In general, the actual processor implementations like InvokeHttp and PutSplunk are not meant to be subclassed as-is. If there is code that should be available for reuse among processors, please feel free to reach out to the community (via the mailing list, for example) and we can discuss moving such code out to an API.
In the meantime, I'm not sure it would work to declare the implementation NAR as a parent of your custom processor NAR, but you can try that; it may let you subclass the implementations, though it is not recommended.
An alternative is to just copy the code from the processor(s) you want and use that duplicated code directly in your custom processor. There are some maintainability trade-offs there, of course, but if you encapsulate your custom processor away from the duplicated NiFi code, you would just need to keep an eye out for any changes made to the NiFi processor and update your copy accordingly.

Policy changes to specific processor

Good afternoon. I'm able to change the global policies for NiFi through the REST API; however, I'm trying to edit the access policies for an arbitrary processor and have no idea how to do so. Everything on the NiFi REST API website seems to call everything else a component (or maybe I'm misinterpreting...).
Anyway, I appreciate all the help/guidance!
The NiFi UI uses the API behind the scenes to perform every action. You can set policies on process groups, remote process groups, processors, funnels, input & output ports, queues, controller services, and reporting tasks. Collectively, these resources are called "components".
If a policy is not set on a specific component, it inherits the policies set on the parent object (i.e. the process group containing it). You can override these policies directly at a granular level.
To set the policy for a specific component, use the POST /policies API. The easiest way to see the exact API invocation needed is to use your browser's developer tools to record the calls the UI client makes while you perform the action manually, and then reuse those API calls.
There are also other tools which make this process easier, such as the official NiFi CLI Toolkit and the (unofficial but very good) NiPyAPI.
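As a rough illustration of what such a recorded call can look like, the sketch below (TypeScript, using the built-in fetch of Node 18+) creates a "view the component" policy for a single processor. The host, processor UUID, user UUID and token are placeholders, and the exact entity shape varies between NiFi versions, so confirm the payload by recording the UI's own request as described above.

    // Placeholders - replace with your own values.
    const nifiApi = 'https://nifi.example.com:8443/nifi-api';
    const processorId = '00000000-0000-0000-0000-000000000000'; // target processor UUID
    const userId = '11111111-1111-1111-1111-111111111111';      // user to be granted access

    // Create a per-component access policy via POST /policies.
    async function grantViewPolicy(token: string): Promise<void> {
      const response = await fetch(`${nifiApi}/policies`, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${token}`,
        },
        body: JSON.stringify({
          revision: { version: 0 },                 // new policies start at version 0
          component: {
            action: 'read',                         // 'read' here means "view the component"
            resource: `/processors/${processorId}`, // per-component resource path
            users: [{ id: userId }],                // trimmed; record the UI call for the full shape
            userGroups: [],
          },
        }),
      });
      if (!response.ok) {
        throw new Error(`Policy creation failed: ${response.status} ${await response.text()}`);
      }
    }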

Hyperledger Fabric - Can I call an external system during the validation or endorsement phase?

We have a use case where the transaction validation logic is quite complex and requires data from several different sources in order to validate a transaction.
Query: Can we call an external REST service to validate certain data from Hyperledger Fabric, using its pluggable validation feature?
Making an external API call from a Hyperledger Fabric smart contract is technically possible, but it is a risky idea for several reasons:
1) chaincode must be deterministic, and the problem with 'enriching' a transaction using an external API is that the API must return the same result no matter where it is called in the business network, which may very well be running globally, so you need to trust that the answers will all be the same within a time window that is quite a bit wider than a few ms
2) running just one endorser in development and production gets you around that problem, but it weakens consensus a bit and makes it essentially impossible to prove determinism for any given transaction
3) designing for such a weakened system is not a good idea, since inevitably someone will realize that the endorsement policy should be stronger, and then you go right back to the issues in point 1
One way around this issue is to use a distributed external API with versioned data (you might need to write an oracle to provide this facility on top of an API that does not version its data), such that all endorsers also store the external data's current version in the asset record in world state. This ensures the data read is identical and accounts for propagation delays in the oracle network. Because the API data version ends up in the final asset data in world state (more accurately, in the read/write set for the transaction), endorsers that saw different versions of the data in different regions (e.g. due to propagation delays) will fail any multi-endorsement policy. Of course, a client designed for such an environment is free to resubmit a transaction for endorsement to get consensus.
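A minimal chaincode sketch of that pattern follows (TypeScript, fabric-contract-api). The FX-rate oracle, the fetchRate() helper and the trade shape are hypothetical placeholders; the point is simply that the oracle data's version is written into world state, so endorsers that saw different versions produce different write sets and a multi-endorsement policy rejects the transaction.

    import { Context, Contract } from 'fabric-contract-api';

    // Hypothetical versioned oracle response: the value plus the version of
    // the data set it came from.
    interface RateQuote {
      rate: string;
      version: number;
    }

    // Placeholder for the distributed, versioned oracle described above; a real
    // implementation would query the oracle network (e.g. over HTTPS).
    async function fetchRate(pair: string): Promise<RateQuote> {
      throw new Error(`oracle lookup for ${pair} is not wired up in this sketch`);
    }

    export class TradeContract extends Contract {
      public async settleTrade(ctx: Context, tradeId: string, pair: string): Promise<void> {
        // Every endorser calls the oracle independently during endorsement.
        const quote = await fetchRate(pair);

        const trade = {
          tradeId,
          pair,
          rate: quote.rate,
          // Recording the oracle's data version makes it part of the write set.
          // If two endorsers saw different versions (e.g. propagation delay),
          // their write sets differ, the endorsement policy rejects the
          // transaction, and the client can simply resubmit.
          oracleDataVersion: quote.version,
        };

        await ctx.stub.putState(tradeId, Buffer.from(JSON.stringify(trade)));
      }
    }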

Do Different CRM Orgs Running On The Same Box Share The Same App Domain?

I'm doing some in-memory caching for some plugins in Microsoft CRM. I'm attempting to figure out whether I need to be concerned about different orgs populating the same cache:
// In Some Plugin
var settings = Singleton.GetCache["MyOrgSpecificSetting"];
// Use Org specific cached Setting:
or do I need to do something like this to be sure I don't cross-contaminate settings:
// In Some Plugin
var settings = Singleton.GetCache[GetOrgId() + "MyOrgSpecificSetting"];
// Use Org specific cached Setting:
I'm guessing this would need to be factored in for Custom Activities in the AsyncWorkflowService as well?
Great question. As far as I understand, you would run into the issue you describe if you used static data and your assemblies were not registered in Sandbox Mode, so you would have to create some way to uniquely qualify the reference (as your second example does).
However, this goes against Microsoft's best practices for plugin/workflow activity development. A plugin should not rely on any state beyond what is passed into it. Here is what MSDN says:
The plug-in's Execute method should be written to be stateless because the constructor is not called for every invocation of the plug-in. Also, multiple system threads could execute the plug-in at the same time. All per invocation state information is stored in the context, so you should not use global variables or attempt to store any data in member variables for use during the next plug-in invocation unless that data was obtained from the configuration parameter provided to the constructor.
So the ideal way to manage caching would be to use either one or more CRM records (likely custom) or a different service to cache this data.
Synchronous plugins for all organizations within the CRM front end run in the same AppDomain, so your second approach will work. Unfortunately, the async service runs in a separate process, from which it would not be possible to access your in-proc cache.
I think it's technically impossible for Microsoft NOT to implement each CRM organization in at least its own AppDomain, let alone an AppDomain per loaded assembly. I'm trying to imagine how multiple versions of a plugin-assembly are deployed to multiple organizations and loaded and executed in the same AppDomain and I can't think of a realistic way. But that may be my lack of imagination.
I think your problem lies more in concurrency (multi-threading) than in sharing the same plugin across organizations. #BlueSam quotes Microsoft, where they seem to be saying that multiple instances of the same plugin can live in one AppDomain. Make sure multiple threads can concurrently read/write your in-memory cache and you'll be fine. And if you really, really want to be sure, prepend the cache key with the OrgId, as in your second example.
I figure you'll be able to implement a concurrent cache, so I won't go into detail there.

Caching and AOP in Mendix: is there a uniform or standardized approach for server-side caching within a Mendix application?

Using the Mendix Business Modeler to build web-applications is fundamentally different than developing web-applications using technologies like Java/Spring/JSF. But, I'm going to try to compare the two for the sake of this question:
In a Java/Spring based application, I can integrate my application with the 3rd party product Ehcache to cache data at the method level. For example, I can configure Ehcache to store the return value for a given method (with a specific time-to-live). Whenever this method is called, Ehcache will automatically check whether the method has been called previously with the same parameters and whether there is a stored return value in the cache. If so, the method is never actually executed and the cached return value is returned immediately.
I would like to have the same capability within Mendix, but in this case I would be caching Microflow return values. Also, I don't want to be forced to add actions all over the place explicitly telling the Microflow to check the cache. I would like to register my Microflows for caching in one centralized place, or simply flag each Microflow as cached. In other words, this question is just as much about the concept of aspect-oriented programming (AOP) in Mendix as it is about caching: is there a way to hook into Microflow invocation so I can apply pre- and post-execution operations? In my opinion, the same reasons why AOP has its place and purpose in Java exist in Mendix.
The Mendix platform tries to do as much for you as possible; in this case, that means the platform already has an object cache that keeps all objects that need caching.
Internally the Mendix platform uses Ehcache to do that.
However, it is not really possible to influence that cache the way you normally would in Java/Spring. This is due to all the functionality of the Mendix platform, which already tries to cache all objects as efficiently as possible.
Every object you create is always added to the cache. When working with that object it stays in cache until the Platform detects that the specific object can no longer be accessed either through the UI or a microflow.
There are also API calls available that instruct the platform to retain an object in the cache regardless of its usage. But that doesn't provide the flexibility you asked for.
But specifically on your question, my initial response would be: Why would you want to cache a microflow output?
Objects are already cached in memory, and the browser client only refreshes the cache when instructed. Any objects that you are using will be cached.
Also, when looking at most of the microflows that we use, I don't think it is likely that I would want to cache the output instead of re-running the microflow. Given the design of the majority of our microflows, I think it is likely that most of them can return a slightly different output every time you execute them.
There are many listener classes you can subscribe to in the Mendix platform that allow you to trigger something in addition to the default action. But that would require some detailed knowledge of the current behavior.
For example you can override the login action, but if you don't perform all the correct validations you could make the login process less secure.
