Elsa Workflow steps to call a .NET assembly/code - elsa-workflows

I have been exploring Elsa workflows for one of our use cases and am wondering if the following is possible:
I'm looking for a simplified approach where I can feed the workflow steps in JSON/YAML format to the workflow builder and, within those steps, provide pointers that call my own .NET assembly (or inline .NET code), pass an object (by reference) across all the steps, and accumulate the result until the end of the workflow.
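For illustration, here is a minimal, framework-free C# sketch of the pattern being described: a registry of named steps resolved from a JSON definition, all mutating one shared context object so the result accumulates across steps. The names (`IWorkflowStep`, `WorkflowRunner`, `AddLineTotal`) are hypothetical, and this is not Elsa's actual API; Elsa's own custom-activity model would be where such steps plug in.

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Hypothetical step contract: every step reads and mutates one shared
// context, so results accumulate across the workflow. This is NOT Elsa's
// API -- just an illustration of the mechanics the question asks about.
public interface IWorkflowStep
{
    void Execute(IDictionary<string, object> context);
}

public class AddLineTotalStep : IWorkflowStep
{
    public void Execute(IDictionary<string, object> context)
    {
        // Accumulate a running total in the shared context.
        var total = context.TryGetValue("total", out var t) ? (decimal)t : 0m;
        context["total"] = total + 42m; // stand-in for real business logic
    }
}

public static class WorkflowRunner
{
    // Map step names, as they appear in the JSON definition, to .NET code.
    private static readonly Dictionary<string, Func<IWorkflowStep>> Registry =
        new Dictionary<string, Func<IWorkflowStep>>
        {
            ["AddLineTotal"] = () => new AddLineTotalStep(),
        };

    public static IDictionary<string, object> Run(string json)
    {
        // Here a definition is just an ordered list of step names;
        // a real one would carry per-step configuration as well.
        var steps = JsonSerializer.Deserialize<string[]>(json)!;
        var context = new Dictionary<string, object>(); // shared, by reference
        foreach (var name in steps)
            Registry[name]().Execute(context);
        return context;
    }
}
```

Running `WorkflowRunner.Run("[\"AddLineTotal\",\"AddLineTotal\"]")` leaves total = 84 in the context. Because the context is a reference type, every step sees the same instance, which is effectively the "pass by ref and accumulate" behavior the question asks for.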

Related

Why dataweave over template engines like Velocity/Freemarker/Thymeleaf

I see broad adoption of DataWeave, which I feel is more of a transformation library, just like FreeMarker or Velocity.
With DataWeave, a change in transformation logic requires a change in code. Template engines became popular in the first place precisely to separate logic from code, so that transformation logic can change without rebuilding/repackaging the application (less deployment hassle).
Can anyone point out a few reasons why one would prefer DataWeave?
TL;DR: If you're looking for a template engine for things like static websites, DataWeave definitely isn't the right choice. Use the right tool for the job. Also, while you can use DataWeave outside of Mule, I don't think I've seen anyone adopt DataWeave who hasn't adopted MuleSoft.
A few things to consider (and most of these I'm stating in the context of developing Mule applications):
These template engines are, typically, for outputting static text. If you're using one to output structured data rather than something like an HTML page, you're probably doing it wrong. They aren't going to return structured data; they return text. If you're at the very end of your flow and you're going to send that output back out of the API or to a file, that's fine, I suppose. But if you want to actually work with that output, you'll have to convert the plain text into an actual object, introducing a lot of extra steps when you could have just used DataWeave in the first place.

DataWeave is especially beneficial when you want to do things like streaming because you're processing large payloads. DataWeave can understand JSON, XML, and CSV (the three most common data types I see) in a streamed format without any additional work, making it very easy to create efficient applications.

The big difference between a template engine and a data transformation language is that one is for outputting text using structured data as input, and the other is for working with structured data on the input and outputting structured data that you can continue to work with. There is a reason that almost all of the template engine docs talk about building websites and not things like integrations.
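To make that difference concrete, here is a small C# sketch (the `Order` type and both methods are hypothetical, not tied to any particular engine): the template path hands back text that has to be re-parsed before you can keep working with it, while the transformation path stays in structured data the whole way.

```csharp
using System;
using System.Text.Json;

public record Order(string Id, decimal Total);

public static class TemplateVsTransform
{
    // Template-engine style: the output is text. To keep working with it,
    // you must re-parse it into an object -- an extra, avoidable step.
    public static Order ViaTemplate(string id, decimal total)
    {
        string rendered = FormattableString.Invariant(
            $"{{\"Id\":\"{id}\",\"Total\":{total}}}"); // the "rendered template"
        return JsonSerializer.Deserialize<Order>(rendered)!; // re-parse the text
    }

    // Transformation style: structured data in, structured data out.
    // The result is immediately usable by the next step in the flow.
    public static Order ViaTransform(string id, decimal total) => new Order(id, total);
}
```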
The DataWeave engine is, as Aled indicated, built into the Mule runtime, and deeply so. You can use DataWeave in any field of any connector by default, even fields that don't have the f(x) button, because it's built into the runtime. This makes DataWeave what you could consider a first-class citizen within Mule, unlike something you can only utilize via connectors or by invoking Java bridges/libraries, which you do via DataWeave or a long series of connector operations.
The benefits you listed are also not things you can't do with DataWeave. You can very easily templatize and externalize DataWeave. For example, I have several DataWeave libraries in my Maven repo that I can include as dependencies. I've built several transformation services that use databases with DataWeave in order to do transformation, allowing me to change those transformations without modifying the app. You can also use dynamic DataWeave, where you use a template system to load specific parts of the script before running it. I've even taken it a step further and written a generic DataWeave script that I can use to do basic mappings without writing DataWeave; this allowed me to wrap a web UI around things pretty easily.
I wouldn't use DataWeave outside of MuleSoft unless you're a MuleSoft shop. If you are a MuleSoft shop, using the CLI to run your scripts, the same way you do with most interpreted languages, works fairly nicely, especially since you likely already have in-house expertise in DataWeave. The language is still niche enough that unless you've already adopted it for use in Mule applications, I don't see any advantage in using it.
Docs / basic examples:
https://github.com/mulesoft-labs/data-weave-native
https://docs.mulesoft.com/mule-runtime/4.3/parse-template-reference
https://docs.mulesoft.com/mule-runtime/4.3/dataweave-create-module
https://github.com/mikeacjones/transform-system-api
Because it is the expression and transformation language embedded in the Mule runtime. If you are using Mule, it is also integrated with Anypoint Studio, the IDE.
Outside of Mule applications I don't think you can use DataWeave easily. You might want to go with alternatives.

Update design workflow in Talend at runtime

I am very new to the Talend ETL tool.
I have a very basic question: can I update the design workflow and transformations in the Talend ETL tool at runtime?
I mean, suppose my application is running on a server. Now I want to change the design workflow of the running application, so that the application is updated to the new design workflow at runtime. Similarly, I want to change the transformation logic at runtime. I believe MuleSoft provides this capability.
I would appreciate your help. Thanks in advance.
As @Jim Macaulay said in a comment, it depends on what you want to change.
Is it the columns that a row contains?
Then you might need Dynamic Schema, which is a paid feature (or use different flows, see the next part).
Is it simply to alternate between two distinct data sources (or X different flows) based on external stimuli?
Then you could use the If trigger with context variables to use one or the other.

Convert reusable ErrorHandling flow into a connector/component in Mule 4

I'm using Mule 4.2.2 Runtime. We use the error handling generated by APIkit, and we customized it according to customer requirements; it is quite standard across all the upcoming APIs.
We are thinking of converting this into a connector so that it appears as a component/connector in the palette and can be reused across all the APIs instead of being copy-pasted every time.
Something like REST Connect for API specifications, which automatically converts into a connector as soon as it is published in Exchange (https://help.mulesoft.com/s/article/How-to-generate-a-connector-for-a-REST-API-for-Mule-3-x-and-4-x).
Do we have any option like the above for publishing a common Mule flow that will be converted to a component/connector?
If not, which of these best suits my scenario?
1) Using the SDK:
https://dzone.com/articles/mulesoft-custom-connector-using-mule-sdk-for-mule (or)
2) Creating a JAR as mentioned on this page:
https://www.linkedin.com/pulse/flow-reusability-mule-4-nagaraju-kshathriya
Please suggest which one is the best and easiest way in this case. Thanks in advance.
Using the Mule SDK (1) is useful to create a connector or module in Java. Your question wasn't fully clear about what you want to encapsulate in a connector. I understand that what you want is to share parts of a flow as a connector in the palette, which is different. The XML SDK seems to be more in line with that. You will need to make some changes to encapsulate the flow elements, as described in the documentation. That's actually very similar to how REST Connect works.
The method described in (2) is for importing XML flows from a JAR file, but the method described by that link is actually incorrect for Mule 4. The right way to share flows through a library is the one described at https://help.mulesoft.com/s/article/How-to-add-a-call-to-an-external-flow-in-Mule-4. Note that this method doesn't create a connector that can be used from the Anypoint Studio palette.
From personal experience: use a common flow, put it in a repository, and include it as a dependency in your POM file. An even better solution: include it as a flow in the domain app and use it along with your shared HTTPS connector.
I wrote a lot of Java-based custom components. I liked them a lot and was proud of them, but the transition from Mule 3 to Mule 4 killed most of them. Even in Mule 4, MuleSoft periodically makes changes that render components incompatible with the runtime.

Read CDPOS/CDHDR SAP tables using VBScript

I am trying to read the SAP change log using RFC, with VBScript as a buffer.
I know that I need to use the CHANGEDOCUMENT_READ_HEADERS and CHANGEDOCUMENT_READ_POSITIONS functions to do this, but I have not found any guidance on how to do this properly with VBScript.
I have already found out how to read normal tables using BBP_RFC_READ_TABLE, but it doesn't work with CDPOS...
Any ideas?
First, if you want to use VBScript to integrate with SAP, you will go through the RFC channel using the SAP NWRFC library or SAP .NET Connector 3.0, and the SAP functions or BAPIs you call must be remote-enabled. Unfortunately, the two functions CHANGEDOCUMENT_READ_HEADERS and CHANGEDOCUMENT_READ_POSITIONS are not remote-enabled. I can imagine CDPOS is difficult for you, because CDPOS has wide fields which cannot be processed by BBP_RFC_READ_TABLE.
Once we are aligned on the objective challenges, there are two options to help you move forward:
Write your own custom "Z" function module, which is remote-enabled, and call CHANGEDOCUMENT_READ_HEADERS and CHANGEDOCUMENT_READ_POSITIONS inside that function;
Use a third-party commercial library (our company, AecorSoft, developed such an ADO.NET-compliant library for SAP integration).
I would suggest you explore #1 first. You can follow this blog https://blogs.sap.com/2017/02/09/how-to-use-dotnet-connector-nco-inside-vba/ to get started.
I don't know about BBP_RFC_READ_TABLE, but RFC_READ_TABLE reads CDPOS perfectly well.
If you need a header-based query, you will need two sequential reads: first for the CDHDR headers and then for the positions, constructing the second query from the results of the first.
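As a sketch of those two sequential reads, here is how they could look from .NET using SAP .NET Connector 3.0 (NCo), which VBScript callers often wrap. The connection settings and the selection criteria (OBJECTCLAS = 'MATERIAL', the date) are placeholders; the key point is restricting CDPOS to narrow fields, since RFC_READ_TABLE cannot return rows wider than 512 bytes.

```csharp
using System;
using SAP.Middleware.Connector; // SAP .NET Connector 3.0 (NCo)

public static class ChangeLogReader
{
    public static void Main()
    {
        // Placeholder connection settings -- substitute your own system.
        var parms = new RfcConfigParameters();
        parms.Add(RfcConfigParameters.Name, "DEV");
        parms.Add(RfcConfigParameters.AppServerHost, "sap-host");
        parms.Add(RfcConfigParameters.SystemNumber, "00");
        parms.Add(RfcConfigParameters.Client, "100");
        parms.Add(RfcConfigParameters.User, "RFC_USER");
        parms.Add(RfcConfigParameters.Password, "secret");
        RfcDestination dest = RfcDestinationManager.GetDestination(parms);

        // 1st read: change-document numbers from CDHDR (headers).
        IRfcFunction readHeaders = dest.Repository.CreateFunction("RFC_READ_TABLE");
        readHeaders.SetValue("QUERY_TABLE", "CDHDR");
        IRfcTable headerFields = readHeaders.GetTable("FIELDS");
        headerFields.Append();
        headerFields.SetValue("FIELDNAME", "CHANGENR"); // only the key we need
        IRfcTable headerOptions = readHeaders.GetTable("OPTIONS");
        headerOptions.Append();
        headerOptions.SetValue("TEXT", "OBJECTCLAS = 'MATERIAL' AND UDATE >= '20230101'");
        readHeaders.Invoke(dest);

        foreach (IRfcStructure headerRow in readHeaders.GetTable("DATA"))
        {
            string changenr = headerRow.GetString("WA").Trim();

            // 2nd read: matching positions from CDPOS, built from the first
            // result. Select only narrow columns -- CDPOS has wide value
            // fields that overflow RFC_READ_TABLE's 512-byte row limit.
            IRfcFunction readItems = dest.Repository.CreateFunction("RFC_READ_TABLE");
            readItems.SetValue("QUERY_TABLE", "CDPOS");
            readItems.SetValue("DELIMITER", "|");
            IRfcTable itemFields = readItems.GetTable("FIELDS");
            foreach (string field in new[] { "TABNAME", "FNAME", "VALUE_NEW", "VALUE_OLD" })
            {
                itemFields.Append();
                itemFields.SetValue("FIELDNAME", field);
            }
            IRfcTable itemOptions = readItems.GetTable("OPTIONS");
            itemOptions.Append();
            itemOptions.SetValue("TEXT", "CHANGENR = '" + changenr + "'");
            readItems.Invoke(dest);

            foreach (IRfcStructure itemRow in readItems.GetTable("DATA"))
                Console.WriteLine(itemRow.GetString("WA"));
        }
    }
}
```

The same pattern also covers the custom "Z" wrapper suggested in the earlier answer: create that function by its Z-name with CreateFunction and read its exporting tables instead.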

Using SpecFlow to drive outside-in development on .NET MVC 3 based projects

I want to do ATDD with TDD and DDD, and I want to first discover the behaviors (using mocks) of a domain model (e-commerce in my example).
You can imagine that in DDD layering we can have application services calling domain services, repositories, or other services, plus non-business-logic code (only tasks related to the application).
Here is the text I am trying to understand:
How to use mocks to discover the behaviour of my e-commerce domain, and then move into more granular TDD development to implement the desired behaviour.
This is an excerpt from another question (as an answer).
BDD, what's a feature?
"Pick whatever task that you need to implement, open a blank text file and try to explain using simple sentences the behavior. Every sentence should start with one of three keywords: given, when and then. Using your favorite BDD framework write the code that will parse these sentences and stimulate the application to get into the start state (given), execute some commands (when) and assert the transitioned state (then). Application code may start from mere mocks. Replace gradually those mocks with gradually built code and grow your application with higher confidence and quality levels."
Can someone provide some concrete examples of starting with mocks (Rhino Mocks, Moq) using two approaches:
1. Driving ATDD via controller actions, and
2. Using WatiN (Page Objects, WatiN MVCContrib extensions) or Selenium.
If I am using no. 2, will I be able to see some example data when I visit pages myself and perform actions ("when" I do something: navigate, post data), and validate the results of those actions?
To fully understand the nature of my question please read this:
http://jockeholm.wordpress.com/2010/02/14/combining-tddbdd-with-ddd/
Especially Steps 3. and 4.
I will provide the text for step 3:
3.[BDD/ATDD] For each test scenario, implement an executable example that fails, since that behaviour is not supported by the system. Then, use outside-in development, with an extensive use of mock objects, to flesh out the behavior specified in the executable example.
Thanks,
Rad
This may help:
http://msdn.microsoft.com/en-us/magazine/dd882516.aspx
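For approach 1 (driving ATDD through controller actions), here is a minimal sketch assuming SpecFlow with NUnit and Moq; the `CatalogController`, `IProductRepository`, and the scenario wording are all hypothetical. The Given step sets the start state with a mock, the When step executes the controller action, and the Then step asserts on the transitioned state, exactly as in the excerpt above.

```csharp
using System.Web.Mvc;      // ASP.NET MVC 3
using Moq;                 // mocking framework
using NUnit.Framework;     // assertions
using TechTalk.SpecFlow;   // SpecFlow bindings

// Feature file (Gherkin) that SpecFlow parses into the steps below:
//   Scenario: Viewing product details
//     Given a product with SKU "ABC-1" priced at 9.99
//     When I view the details page for "ABC-1"
//     Then I should see a price of 9.99

// Hypothetical domain contract, discovered outside-in. The mock lets us
// specify its behaviour before any real implementation exists.
public interface IProductRepository
{
    Product FindBySku(string sku);
}

public class Product
{
    public string Sku { get; set; }
    public decimal Price { get; set; }
}

// Hypothetical controller under test (approach 1: drive the action directly).
public class CatalogController : Controller
{
    private readonly IProductRepository _products;
    public CatalogController(IProductRepository products) { _products = products; }

    public ViewResult Details(string sku)
    {
        return View(_products.FindBySku(sku));
    }
}

[Binding]
public class ProductDetailsSteps
{
    private Mock<IProductRepository> _repository;
    private ViewResult _result;

    [Given(@"a product with SKU ""(.*)"" priced at (.*)")]
    public void GivenAProduct(string sku, decimal price)
    {
        // "Given": put the system into its start state using a mock.
        _repository = new Mock<IProductRepository>();
        _repository.Setup(r => r.FindBySku(sku))
                   .Returns(new Product { Sku = sku, Price = price });
    }

    [When(@"I view the details page for ""(.*)""")]
    public void WhenIViewDetails(string sku)
    {
        // "When": execute the command -- here, a controller action.
        var controller = new CatalogController(_repository.Object);
        _result = controller.Details(sku);
    }

    [Then(@"I should see a price of (.*)")]
    public void ThenIShouldSeePrice(decimal expected)
    {
        // "Then": assert on the transitioned state via the view model.
        var model = (Product)_result.ViewData.Model;
        Assert.AreEqual(expected, model.Price);
    }
}
```

Approach 2 keeps the same step-definition shape; only the When/Then bodies change to drive a browser (WatiN or Selenium) against a running site seeded with known data, which is also why you can browse that example data yourself.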
