I'm using the Mule 4.2.2 runtime. We use the error handling generated by APIkit, customized according to the customer's requirements, and it is quite standard across all our upcoming APIs.
We are thinking of converting this into a connector so that it appears as a component/connector in the palette and can be reused across all the APIs instead of being copy-pasted every time.
Something like REST Connect for API specifications, which automatically converts a specification into a connector as soon as it is published to Exchange (https://help.mulesoft.com/s/article/How-to-generate-a-connector-for-a-REST-API-for-Mule-3-x-and-4-x).
Is there any option like the above for publishing a common Mule flow so that it is converted into a component/connector?
If not, which of these approaches suits my scenario best?
1) Using the SDK: https://dzone.com/articles/mulesoft-custom-connector-using-mule-sdk-for-mule, or
2) Creating a JAR as described here: https://www.linkedin.com/pulse/flow-reusability-mule-4-nagaraju-kshathriya
Please suggest which is the best and easiest way in this case. Thanks in advance.
Using the Mule SDK (1) is useful for creating a connector or module in Java. Your question wasn't fully clear about what you want to encapsulate in a connector. I understand that what you want is to share parts of a flow as a connector in the palette, which is different. The XML SDK seems more in line with that. You will need to make some changes to encapsulate the flow elements, as described in the documentation. That's actually very similar to how REST Connect works.
The method described in (2) is for importing XML flows from a JAR file, but the approach in that link is actually incorrect for Mule 4. The right way to share flows through a library is the one described at https://help.mulesoft.com/s/article/How-to-add-a-call-to-an-external-flow-in-Mule-4. Note that this method doesn't create a connector that can be used from the Anypoint Studio palette.
From personal experience: use a common flow, put it in a repository, and include it as a dependency in the POM file. An even better solution is to include it as a flow in a domain app and use it along with your shared HTTPS connector.
I wrote a lot of Java-based custom components. I liked them a lot and was proud of them, but the transition from Mule 3 to Mule 4 killed most of them. Even in Mule 4, MuleSoft periodically makes changes that render components incompatible with the runtime.
We have existing XQuery files in Oracle Service Bus projects. We want to reuse these XQuery files in IIB projects, which would save a lot of development effort since so many XQuery files are already available across different projects. We have created a PoC that uses the Saxon processor library to execute the XQuery files in a Java compute node, passing the input message available in the MbMessageAssembly to an XQPreparedExpression; this works fine and returns the expected response. My question is: does this approach have any performance impact? Could you please suggest whether it causes any issues or whether it is okay to follow this approach?
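For reference, a minimal standalone sketch of the approach described above, assuming Saxon's XQJ driver (SaxonXQDataSource) is on the classpath; inside IIB the input XML would come from the MbMessageAssembly rather than a hard-coded string, and the query file name and sample XML here are illustrative:

```java
import java.io.FileInputStream;
import java.util.Properties;
import javax.xml.xquery.XQConnection;
import javax.xml.xquery.XQConstants;
import javax.xml.xquery.XQDataSource;
import javax.xml.xquery.XQPreparedExpression;
import javax.xml.xquery.XQResultSequence;
import net.sf.saxon.xqj.SaxonXQDataSource;

public class XQueryPoc {
    public static void main(String[] args) throws Exception {
        XQDataSource ds = new SaxonXQDataSource();
        XQConnection conn = ds.getConnection();
        try (FileInputStream xq = new FileInputStream("transform.xq")) {
            // Compile once; a prepared expression can be re-executed per message
            XQPreparedExpression expr = conn.prepareExpression(xq);
            // Bind the inbound XML as the context item of the query
            expr.bindDocument(XQConstants.CONTEXT_ITEM,
                    "<order><amount>42</amount></order>", null, null);
            XQResultSequence result = expr.executeQuery();
            System.out.println(result.getSequenceAsString(new Properties()));
        } finally {
            conn.close();
        }
    }
}
```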
I've never used XQuery, so my answer might not be fully accurate, but there are recommendations from IBM on how to use external classes like JAXB inside IIB, so they are not especially against such a scenario.
Of course, it will most likely perform worse than the same code written in optimized ESQL, but in your case it makes sense to run a proof of concept: measure the average execution time on Oracle Service Bus, compare it to the IIB one, and then you should be able to decide whether to go with that solution or rewrite everything.
I'm new to MarkLogic and MLCP. I'm working on MarkLogic 9.0-8. I want to use MLCP to load content, but since some parameters may need to be built dynamically based on the content, does anyone know if it is possible to call MLCP from a Java application?
Thanks a lot,
Helen
MarkLogic provides two Java-based ways to load content: MLCP and DMSDK. MLCP is intended to be used as a command-line tool (and I believe that's the only supported use).
The Data Movement SDK, on the other hand, is specifically intended to offer very similar functionality in the form of a JAR, making it easy to access from a Java application. I encourage you to look into using that instead; see the sketch after the resources below.
tutorial
JavaDoc
Asynchronous Multi-Document Operations
12-minute video intro to DMSDK
common tasks made easier through ml-gradle
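For illustration, a minimal sketch of loading one document through DMSDK's WriteBatcher; the host, port, credentials, batch settings, and document URI are placeholders to adapt to your cluster:

```java
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.datamovement.DataMovementManager;
import com.marklogic.client.datamovement.WriteBatcher;
import com.marklogic.client.io.StringHandle;

public class DmsdkLoad {
    public static void main(String[] args) {
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("admin", "admin"));
        DataMovementManager dmm = client.newDataMovementManager();

        WriteBatcher batcher = dmm.newWriteBatcher()
                .withBatchSize(100)   // tune for your document sizes
                .withThreadCount(4);
        dmm.startJob(batcher);

        // URIs and content can be computed at runtime, which is the part
        // that is awkward to do through MLCP's command-line interface
        batcher.add("/load/doc-1.xml", new StringHandle("<doc>hello</doc>"));

        batcher.flushAndWait();   // drain any remaining queued documents
        dmm.stopJob(batcher);
        client.release();
    }
}
```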
What is the best end-user language for data transformation (ETL), ideally something that can run on a JVM (with an LGPL-like license)?
By end-user I mean users who are IT-related but might not be experts in OO or the like.
You can try CloverETL. It is a Java-based ETL tool. You can either define transformations through its visual designer or hand-code them directly in the XML files that store the transformation definitions.
It also has its own transformation language, CTL, which is used inside transformation components. You do not need to be an expert in OOP to use it.
Rust is the best language. In fact, companies can now evolve from ETL to STL (Stream, Transform, and Load) with a technology like InfinyOn Cloud, a fully managed Fluvio service. This solution brief explains how the financial services industry is leveraging this technology:
https://www.infinyon.com/resources/real-time-data-trasformation/
What parts of your application are not coded?
I think one of the most obvious examples would be DB credentials: it's considered bad practice to hard-code them. In most situations it is easy to decide whether something should be externalized or coded. For me the rules are simple. A part of the application should be externalized if:
it can and should be changed by a non-developer, but not so often that it belongs in application settings defined in the UI (DB credentials, service URLs, etc.)
it does not require a programming language and seems unnatural when coded (localization)
Do you have anything to add?
This is a little related to this question about Spring configuration.
Spring configuration seems a less obvious example to me, because in my practice it is never modified by anyone except the developer. And the road of externalization can take you far, up to the entire project being "configured" rather than coded, so where do you stop?
So please post some examples from your experience where you benefited from having something configured rather than coded, like dependency injection configuration in Spring.
And if you use Spring: how often is configuration changed without recompiling?
Anything that needs to differ between different deployments of your application. That is, anything specific to the environment.
Examples include:
Database connection strings
URLs for web or WCF services
Logging configuration
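For what it's worth, a minimal sketch of reading such environment-specific values from an external properties file at startup; the file path and keys are illustrative:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;

public class EnvConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // The file lives outside the deployed artifact, so each environment
        // (dev/test/prod) carries its own copy and no rebuild is needed
        try (InputStream in = Files.newInputStream(Paths.get("conf/app.properties"))) {
            props.load(in);
        }
        String dbUrl      = props.getProperty("db.url");
        String serviceUrl = props.getProperty("service.url");
        String logLevel   = props.getProperty("log.level", "INFO");
        System.out.printf("db=%s service=%s log=%s%n", dbUrl, serviceUrl, logLevel);
    }
}
```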
Any information your application uses that is "data" and that could change depending on where it is installed. Things like:
SMTP mail server used to send e-mails
Database connect strings
Paths to file locations / folders used by the app
FTP servers & connect info
Active Directory servers used for authentication
Any links displayed in the application to external information sources
Warning limit values
I've even put the RegEx filters used to limit the allowable characters for data entry fields.
Besides the obvious changing stuff (paths, servers, ports, and so on), some people argue that you should be able to easily change whatever might reasonably change. For instance, say you have a generic engine that operates on the business logic (a rule engine).
You would then define the rules in a "configuration file", which ends up being nothing less than programming in a DSL instead of in the general-purpose language. The benefits are that it is closer to the domain, so it is easier and more maintainable, and that you can easily change things that would otherwise demand a new build.
The main argument behind this is that things you assumed would never change always end up changing nonetheless, so you better be prepared.
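As a toy illustration of that idea, here is a sketch where a rules file acts as a tiny DSL; the rule syntax (field, operator, threshold, action) is entirely hypothetical, e.g. a line like "amount > 10000 -> FLAG_FOR_REVIEW":

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Rules live in a text file, so business behavior changes without a new build
public class RuleEngine {

    static final class Rule {
        final String field, op, action;
        final double value;
        Rule(String field, String op, double value, String action) {
            this.field = field; this.op = op; this.value = value; this.action = action;
        }
    }

    static List<Rule> load(Path file) throws IOException {
        List<Rule> rules = new ArrayList<>();
        for (String line : Files.readAllLines(file)) {
            if (line.isBlank() || line.startsWith("#")) continue;
            String[] sides = line.split("->");              // condition -> action
            String[] cond = sides[0].trim().split("\\s+");  // field op value
            rules.add(new Rule(cond[0], cond[1],
                    Double.parseDouble(cond[2]), sides[1].trim()));
        }
        return rules;
    }

    // Returns the action of the first matching rule, if any
    static Optional<String> evaluate(List<Rule> rules, Map<String, Double> facts) {
        for (Rule r : rules) {
            double v = facts.getOrDefault(r.field, 0.0);
            boolean hit = (">".equals(r.op) && v > r.value)
                       || ("<".equals(r.op) && v < r.value)
                       || ("=".equals(r.op) && v == r.value);
            if (hit) return Optional.of(r.action);
        }
        return Optional.empty();
    }
}
```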
Paths and server names/addresses come to mind.
I agree with your two conditions, which is why I:
Rarely include a config file as part of a Windows or Windows Mobile application (web apps yes).
If I did include a config file meant to be tweaked by end users, it certainly wouldn't be XML.
Employee emails/names since employees can come and go... (you should typically try to keep them out of an application though)
Configuration files should include:
deployment details
DB credentials
file paths
host names
anything that is used in many places but that may change
contact email addresses
options that aren't in the GUI
The last one is a bit open-ended, but very important. I've found it very useful to foresee variables that the client may, in the future, want to change. If changes are infrequent, I or they can edit the config file. If it becomes a frequent thing, it's trivial to add the option to the GUI, which isn't hardcoded.
I would also add encryption keys (which themselves should be encrypted)...
Basically, the rule of thumb is: information the application needs BEFORE its regular, functional operation, i.e. data that it MUST have on hand (local and not networked).
Note that this data should not be dynamically changing or large in volume; otherwise it belongs in the database.
With Spring apps I actually distinguish between two types of configuration:
Items externalized into property files, which are "deploy time" or environment-specific concerns: server IPs/addresses, file system locations, etc.
Spring XML configuration, which can do lots of things, like indicating the overall application structure or applying behavior via AOP.
I use Spring to wire all the beans in a J2SE application that has no GUI (a transactional switch). That way it's very easy for me to have different configurations in each deployment (we have this thing running in different countries), without having to code anything different.
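A minimal sketch of how such per-deployment wiring can be selected at startup; the system property, file names, and bean name here are illustrative, not the author's actual setup:

```java
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class SwitchBootstrap {
    public static void main(String[] args) {
        // Pick the Spring wiring per deployment, e.g. -Ddeploy.country=ar
        // loads switch-ar.xml; the code base stays identical everywhere
        String country = System.getProperty("deploy.country", "default");
        ClassPathXmlApplicationContext ctx =
                new ClassPathXmlApplicationContext("switch-" + country + ".xml");
        try {
            Runnable app = (Runnable) ctx.getBean("transactionSwitch");
            app.run(); // behavior differs only through the XML wiring
        } finally {
            ctx.close();
        }
    }
}
```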
Another thing I like to do is manage all the SQL statements separately from the code when I use plain JDBC (or Spring JDBC): in a properties file, XML, or something similar, sometimes even as String properties on the beans that will use the statement (when only one bean uses it, such as a DAO).
I am going to use Spring JDBC or vanilla JDBC for data persistence. We have decided to externalize all the SQL from the Java code so that query tuning and optimization are more manageable: we don't need to disturb the Java code.
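A minimal sketch of that idea with plain JDBC; the properties file name, key, and H2 in-memory database are illustrative (the table setup exists only to make the example self-contained):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class SqlCatalogDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical queries.properties, one statement per key, e.g.:
        //   order.findById=SELECT id, total FROM orders WHERE id = ?
        Properties sql = new Properties();
        try (InputStream in = Files.newInputStream(Paths.get("conf/queries.properties"))) {
            sql.load(in);
        }
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo", "sa", "")) {
            // Demo setup only, so the externalized query has a table to run against
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE orders(id BIGINT PRIMARY KEY, total DECIMAL(10,2))");
                st.execute("INSERT INTO orders VALUES (42, 99.50)");
            }
            // Tuning a statement now means editing the properties file, not the Java code
            try (PreparedStatement ps =
                     conn.prepareStatement(sql.getProperty("order.findById"))) {
                ps.setLong(1, 42L);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("id") + " -> " + rs.getBigDecimal("total"));
                    }
                }
            }
        }
    }
}
```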