I am working on a flow that will, when triggered by an HTTP request, download files from an FTP server. In order to do this on request, instead of on polling, I am using the Mule Requester.
I have found that without the Requester, the FTP connector sets the "incomingFilename" inbound property for each of the files. When used with the Mule Requester, the filename property is not set, so I have no idea which file I am processing... or, in this case, saving to the file system. In the code below I use the 'counter' variable for thefilename in case the filename doesn't come through.
Any idea how to fix this issue? Here is the flow below:
<ftp:connector name="FTPConfig" pollingFrequency="3000" validateConnections="true" doc:name="FTP"></ftp:connector>
<flow name="FileRequestFlow">
<http:listener config-ref="HTTP_Listener_Configuration" path="csvfilesready" allowedMethods="GET" doc:name="HTTP"></http:listener>
<mulerequester:request-collection config-ref="Mule_Requester" resource="ftp://username:pswd@127.0.0.1:21/Incoming?connector=FTPConfig" doc:name="Mule Requester"></mulerequester:request-collection>
<foreach collection="#[payload]" doc:name="For Each">
<set-variable variableName="thefilename" value="#[message.inboundProperties.originalFilename==null ? counter.toString()+'.csv' : message.inboundProperties.originalFilename] " doc:name="Variable"/>
<file:outbound-endpoint path="/IncomingComplete" outputPattern="#[flowVars.thefilename]" responseTimeout="10000" doc:name="File"></file:outbound-endpoint>
<logger message="#['File saved as: ' + payload]" level="INFO" doc:name="Logger"></logger>
</foreach>
<logger message="#[payload]" level="INFO" doc:name="Logger"></logger>
</flow>
UPDATE:
Below is an option for working with the requester; you can, in fact, use request-collection. The key is to realize that it returns a MuleMessageCollection, and to use a Collection Splitter directly after the Requester, which then returns the FTP file messages individually, each with its originalFilename.
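For illustration, a minimal sketch of that arrangement, reusing the endpoints from the flow above (untested; the splitter is the stock Mule 3 collection-splitter):
<mulerequester:request-collection config-ref="Mule_Requester" resource="ftp://username:pswd@127.0.0.1:21/Incoming?connector=FTPConfig" doc:name="Mule Requester"/>
<!-- splits the MuleMessageCollection into one message per FTP file -->
<collection-splitter doc:name="Collection Splitter"/>
<!-- each split message now carries its own originalFilename inbound property -->
<file:outbound-endpoint path="/IncomingComplete" outputPattern="#[message.inboundProperties.originalFilename]" doc:name="File"/>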
After playing with this a while, I have found that with FTP in the Mule Requester you can get the filename only if you use it as a request, not a request-collection.
I have not been able to get the request-collection to work when you need the filenames associated.
So... if you need multiple files, you need to do something like loop on the requester until the payload is null (a hedged sketch of that pattern follows below).
If you have alternate methods please let me know.
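For what it's worth, here is one hedged sketch of that loop, using a recursive flow-ref (untested; the flow name is made up, the endpoints are from the question above):
<flow name="fetchNextFileFlow">
    <mulerequester:request config-ref="Mule_Requester" resource="ftp://username:pswd@127.0.0.1:21/Incoming?connector=FTPConfig" doc:name="Mule Requester"/>
    <choice doc:name="Choice">
        <when expression="#[payload != null]">
            <!-- a single-message request populates originalFilename -->
            <file:outbound-endpoint path="/IncomingComplete" outputPattern="#[message.inboundProperties.originalFilename]" doc:name="File"/>
            <!-- recurse until the requester returns null (no files left) -->
            <flow-ref name="fetchNextFileFlow" doc:name="Fetch next file"/>
        </when>
        <otherwise>
            <logger message="No more files to fetch" level="INFO" doc:name="Logger"/>
        </otherwise>
    </choice>
</flow>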
I have written an alternate FTP connector; it allows you to issue a list command in the flow, followed by a loop that reads the files.
See: https://github.com/rbutenuth/ftp-client-connector/
I'm trying to manually translate a program from Mule 3 to Mule 4, and a lot of the transforms have something like
<dw:input-variable doc:sample="sample_data\json_63.json" variableName="dsRespPayloads"/>
I don't know what the equivalent is in Mule 4, or if there is one. This leads me to a problem where one flow defines a variable and calls another flow, and that second flow tries to transform the message using the variable defined in the first.
In Mule 4 it keeps saying "Property: dsRespPayloads was not found."
and giving me errors over it. Also, the tree on the left just says Unknown for Payload and Attributes.
Any help or explanation about what's going on would be appreciated.
Are you saying that you define a variable in flowOne and then, in flowTwo, you reference this variable?
Something like this?
<flow name="flow1" doc:id="29ba6da8-7458-4f35-adff-1d9db5738fbc" >
<set-variable value="Hello" doc:name="Set Variable" doc:id="055abd42-b240-4113-b08d-dde00b8ea590" variableName="dsRespPayloads"/>
</flow>
<flow name="dataweaveLabFlowTwo" doc:id="ae122a1a-b9d0-490d-a7e4-2c138e5d4c01" >
<logger level="INFO" doc:name="Logger" doc:id="e4f4bf3a-29f9-4d4e-b2c9-d4a86bd2eb29" message='#[" $(vars.dsRespPayloads)"]' />
</flow>
In Mule 4 you cannot set the MIME types for inputs at the transformer level. You need to set them where they are generated: in the connector, or in set-payload or set-variable, before they reach the transformer.
Example:
<set-variable variableName="x" value='{"a":"b", "c":1}' mimeType="application/json" doc:name="Set Variable" />
We have a web application and want to use ftp:inbound-channel-adapter to read files from FTP, create a message, and send it to a channel.
When local-directory is set to local-directory="#servletContext.getRealPath('')}/temp/inFtp", it gives:
The path [{applicationDeployPath}\temp\inFtp] does not denote a properly accessible directory.
The message generated is:
GenericMessage [payload={applicationDeployPath}\temp\inFtp\xyz.stagging.dat, headers={timestamp=1444812027516, id=9b5f1581-bfd2-6690-d4ea-e7398e8ecbd6}]
But the directory is not created; I checked, and I have full permissions there.
In the message I also want to send some more fields in the payload, with values taken from a property file per environment. How can I do that?
<int-ftp:inbound-channel-adapter id="ftpInboundAdaptor"
channel="ftpInboundChannel"
session-factory="ftpClientFactory"
filename-pattern="*.stagging.dat"
delete-remote-files="false"
remote-directory="/stagging/"
local-directory="#servletContext.getRealPath('')}/temp/inFtp"
auto-create-local-directory="true">
<int:poller fixed-rate="1000"/>
</int-ftp:inbound-channel-adapter>
Thanks in advance.
The local-directory is only evaluated once (and created if necessary), at initialization time, not for every message.
I am surprised the context even initializes; this syntax looks bad:
local-directory="#servletContext.getRealPath('')}/temp/inFtp"
If you mean
local-directory="#{servletContext.getRealPath('')}/temp/inFtp"
it would only work if you have some bean called servletContext.
If you mean
local-directory="#servletContext.getRealPath('')/temp/inFtp"
you would need a variable servletContext in the evaluation context.
it gives Message generated is GenericMessage [payload={applicationDeployPath}\temp\inFtp\xyz.stagging.dat, headers={timestamp=1444812027516, id=9b5f1581-bfd2-6690-d4ea-e7398e8ecbd6}]
If it's actually generating that message, it must have created that directory.
It's not clear what you mean by
I want to send some more fields as payload
The payload is a File.
If you want to change the payload to something else containing the file and other fields, use a transformer.
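For example, a hedged sketch of such a transformer; FileEnricher, enrich, enrichedChannel, and app.environment are hypothetical names for your POJO, method, channel, and property:
<int:transformer input-channel="ftpInboundChannel" output-channel="enrichedChannel"
    ref="fileEnricher" method="enrich"/>

<!-- hypothetical POJO whose enrich(File) method returns an object wrapping
     the File together with environment-specific property values -->
<bean id="fileEnricher" class="com.example.FileEnricher">
    <property name="environment" value="${app.environment}"/>
</bean>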
This seems like an obvious request to me so I'm hoping others may have already solved this.
I have application JBoss logs with lots and lots of errors. In order to manage and address these, I'd like to figure out a way to track them. After looking at
How to retrieve unique count of a field using Kibana + Elastic Search
I'm thinking I can use a similar approach.
Per the Elasticsearch docs, it looks like facets have been replaced, so I'm thinking I should dig into the sum aggregation, but I'm not sure yet.
I'm still not sure of the best way to further break down my JBoss log records. The field I'm most interested in is the message field, which has a date/time stamp and a hostname in front of each record. What's the best approach to tackle this? Break the message field down further: ignore the first two elements, then sort and count the next section of the field? I may need to ignore some of the end of the record as well, but I will deal with that next...
I'm pretty new to ELK stack but excited about its possibilities.
Thx.
Joe
Logstash (part of ELK) comes with lots of filtering options. The most useful is Grok. It is best suited to parsing fields out of a long message into {key, value} pairs.
You can also delete or ignore particular data in the message through the different kinds of plugins available in Logstash. You can explore them at http://logstash.net/docs/1.4.2/.
After you send the data to Elasticsearch, you can use the power of Kibana to create a dashboard based on your requirements.
Hence, ELK suits your requirement perfectly.
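For example, a minimal Grok sketch, assuming each record starts with an ISO timestamp and a hostname (the pattern names are standard Grok patterns; adjust them to your actual record layout):
filter {
  grok {
    # timestamp and hostname first, the rest of the line captured as msgbody
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{HOSTNAME:hostname} %{GREEDYDATA:msgbody}" }
  }
}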
The best and easiest way to get your JBoss output into ELK is through a socket connector. There are lots of tutorials, and it will automatically give you your message breakdown for free.
See this for an example: http://blog.akquinet.de/2015/08/24/logstash-jboss-eap/
Please note that I personally have had to change the appenders and use the documentation to get the correct fields. If you are using Elasticsearch 2.0, then update the configuration. For simple debugging, simply output to stdout.
Once you have the socket appenders working correctly you are laughing; go to Kibana and configure the dashboard with whatever aggregation you would like. I would not recommend breaking the message down further, as you would then have a custom message breakdown that will not apply to a standard JBoss implementation. Feel free to add additional key/value pairs such as appname, etc.
SAMPLE:
* jboss-eap-6.4.0.0
* elasticsearch-2.0.0-beta2
* kibana-4.2.0-beta2-windows
* logstash-2.0.0-beta1
Create a file called log4j.conf under the logstash/conf dir, e.g. "C:\_apps\logstash-2.0.0-beta1\conf\log4j.conf", with the content below.
input {
  log4j {
    mode => "server"
    host => "0.0.0.0"
    port => 4712
    type => "log4j"
  }
}
output {
  elasticsearch {
    hosts => "127.0.0.1"
    #cluster => "myAppName"
    index => "logstash-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
Run Logstash with the following command from a prompt within the Logstash dir:
bin\logstash.bat -f conf\log4j.conf
Configuring Appenders:
JBOSS APPENDER
Within the profile:
<custom-handler name="Remotelog4j" class="org.apache.log4j.net.SocketAppender" module="org.apache.log4j">
    <level name="INFO"/>
    <properties>
        <property name="RemoteHost" value="localhost"/>
        <property name="Port" value="4712"/>
        <!--property name="BufferSize" value="1000"/-->
        <!--property name="Blocking" value="false"/-->
    </properties>
</custom-handler>
Within the root logger configuration, define your handlers:
<root-logger>
    <level name="INFO"/>
    <handlers>
        <handler name="CONSOLE"/>
        <handler name="FILE"/>
        <handler name="Remotelog4j"/>
    </handlers>
</root-logger>
Start JBoss and note that your command prompt is printing out all the incoming messages from your standalone JBoss instance.
Configuring Another Application with OLD Log4J
Log4J version log4j-1.2.15.jar
Inside the packaged WAR I created this simple additional log4j appender:
<appender name="log4jSocket" class="org.apache.log4j.net.SocketAppender" module="org.apache.log4j">
<level name="ERROR"/>
<param name="RemoteHost" value="localhost"/>
<param name="Port" value="4712"/>
<param name="threshold" value="ERROR" />
</appender>
Again, add the appender to your application log4j loggers.
<logger name="com.somepackage" additivity="false">
<level value="error"/>
<appender-ref ref="default"/>
<appender-ref ref="event"/>
<appender-ref ref="log4jSocket"/>
</logger>
Now restart your JBoss configuration and deploy/start your application inside JBoss. You will get both the JBoss output and the application output inside Logstash, nicely key/value-paired.
I'm trying to get Mule ESB Studio to perform a simple insert into a JDBC database. My goal is to open a page in my web browser, say http://localhost:8081/, and have Mule ESB insert a 'foobar' value into the database.
Here's my code:
<jdbc-ee:mssql-data-source name="MS_SQL_Data_Source" user="esbtest" password="Test123" transactionIsolation="NONE" doc:name="MS SQL Data Source" loginTimeout="10000"
url="jdbc:sqlserver://10.1.212.42:1433;databaseName=test"></jdbc-ee:mssql-data-source>
<jdbc-ee:connector name="Database" dataSource-ref="MS_SQL_Data_Source" validateConnections="true" queryTimeout="-1" pollingFrequency="0" doc:name="Database"/>
<flow name="Test_PierwszyFlow1" doc:name="Test_PierwszyFlow1">
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8081" doc:name="HTTP" mimeType="text/plain"></http:inbound-endpoint>
<object-to-string-transformer doc:name="Object to String"/>
<jdbc-ee:outbound-endpoint exchange-pattern="one-way" queryTimeout="-1" doc:name="Database" connector-ref="Database" queryKey="insertQuery">
<jdbc-ee:query key="insertQuery" value="INSERT INTO t_Login (login) VALUES ('foo bar')"/>
</jdbc-ee:outbound-endpoint>
</flow>
I've not specified any beans or such things. In logs, I see these lines:
INFO 2013-07-02 11:20:37,550 [[test_pierwszy].connector.http.mule.default.receiver.02] org.mule.lifecycle.AbstractLifecycleManager: Initialising: 'Database.dispatcher.1938839184'. Object is: EEJdbcMessageDispatcher
INFO 2013-07-02 11:20:37,550 [[test_pierwszy].connector.http.mule.default.receiver.02] org.mule.lifecycle.AbstractLifecycleManager: Starting: 'Database.dispatcher.1938839184'. Object is: EEJdbcMessageDispatcher
INFO 2013-07-02 11:20:37,670 [[test_pierwszy].connector.http.mule.default.receiver.02] com.mulesoft.mule.transport.jdbc.sqlstrategy.UpdateSqlStatementStrategy: Executing SQL statement: 1 row(s) updated
INFO 2013-07-02 11:20:37,730 [[test_pierwszy].connector.http.mule.default.receiver.02] com.mulesoft.mule.transport.jdbc.sqlstrategy.UpdateSqlStatementStrategy: Executing SQL statement: 1 row(s) updated
...but my database is empty! I must say, I am totally new to the Mule ESB area and have no idea what is wrong. Please help.
Edit: The funny thing is that when I change the table or column name to something that does not exist, I get a JDBC error about it.
A second question: how do I insert into the DB a value specified in the URL? For example, when I type http://localhost:8081/foo in the browser, the value 'foo' should be passed to the JDBC outbound endpoint and inserted.
Thanks in advance.
I'm guessing uncommitted transaction here. Try adding transactionPerMessage="true" to your <jdbc-ee:connector...
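Assuming that is indeed the cause, the connector from the question would become:
<jdbc-ee:connector name="Database" dataSource-ref="MS_SQL_Data_Source" validateConnections="true"
    queryTimeout="-1" pollingFrequency="0" transactionPerMessage="true" doc:name="Database"/>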
I noticed a similar issue in Mule EE; Mule CE works fine...
When you call jdbc:outbound-endpoint with an ArrayList payload but use SESSION/INVOCATION variables as parameters in your SQL query, it has no effect on the table in question (no insert/update), yet the logger records "Record Updated".
Changing the payload to another type of object for this JDBC call solved the issue.
To keep an open mind, consider:
- passing SQL query parameters via the payload instead (see the sketch below)
- message enrichers
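A sketch of the first option, reusing the endpoint from the question (it assumes the desired value has been put into the payload upstream, e.g. parsed from the request path):
<jdbc-ee:outbound-endpoint exchange-pattern="one-way" queryTimeout="-1" doc:name="Database"
    connector-ref="Database" queryKey="insertQuery">
    <!-- the MEL expression is resolved against the current message at execution time -->
    <jdbc-ee:query key="insertQuery" value="INSERT INTO t_Login (login) VALUES (#[message.payload])"/>
</jdbc-ee:outbound-endpoint>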
How can this be done? It works fine with one int-file:outbound-channel-adapter, but I could not make it work when I added another one. I actually added another, separate set of channel/adapter, but it still did not work.
The int-file:outbound-channel-adapter tag does have a "directory" attribute, but it only accepts a single directory path.
Here is the code I have tried:
<int-file:outbound-channel-adapter id="outputDirectory1"
directory="${output.directory1}"
channel="fileWriterChannel1"
filename-generator-expression="headers.get('filename')"
delete-source-files="true"/>
<int-file:outbound-channel-adapter id="outputDirectory2"
directory="${output.directory2}"
channel="fileWriterChannel2"
filename-generator-expression="headers.get('filename')"
delete-source-files="true"/>
Below are the transformers feeding those channels; the bean is the actual writer. Note that the two transformers both refer to the same bean (ref="messageTransformer"):
<int:transformer id="messageToStringTransformer1"
input-channel="messageTypeChannel"
output-channel="fileWriterChannel1"
ref="messageTransformer"
method="write"/>
<int:transformer id="messageToStringTransformer2"
input-channel="messageTypeChannel"
output-channel="fileWriterChannel2"
ref="messageTransformer"
method="write"/>
<bean id="messageTransformer" class="com.message.writer.DefaultMessageWriter"/>
If I understand you correctly, you want to write a message payload to a collection of directories simultaneously? In order to have multiple file adapters listen to the same channel, you have to use a publish-subscribe channel, via the <int:publish-subscribe-channel> element. For more information, please see: http://static.springsource.org/spring-integration/reference/html/messaging-channels-section.html#channel-configuration-pubsubchannel
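A sketch of that wiring, adapted from the configuration in the question (both adapters now subscribe to the one channel):
<int:publish-subscribe-channel id="fileWriterChannel"/>

<int-file:outbound-channel-adapter id="outputDirectory1"
    directory="${output.directory1}"
    channel="fileWriterChannel"
    filename-generator-expression="headers.get('filename')"
    delete-source-files="true"/>

<!-- only one subscriber should delete the source file, so the flag is omitted here -->
<int-file:outbound-channel-adapter id="outputDirectory2"
    directory="${output.directory2}"
    channel="fileWriterChannel"
    filename-generator-expression="headers.get('filename')"/>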
When using a File Outbound Channel Adapter, you can also use the directory-expression attribute, which has been available since Spring Integration 2.2. It gives you full SpEL expression support; thus the directory you want to write to can, for example, come from a message header. For more information, please see:
http://static.springsource.org/spring-integration/reference/html/files.html#file-writing-output-directory
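A sketch, assuming the target directory travels in a hypothetical 'targetDirectory' message header:
<int-file:outbound-channel-adapter id="dynamicOutputDirectory"
    channel="fileWriterChannel"
    directory-expression="headers['targetDirectory']"
    filename-generator-expression="headers.get('filename')"/>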