Use multiple queues for GetTransactionData in ReFramework - UiPath

I am a beginner in UiPath.
What I have:
Files:
c:/data/A/a.xlsx
c:/data/B/b.xlsx
c:/data/C/c.xlsx
Queue names:
A-queue
B-queue
C-queue
What I want:
For each file, select the file and process it.
It works great for a.xlsx, but it doesn't work for b.xlsx and c.xlsx :frowning:
Q1: How can I process all the files with different queues?
What I have done:
I have built an array, tab_queue_name, that contains all the queue names, and I use it in GetTransactionData.xaml.
Thanks in advance.

Related

Jmxtrans Config for multiple queries to the same Writer

While setting up metric reporting for Apache Kafka to ElasticSearch with jmxtrans, we have written a configuration file that queries about 50 metrics.
The queries are as follows:
{
  "obj" : "kafka.server:type=BrokerTopicMetrics,name=TotalFetchRequestsPerSec",
  "outputWriters" : [ {
    "#class" : "com.googlecode.jmxtrans.model.output.elastic.ElasticWriter",
    "connectionUrl" : "http://elasticHost:9200"
  } ]
}
Since there are so many of them all writing to the same destination, is there a way in the config file to shorten this?
Any help is highly appreciated.
You can try to be more precise in your MBean path -
kafka.server:name=TotalFetchRequestsPerSec,topic=MyCoolTopic,type=BrokerTopicMetrics
Take a look at this one as a great example - jmxtrans supports resultAlias as well.
Here you can find a list of Kafka MBeans that could come in handy for you.
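For example, a single query with a more specific MBean path and a resultAlias could look like the sketch below (the topic name and the alias are illustrative; the writer block is unchanged from the question):
{
  "obj" : "kafka.server:name=TotalFetchRequestsPerSec,topic=MyCoolTopic,type=BrokerTopicMetrics",
  "resultAlias" : "totalFetchRequestsPerSec",
  "outputWriters" : [ {
    "#class" : "com.googlecode.jmxtrans.model.output.elastic.ElasticWriter",
    "connectionUrl" : "http://elasticHost:9200"
  } ]
}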

SonarQube - what is a rule key?

I am trying to change my Jenkins jobs regarding SonarQube settings. So I opened my Jenkins job configuration, and I see something like this:
sonar.issue.ignore.multicriteria=e1,e2,e3,e4,e5
sonar.issue.ignore.multicriteria.e1.ruleKey=squid:S00112
sonar.issue.ignore.multicriteria.e1.resourceKey=**/*.java
I have searched for the ruleKey "squid:S00112" in the SonarQube documentation, but I am not able to find any reference to it.
I need to add a few more rules to ignore, but I am unable to identify those rules' rule-key values (like ruleKey=squid:S00112).
On a SonarQube server, the rule key is displayed in the top right corner of the rule description. For example, you can look for squid:S109 in this rule description.
SonarQube rule key is composed of repository id : rule id
repository id
Each language analyser creates several rule repositories, with ids that usually contain the language name, except for the Java analyser, which oddly uses "squid".
For example, this is the list of repository keys existing on sonarcloud.io (source)
LANGUAGE_ID : REPOSITORY_KEY_LIST
abap : abap, common-abap
c : c, common-c
cpp : cpp, common-cpp
cs : csharpsquid, common-cs
css : css, common-css, external_stylelint
flex : flex, common-flex
go : go, common-go, external_golint, external_govet
java : squid, common-java, external_checkstyle, external_findsecbugs, external_pmd, external_spotbugs
js : javascript, common-js, external_eslint_repo
kotlin : kotlin, common-kotlin, external_android-lint, external_detekt
objc : objc, common-objc
php : php, common-php
plsql : plsql, common-plsql
py : python, common-py, Pylint
ruby : ruby, common-ruby, external_rubocop
swift : swift, common-swift, external_swiftlint
ts : typescript, common-ts, external_tslint
tsql : tsql, common-tsql
vbnet : vbnet, common-vbnet
web : Web, common-web
xml : xml, common-xml
rule id
Older rules could have a Pascal-case id like "NoSonar", but now the majority of rules have an id starting with 'S' followed by the Jira number of the rule in this repository: jira.sonarsource.com/browse/RSPEC/
For example, rule id S109 matches RSPEC-109.
Note: rules.sonarsource.com/ also uses the RSPEC-109 format in its URLs; you can easily convert it to S109.
You can find the ruleKey for a specific rule inside the SonarQube server.
Steps:
Rules tab -> select the specific rule -> your RuleKey is at the top right
In this example, the ruleKey is Web:TableWithoutCaptionCheck
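Once you know a rule key, adding another exclusion is just a matter of extending the multicriteria list from the question, for example (the e6 id and the squid:S109 key are purely illustrative):
# hypothetical extra exclusion: ignore rule squid:S109 in all Java files
sonar.issue.ignore.multicriteria=e1,e2,e3,e4,e5,e6
sonar.issue.ignore.multicriteria.e6.ruleKey=squid:S109
sonar.issue.ignore.multicriteria.e6.resourceKey=**/*.java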

Camel download files

This may be a stupid question, but how can I download files from an FTP server?
I use the route:
.from("ftp:/test#localhost:21/?password=test") .to("file:/d:\\test")
I get the error: cannot store null body. Why? I have read several examples. Where is my error? Thanks.
EDIT
Now I use the route:
.from("direct:xx")
.from("ftp://test#localhost:21/?password=test")
.to("file://d:\inbox");
And I get the error:
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot write null body to file: d:\inbox\xxxxxxx
at org.apache.camel.component.file.FileOperations.storeFile(FileOperations.java:237)
at org.apache.camel.component.file.GenericFileProducer.writeFile(GenericFileProducer.java:277)
at org.apache.camel.component.file.GenericFileProducer.processExchange(GenericFileProducer.java:165)
at org.apache.camel.component.file.GenericFileProducer.process(GenericFileProducer.java:79)
This should work:
.from("ftp://test@localhost:21/?password=test").to("file://d:\\test")
I am pretty sure about the from part, but you might have to change the to part a little bit (with respect to the '/') because I have not worked on Windows.
Add the parameter 'allowNullBody=true' to your to file endpoint:
to("file://d:\\test?allowNullBody=true")
If you go deeper into the source code, as indicated by org.apache.camel.component.file.FileOperations.storeFile(FileOperations.java:237), you will see that the GenericFileOperationFailedException happens when:
the body of the exchange is null, and
allowNullBody is set to false.
By default, allowNullBody in the file producer is false, as stated in the Camel File component documentation. You need to change it to true to allow storage of empty files.
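Putting the two answers together, the route from the question would then look roughly like the sketch below (the host, credentials, and directories are just the illustrative values used above):
.from("ftp://test@localhost:21/?password=test")   // poll the FTP server for files
.to("file://d:\\inbox?allowNullBody=true");       // store them locally; empty bodies no longer fail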

ExecuteScript: output two different flowfiles - NiFi

I'm using ExecuteScript with Python, and I have a dataset that may contain some corrupted data. My idea is to process the good data, put it in my flow file content, and send it to the success relationship, while redirecting the corrupted data to the failure relationship. I have done something like this:
for msg in messages:
    try:
        id = msg['id']
        timestamp = msg['time']
        value_encoded = msg['data']
        hexFrameType = '0x' + value_encoded[0:2]
        matches = re.match(regex, value_encoded)
        ....
    except:
        error_catched.append(msg)
        pass
Any idea how I can do that?
For the purposes of this answer, I am assuming you have an incoming flow file called "flowFile" which you obtained from session.get(). If you simply want to inspect the contents of flowFile and then route it to success or failure based on whether an error occurred, then in your success path you can use:
session.transfer(flowFile, REL_SUCCESS)
And in your error path you can do:
session.transfer(flowFile, REL_FAILURE)
If instead you want new files (perhaps one containing a single "msg" in your loop above) you can use:
outputFlowFile = session.create(flowFile)
to create a new flow file using the input flow file as a parent. If you want to write to the new flow file, you can use the PyStreamCallback technique described in my blog post.
If you create a new flow file, be sure to transfer the latest version of it to REL_SUCCESS or REL_FAILURE using the session.transfer() calls described above (but with outputFlowFile rather than flowFile). Also you'll need to remove your incoming flow file (since you have created child flow files from it and transferred those instead). For this you can use:
session.remove(flowFile)
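Putting those pieces together, a condensed Jython sketch of the whole pattern might look like the following. The JSON reading/writing and the "good vs. corrupted" split are illustrative assumptions; the session calls (get, create, write, transfer, remove) are the ones described above.
import json
from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import InputStreamCallback, StreamCallback

class ReadText(InputStreamCallback):
    # Captures the incoming flow file content as a string
    def __init__(self):
        self.text = None
    def process(self, inputStream):
        self.text = IOUtils.toString(inputStream, StandardCharsets.UTF_8)

class WriteJson(StreamCallback):
    # Overwrites the flow file content with the given messages serialized as JSON
    def __init__(self, msgs):
        self.msgs = msgs
    def process(self, inputStream, outputStream):
        outputStream.write(bytearray(json.dumps(self.msgs).encode('utf-8')))

flowFile = session.get()
if flowFile is not None:
    reader = ReadText()
    session.read(flowFile, reader)
    messages = json.loads(reader.text)   # assumption: incoming content is a JSON array of messages

    good, bad = [], []
    for msg in messages:
        try:
            # ... the per-message parsing/validation from the question goes here ...
            good.append(msg)
        except:
            bad.append(msg)

    # One child flow file per relationship, both derived from the incoming flow file
    goodFlowFile = session.write(session.create(flowFile), WriteJson(good))
    badFlowFile = session.write(session.create(flowFile), WriteJson(bad))
    session.transfer(goodFlowFile, REL_SUCCESS)
    session.transfer(badFlowFile, REL_FAILURE)
    # The original flow file has been replaced by its children, so remove it
    session.remove(flowFile)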

Flume: HDFSEventSink - how to multiplex dynamically?

Summary: I have a multiplexing scenario and would like to know how to multiplex dynamically - not based on a statically configured value, but based on the variable value of a field (e.g. dates).
Details:
I have an input that is separated by an entityId.
Since I know the entities I am working with, I can configure this with typical Flume multi-channel selection.
agent.sources.jmsSource.channels = chan-10 chan-11 # ...
agent.sources.jmsSource.selector.type = multiplexing
agent.sources.jmsSource.selector.header = EntityId
agent.sources.jmsSource.selector.mapping.10 = chan-10
agent.sources.jmsSource.selector.mapping.11 = chan-11
# ...
Each of the channels goes to a separate HDFSEventSink, "hdfsSink-n":
agent.sinks.hdfsSink-10.channel = chan-10
agent.sinks.hdfsSink-10.hdfs.path = hdfs://some/path/
agent.sinks.hdfsSink-10.hdfs.filePrefix = entity10
# ...
agent.sinks.hdfsSink-11.channel = chan-11
agent.sinks.hdfsSink-11.hdfs.path = hdfs://some/path/
agent.sinks.hdfsSink-11.hdfs.filePrefix = entity11
# ...
This generates a file per entity, which is fine.
Now I want to introduce a second variable, which is dynamic: a date. Depending on the event date, I want to create files per entity and per date.
The date is a dynamic value, so I cannot preconfigure a number of sinks so that each one writes to a separate file. Also, you can only specify one HDFS output per sink.
So it's as if a "multiple outputs HDFSEventSink" were needed (similar to Hadoop's MultipleOutputs library). Is there such functionality in Flume?
If not, is there an elegant way to fix this or work around it? Another option is to modify HDFSEventSink; it seems this could be implemented by creating a different "realName" (String) for each event.
Actually, you can use the variable in your HDFS sink's path or filePrefix.
For example, if the variable's key is "date" in the event's headers, then you can configure it like this:
agent.sinks.hdfsSink-11.hdfs.filePrefix = entity11-%{date}
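A slightly fuller sketch of a single sink, assuming each event carries a "date" header (set upstream, for example by an interceptor); the HDFS sink expands %{headerName} escapes in both hdfs.path and hdfs.filePrefix:
# assumes every event has a "date" header, e.g. added by an upstream interceptor
agent.sinks.hdfsSink-11.channel = chan-11
agent.sinks.hdfsSink-11.hdfs.path = hdfs://some/path/%{date}
agent.sinks.hdfsSink-11.hdfs.filePrefix = entity11-%{date}
With this, each distinct date value in the event headers produces its own target directory and file prefix, so no per-date sink needs to be preconfigured.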
