When I create a local XML file I get an error importing it into Carrot2

I am trying to create a local xml file for import. The tags specified in https://doc.carrot2.org/#figure.input-xml-format give me an error. Specifically, I get the error:
"Failed to read attributes from:
/lungo/home/holz/nestlib/extras/text/carrot2/goodpubmed.xml Element
'query' does not have a match in class
org.carrot2.util.attribute.AttributeValueSets at line 2".
If I remove query, I get the same error for the 'document' element. I have just downloaded the latest version for Linux, with Java 1.8.

You're probably trying to load your XML file into the attribute view rather than processing it through the clustering algorithm. Here's how you can pass your XML file as data for clustering: http://doc.carrot2.org/#section.getting-started.xml-files.
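If you'd rather do the same thing programmatically, here's a minimal sketch using the Carrot2 3.x Java API (the class and method names below come from that API; the file name is taken from the question and the query string is just an example, so treat this as an illustration rather than the documented recipe):

import java.io.FileInputStream;
import java.util.List;

import org.carrot2.clustering.lingo.LingoClusteringAlgorithm;
import org.carrot2.core.Cluster;
import org.carrot2.core.Controller;
import org.carrot2.core.ControllerFactory;
import org.carrot2.core.Document;
import org.carrot2.core.ProcessingResult;

public class ClusterFromXml {
    public static void main(String[] args) throws Exception {
        // Read documents from a file in the Carrot2 XML format
        // (the format described in the link from the question).
        List<Document> documents;
        try (FileInputStream xml = new FileInputStream("goodpubmed.xml")) {
            documents = ProcessingResult.deserialize(xml).getDocuments();
        }

        // Cluster the documents with the Lingo algorithm.
        Controller controller = ControllerFactory.createSimple();
        ProcessingResult result =
                controller.process(documents, "pubmed", LingoClusteringAlgorithm.class);

        for (Cluster cluster : result.getClusters()) {
            System.out.println(cluster.getLabel() + " (" + cluster.size() + " documents)");
        }
    }
}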

Related

upload multiple Test cases at one time in ALM?

I'm trying to upload multiple Test cases in one go. How can I upload multiple Test cases at one time in ALM?
All flow files that you upload should be updated with a name attribute.
Make sure the src folder has a properties file named multipleFlows.properties, or create it if it does not exist.
Update the multipleFlows.properties file with all the flow IDs and flow XML paths that you would like to upload through ALMSync, as mentioned below.
For example, the multipleFlows.properties file should follow this format:
flow1_id=flow1_xml_path
flow2_id=flow2_xml_path
flow3_id=flow3_xml_path
flow4_id=flow4_xml_path
Open the Run Configuration ALMSync >> Arguments tab and update the arguments as:
createTestCase flow_map multipleFlows
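For a quick sanity check of that mapping file, here is a small sketch in plain Java (this is not ALMSync's own code, and the file path is an assumption) that loads the properties and prints each flow id with its XML path:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class FlowMapCheck {
    public static void main(String[] args) throws IOException {
        Properties flows = new Properties();
        // multipleFlows.properties sits in the src folder, as described above.
        try (FileInputStream in = new FileInputStream("src/multipleFlows.properties")) {
            flows.load(in);
        }
        // Each entry maps a flow id to the path of its flow XML file.
        flows.forEach((id, path) -> System.out.println(id + " -> " + path));
    }
}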

Placing file inside folder of S3 bucket

I have a Spring Boot application where I am trying to place a file inside a folder of the S3 target bucket: target-bucket/targetsystem-folder/file.csv
The targetsystem-folder name will differ for each file and is retrieved from a YML configuration file.
The targetsystem-folder has to be created via code if the folder does not exist, and the file should be placed under that folder.
As far as I know, there is no folder concept in an S3 bucket and everything is stored as objects.
I have read in some documents that to place the file under a folder, you have to give a key-expression like targetsystem-folder/file.csv with bucket = target-bucket.
But it does not work. I would like to achieve this using spring-integration-aws, without using the AWS SDK directly.
<int-aws:s3-outbound-channel-adapter id="filesS3Mover"
        channel="filesS3MoverChannel"
        transfer-manager="transferManager"
        bucket="${aws.s3.target.bucket}"
        key-expression="headers.targetsystem-folder/headers.file_name"
        command="UPLOAD">
</int-aws:s3-outbound-channel-adapter>
Can anyone guide me on this issue?
Your problem is that the SpEL in the key-expression is wrong. Just try to start from regular Java code and imagine how you would build such a value. Then you'll figure out that you are missing the concatenation operation in your expression (note the bracket notation, which is required because the header name contains a hyphen):
key-expression="headers['targetsystem-folder'] + '/' + headers['file_name']"
Also, please provide more info about the error in the future. In most cases the stack trace is very helpful.
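To see what the corrected expression evaluates to, here is a minimal sketch that runs it against a message the way the adapter would (assuming spring-messaging and spring-expression on the classpath; the header values are made up):

import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public class KeyExpressionDemo {
    public static void main(String[] args) {
        // Build a message carrying the same headers the adapter would see.
        Message<String> message = MessageBuilder.withPayload("csv-content")
                .setHeader("targetsystem-folder", "billing")
                .setHeader("file_name", "file.csv")
                .build();

        // Evaluate the key-expression with the message as the SpEL root object.
        ExpressionParser parser = new SpelExpressionParser();
        String key = parser.parseExpression(
                "headers['targetsystem-folder'] + '/' + headers['file_name']")
                .getValue(message, String.class);

        System.out.println(key); // prints: billing/file.csv
    }
}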
In the project that I was working on before, I just used the Java AWS SDK. Then in my implementation, I did something like this:
private void uploadFileTos3bucket(String fileName, File file) {
    // The "targetsystem-folder/" prefix in the key acts as the folder; the key
    // should not start with "/", or S3 will create an extra unnamed level.
    s3client.putObject(new PutObjectRequest("target-bucket", "targetsystem-folder/" + fileName, file)
            .withCannedAcl(CannedAccessControlList.PublicRead));
}
I didn't create any more configuration. It automatically creates the targetsystem-folder inside the bucket (and then puts the file inside it) if it doesn't exist; otherwise, it just puts the file inside.
You can take this answer as a reference for further explanation of the subject.
There are no "sub-directories" in S3. There are buckets and there are keys within buckets.
You can emulate traditional directories by using prefix searches. For example, you can store the following keys in a bucket:
foo/bar1
foo/bar2
foo/bar3
blah/baz1
blah/baz2
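If you later need to enumerate the keys under one of those emulated directories, a prefix search does it. A minimal sketch with the AWS SDK for Java v1 (the bucket name and prefix simply reuse the examples above):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class PrefixSearchDemo {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // List only the keys that start with "foo/" -- the emulated directory.
        ListObjectsV2Request request = new ListObjectsV2Request()
                .withBucketName("target-bucket")
                .withPrefix("foo/");
        ListObjectsV2Result result = s3.listObjectsV2(request);
        for (S3ObjectSummary summary : result.getObjectSummaries()) {
            System.out.println(summary.getKey()); // foo/bar1, foo/bar2, foo/bar3
        }
    }
}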

AWS copying one object to another

I'm trying to copy data from one bucket to another using the Ruby "aws-sdk" gem, version 3.
My code is shown below:
temporary_object = @temporary_bucket.object(temporary_path)
permanent_object = @permanent_bucket.object(permanent_path)
temporary_object.copy_to(permanent_object)
However, I keep getting the error Aws::S3::Errors::NoSuchKey: The specified key does not exist. This makes sense, as the permanent bucket doesn't exist at this moment; however, I thought that copy_to would create the bucket if it did not exist.
Any advice would be very helpful.
Thanks

What are likely root causes of "Failed to list data bag items in data bag"?

I keep getting this error from Chef but can't find any documentation or other people who have had it.
What are the likely root causes?
Some more info would be helpful here. What workflow are you going down when you see this?
I'm going to assume that it's not a knife call. I attempted to put some debugging around the source of your error in the chef gem and called data bag list and data bag show. Neither seemed to hit the mixin code.
The following is the source of your error in the chef gem, under mixin/language:
def data_bag(bag)
  DataBag.validate_name!(bag.to_s)
  rbag = DataBag.load(bag)
  rbag.keys
rescue Exception
  Log.error("Failed to list data bag items in data bag: #{bag.inspect}")
  raise
end
Now I'm at a loss as to what is accessing that mixin code because all other references to data_bag() in the gem refer to the code around the data_bag_item object.
Is this custom code you've created? Is there a chance you are referencing the wrong module?
You normally get this error when Chef cannot find the data bag "id".
Say I would like to load the following data bag:
data_bags/
  apps/
    mywebserver.json
apps/            (the cookbook)
  recipes/
    default.rb
[mywebserver.json]
{
  "id": "mywebserver"
}
[default.rb]
data_bag_item("apps", "mywebserver") # the id specified in the JSON
I believe Chef does not care about the data bag item's file name; it only cares about the "id" specified in each data bag item's JSON file.

BIRT: Specifying XML Datasource file as parameter does not work

Using BIRT Designer 3.7.1, it's easy enough to define a report for an XML file data source; however, the input file name is initially written into the .rptdesign file as a constant value. Nice for a start, but useless in real life. What I want is to start the BIRT ReportEngine via the genReport.bat script, specifying the name of the XML data source file as a parameter. That should be trivial, but it is surprisingly difficult...
What I found out is this: Instead of defining the XML data source file as a constant in the report definition you can use params["datasource"].value, which will be replaced by the parameter value at runtime. Also, in BIRT Designer you can define the Report Parameter (datasource) and give it a default value, say "file://d:/sample.xml".
Yet, it doesn't work. This is the result of my Preview attempt in Designer:
Cannot open the connection for the driver: org.eclipse.datatools.enablement.oda.xml.
org.eclipse.datatools.connectivity.oda.OdaException: The xml source file cannot be found or the URL is malformed.
ReportEngine, started with 'genReport.bat -p "datasource=file://d:/sample.xml" xx.rptdesign', says nearly the same.
Of course, I have made sure that the XML file exists, and tried different spellings of the file URL. So, what's wrong?
What I found out is this: Instead of defining the XML data source file as a constant in the report definition you can use params["datasource"].value, which will be replaced by the parameter value at runtime.
No, it won't - at least, if you specify the value of XML Data Source File as params["datasource"].value (instead of a valid XML file path) at design time, then you will get an error when attempting to run the report. This is because BIRT tries to use the literal string params["datasource"].value as the file path, rather than the value of params["datasource"].value.
Instead, you need to use an event handler script - specifically, a beforeOpen script.
To do this:
Left-click on your data source in the Data Explorer.
In the main Report Design pane, click on the Script tab (instead of the Layout tab). A blank beforeOpen script should be visible.
Copy and paste the following code into the script:
this.setExtensionProperty("FILELIST", params["datasource"].value);
If you now run the report, you should find that the value of the parameter datasource is used for the XML file location.
You can find out more about parameter-driven XML data sources on BIRT Exchange.
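If you'd rather not go through genReport.bat, the same parameter can also be passed through the BIRT report engine Java API. A minimal sketch (the design and output file names are assumptions, and error handling is omitted):

import org.eclipse.birt.core.framework.Platform;
import org.eclipse.birt.report.engine.api.EngineConfig;
import org.eclipse.birt.report.engine.api.HTMLRenderOption;
import org.eclipse.birt.report.engine.api.IReportEngine;
import org.eclipse.birt.report.engine.api.IReportEngineFactory;
import org.eclipse.birt.report.engine.api.IReportRunnable;
import org.eclipse.birt.report.engine.api.IRunAndRenderTask;

public class RunReport {
    public static void main(String[] args) throws Exception {
        EngineConfig config = new EngineConfig();
        Platform.startup(config);
        IReportEngineFactory factory = (IReportEngineFactory) Platform
                .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
        IReportEngine engine = factory.createReportEngine(config);

        IReportRunnable design = engine.openReportDesign("xx.rptdesign");
        IRunAndRenderTask task = engine.createRunAndRenderTask(design);

        // Equivalent of: genReport.bat -p "datasource=file://d:/sample.xml"
        task.setParameterValue("datasource", "file://d:/sample.xml");

        HTMLRenderOption options = new HTMLRenderOption();
        options.setOutputFileName("report.html");
        options.setOutputFormat("html");
        task.setRenderOption(options);

        task.run();
        task.close();
        engine.destroy();
        Platform.shutdown();
    }
}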
Since this is an old thread but still useful, I'll add some info:
In the Edit Data Source dialog, set a sample URL so that you have data to create your dataset.
Create your dataset.
Then remove the URL from the data source definition.
Add the beforeOpen script described above.
