How A Bundle can access OSGi output stream? - osgi

Is there any way for a bundle to print a string to the OSGi output stream?
I mean like System.out.println("String");. Instead of this, I want the bundle to print its strings to that stream.
public void start(BundleContext bundleContext) throws Exception {
    Activator.context = bundleContext;
    System.out.println("Hello World!"); // I want to print this string in the OSGi console.
}
You see, if I run the OSGi framework, it prints its responses to commands in the Java console, where System.out prints as well.
But my problem is that I'm displaying its output in a JTextArea, so I want bundles to be able to print there too (print their strings to the OSGi console output stream). In this case I need a way to access the OSGi output stream.

If I understand you correctly, your JTextArea serves as a console, or a view of the console output. So I'd suggest simply displaying the System.out stream in that JTextArea. Here's an example of how to achieve this: http://unserializableone.blogspot.com/2009/01/redirecting-systemout-and-systemerr-to.html
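The approach in the linked post boils down to replacing System.out with a PrintStream that appends to the text area. A minimal sketch (the class and method names here are made up for illustration):

```java
import java.io.OutputStream;
import java.io.PrintStream;
import javax.swing.JTextArea;

// Redirects System.out into a JTextArea, so anything a bundle prints
// with System.out.println shows up in the GUI "console".
public class TextAreaConsole {

    public static JTextArea install() {
        final JTextArea area = new JTextArea();
        OutputStream out = new OutputStream() {
            @Override
            public void write(int b) {
                area.append(String.valueOf((char) b));
            }

            @Override
            public void write(byte[] b, int off, int len) {
                area.append(new String(b, off, len));
            }
        };
        // autoflush = true so each println appears immediately
        System.setOut(new PrintStream(out, true));
        return area;
    }
}
```

Note that in a real Swing application the append calls should be dispatched via SwingUtilities.invokeLater, since System.out may be written from threads other than the event dispatch thread.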

I guess I don't understand the question. That will write to wherever System.out is directed. If you start an OSGi framework from the command line, it should write to the terminal session.

Related

Provision custom-named SQS Queue with PCF Service Broker

I'm trying to create a new queue, but when using
cf create-service aws-sqs standard my-q
the name of the queue in AWS is automatically assigned and is just an id composed of random letters and numbers.
This is fine when using the normal Java client. However, we want to use spring-cloud-aws-messaging (the @SqsListener annotation), because it offers us deletion policies out of the box and a way to extend visibility, so that we can implement retries easily.
@SqsListener(value = "my-q", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void listen(TestItem item, Visibility visibility) {
    log.info("received message: " + item);
    // do business logic
    // if call fails
    visibility.extend(1000);
    // throw exception
    // if no failure, message will be dropped
}
The queue name on the annotation is declared statically, so we can't change it dynamically after reading the VCAP_SERVICES environment variable injected by PCF into the application.
The only alternative we can think of is to use reflection to set the accessibility of the annotation's value and overwrite it with the name from VCAP_SERVICES, but that's just nasty, and we'd like to avoid it if possible.
Is there any way to change the name of the queue to something specific on creation? This suggests that it's possible, as seen below:
cf create-service aws-sqs standard my-q -c '{ "CreateQueue": { "QueueName": “my-q”, "Attributes": { "MaximumMessageSize": "1024"} } }'
However, this doesn't work. It returns:
Incorrect Usage: Invalid configuration provided for -c flag. Please
provide a valid JSON object or path to a file containing a valid JSON
object.
How do I set the name on creation of the queue? Or the only way to achieve my end goal is to use reflection?
EDIT: As pointed out by Daniel Mikusa, the double quotes were not real double quotes, and that was causing the error. The command succeeds now; however, it doesn't create the queue with the intended name. I'm now wondering if this name needs to be set on bind-service instead. That command has a -c option too, but I cannot find any documentation covering which parameters are available for an aws-sqs service.
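For reference, here is the same command from the question with the curly quotes replaced by straight ASCII quotes, which makes the -c JSON parse (though, as noted in the edit, it still did not name the queue as intended):

```shell
cf create-service aws-sqs standard my-q -c '{ "CreateQueue": { "QueueName": "my-q", "Attributes": { "MaximumMessageSize": "1024" } } }'
```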

Empty output when reproducing Chinese coreference results on Conll-2012 using CoreNLP Neural System

Following the instructions on this page https://stanfordnlp.github.io/CoreNLP/coref.html#running-on-conll-2012, here's my code for trying to reproduce the Chinese coreference results on CoNLL-2012:
public class TestCoref {
    public static void main(String[] args) throws Exception {
        Properties props = StringUtils.argsToProperties(args);
        props.setProperty("props", "edu/stanford/nlp/coref/properties/neural-chinese-conll.properties");
        props.setProperty("coref.data", "path-to/data/conll-2012");
        props.setProperty("coref.conllOutputPath", "path-to-output/conll-results");
        props.setProperty("coref.scorer", "path-to/reference-coreference-scorers/v8.01/scorer.pl");
        CorefSystem coref = new CorefSystem(props);
        coref.runOnConll(props);
    }
}
As output, I got 3 files like these:
date-time.coref.predicted.txt
date-time.coref.gold.txt
date-time.predicted.txt
but all of them are EMPTY!
I got my "conll-2012" data as follows:
First I downloaded the train/dev/test-key data from this page http://conll.cemantix.org/2012/data.html, as well as ontonotes-release-5.0 from the LDC. Then I ran the script skeleton2conll.sh provided with the official CoNLL-2012 data, which produced the _conll files.
The model I used was downloaded from here: http://nlp.stanford.edu/software/stanford-chinese-corenlp-models-current.jar
When I tried to find the problem, I noticed that there exists a function "annotate" in the class CorefSystem which seems to do the real job, but it is not used at all. https://github.com/stanfordnlp/CoreNLP/blob/master/src/edu/stanford/nlp/coref/CorefSystem.java
I wonder if there is a bug in the runOnConll function so that it doesn't read or annotate anything, or how else I could reproduce the coreference results?
PS:
I especially want to produce some results on conversational data like "tc" and "bc" in CoNLL-2012. I find that using the coreference API, I can only parse textual data. Is there any other way to use the neural coref system on conversational data (where different speakers should be indicated), apart from running on CoNLL-2012?
Thanks in advance for the help!
As a start, why don't you run this command from the command line:
java -Xmx10g -cp stanford-corenlp-3.9.1.jar:stanford-chinese-corenlp-models-3.9.1.jar:* edu.stanford.nlp.coref.CorefSystem -props edu/stanford/nlp/coref/properties/neural-chinese-conll.properties -coref.data <path-to-conll-data> -coref.conllOutputPath <where-to-save-system-output> -coref.scorer <path-to-scoring-script>

Rest camel passing objects between endpoints

Overview.
My Camel setup calls two service methods. The response of the first one is passed into the second, and the final response is then output as JSON on a web page. Fairly simple, nothing too complicated.
Further breakdown to give some more context:
Method_1 takes in a scanId. This works OK. It produces an object called ScheduledScan.class.
Method_2 takes in the previous ScheduledScan instance and returns a list of ConvertedScans. I'd then like to display that list.
Description of the code
@Override
public void configure() throws Exception {
    restConfiguration().bindingMode(RestBindingMode.json);
    rest("/publish")
        .get("/scheduled-scan/{scanId}")
        .to("bean:SentinelImportService?method=getScheduledScan").outType(ScheduledScan.class)
        .to("bean:SentinelImportService?method=convertScheduledScan");
}
The methods that are called look like the following
ScheduledScan getScheduledScan(@Header("scanId") long scanId);
List<ConvertedScans> convertScheduledScan(@Body ScheduledScan scheduledScans);
It returns the following error:
No body available of type: path. .ScheduledScan but has value:
of type: java.lang.String on: HttpMessage#0x63c2fd04. Caused by: No type converter available
The following runs without the error, i.e. without Method_2, so I think I'm almost there:
rest("/publish")
    .get("/scheduled-scan/{scanId}")
    .to("bean:SentinelImportService?method=getScheduledScan");
Now, from reading the error, it looks like I'm passing in an HttpMessage, not the Java object? I'm a bit confused about what to do next. Any advice much appreciated.
I have found some similar questions about this error message. However, I am looking to pass the Java object directly into the service method.
camel-rest-bean-chaining
how-to-share-an-object-between-methods-on-different-camel-routes
You should set the outType to the last output, i.e. what the REST response is: that is a List/array and not a single POJO. So use .outTypeList(ConvertedScans.class) instead.
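Applying that to the route from the question, the whole chain would look roughly like this (an untested sketch using the bean and type names from the question):

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.rest.RestBindingMode;

public class PublishRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        restConfiguration().bindingMode(RestBindingMode.json);
        rest("/publish")
            .get("/scheduled-scan/{scanId}")
                // the REST response is the list produced by the last bean,
                // so declare it with outTypeList rather than outType
                .outTypeList(ConvertedScans.class)
            .to("bean:SentinelImportService?method=getScheduledScan")
            .to("bean:SentinelImportService?method=convertScheduledScan");
    }
}
```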

List all active Configuration for Config Admin in gogo shell

I would like to show on screen the Configuration list returned by org.osgi.service.cm.ConfigurationAdmin.listConfigurations method via gogo shell. I tried with the following:
g! _sref = $.context getServiceReference "org.osgi.service.cm.ConfigurationAdmin"
g! _srv = $.context getService $_sref
g! $_srv listConfigurations
but it fails with the following error:
gogo: IllegalArgumentException: Cannot coerce listconfigurations() to any of [(String)]
What is the right syntax here? Is it possible to do that?
Thanks!
The listConfigurations method takes a String parameter, which is a filter. If you just want an unfiltered list, then you can pass null, e.g.:
$_srv listConfigurations null
This returns an array of Configuration objects, which you will probably want to iterate over with the each command.
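For example, iterating over the result with each might look like this (assuming the standard Gogo each and echo commands; untested):

```
g! _cfgs = $_srv listConfigurations null
g! each $_cfgs { echo ($it getPid) }
```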
However this kind of thing quickly gets too complex for Gogo scripting. For example you're not releasing the service reference with ungetService anywhere. It's probably better to build a reusable Gogo command in Java as a Declarative Services component.
It's probably a lot easier to use the following shell commands to achieve that:
https://bitbucket.org/pjtr/net.luminis.cmc
Which has, amongst other things, a command called:
cm list

Spring Integration FTP - poll without transfer?

I'd like to utilize Spring Integration to initiate messages about files that appear in a remote location, without actually transferring them. All I require is the generation of a Message with, say, header values indicating the path to the file and filename.
What's the best way to accomplish this? I've tried stringing together an FTP inbound channel adapter with a service activator to write the header values I need, but this causes the file to be transferred to a local temp directory, and by the time the service activator sees it, the message consists of a java.io.File that refers to the local file, and the remote path info is gone. Is it possible to transform the message before this local transfer occurs?
We had a similar problem and solved it with filters. On the inbound-channel-adapter you can set a custom filter implementation. So before polling, your filter will be called, and you will have all the information about the files, from which you can decide whether or not a file will be downloaded. For example:
<int-sftp:inbound-channel-adapter id="test"
session-factory="sftpSessionFactory"
channel="testChannel"
remote-directory="${sftp.remote.dir}"
local-directory="${sftp.local.dir}"
filter="customFilter"
delete-remote-files="false">
<int:poller trigger="pollingTrigger" max-messages-per-poll="${sftp.max.msg}"/>
</int-sftp:inbound-channel-adapter>
<beans:bean id="customFilter" class="your.class.location.SftpRemoteFilter"/>
The filter class is just an implementation of the FileListFilter interface. Here is a dummy filter implementation:
public class SftpRemoteFilter implements FileListFilter<LsEntry> {
    private static final Logger log = LoggerFactory.getLogger(SftpRemoteFilter.class);

    @Override
    public final List<LsEntry> filterFiles(LsEntry[] files) {
        log.info("Here are the files.");
        // Do something smart
        return Collections.emptyList();
    }
}
But if you want to do it as you described, I think it is possible by setting headers on the payload and then using those headers when you consume the payload; but in that case you should use Message<File> instead of File in your service activator method.
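If you take the Message<File> approach, the service activator signature would change roughly like this (FileHeaders is Spring Integration's header-constants class; whether the remote-path headers are actually populated depends on the adapter and version, so treat this as an assumption):

```java
import java.io.File;

import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.file.FileHeaders;
import org.springframework.messaging.Message;

public class FileInfoActivator {

    // Take the whole Message<File> rather than a bare File,
    // so the header values remain accessible.
    @ServiceActivator(inputChannel = "testChannel")
    public void handle(Message<File> message) {
        Object remoteDir = message.getHeaders().get(FileHeaders.REMOTE_DIRECTORY);
        Object remoteFile = message.getHeaders().get(FileHeaders.REMOTE_FILE);
        // act on the remote path info here instead of the local copy
    }
}
```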