How to get protobuf extension field in ProtobufAnnotationSerializer - protocol-buffers

I am new to protocol buffers and am trying to figure out how to extend a message type in the Stanford CoreNLP library, as described here: https://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/pipeline/ProtobufAnnotationSerializer.html
The problem: I can set the extension field, but I can't get it. I boiled the problem down to the code below. In the original message the field appears by name as [edu.stanford.nlp.pipeline.myNewField], but in the deserialized message it is replaced by the field number 101.
How can I get the value of myNewField?
PS: This post https://stackoverflow.com/questions/28815214/how-to-set-get-protobufs-extension-field-in-go suggests that it should be as easy as calling getExtension(MyAppProtos.myNewField)
custom.proto
syntax = "proto2";
package edu.stanford.nlp.pipeline;
option java_package = "com.example.my.awesome.nlp.app";
option java_outer_classname = "MyAppProtos";
import "CoreNLP.proto";
extend Sentence {
  optional uint32 myNewField = 101;
}
ProtoTest.java
import com.example.my.awesome.nlp.app.MyAppProtos;
import com.google.protobuf.ExtensionRegistry;
import com.google.protobuf.InvalidProtocolBufferException;
import edu.stanford.nlp.pipeline.CoreNLPProtos;
import edu.stanford.nlp.pipeline.CoreNLPProtos.Sentence;
public class ProtoTest {

    static {
        ExtensionRegistry registry = ExtensionRegistry.newInstance();
        registry.add(MyAppProtos.myNewField);
        CoreNLPProtos.registerAllExtensions(registry);
    }

    public static void main(String[] args) throws InvalidProtocolBufferException {
        Sentence originalSentence = Sentence.newBuilder()
                .setText("Hello world!")
                .setTokenOffsetBegin(0)
                .setTokenOffsetEnd(12)
                .setExtension(MyAppProtos.myNewField, 13)
                .build();
        System.out.println("Original:\n" + originalSentence);

        byte[] serialized = originalSentence.toByteArray();
        Sentence deserializedSentence = Sentence.parseFrom(serialized);
        System.out.println("Deserialized:\n" + deserializedSentence);

        Integer myNewField = deserializedSentence.getExtension(MyAppProtos.myNewField);
        System.out.println("MyNewField: " + myNewField);
    }
}
Output:
Original:
tokenOffsetBegin: 0
tokenOffsetEnd: 12
text: "Hello world!"
[edu.stanford.nlp.pipeline.myNewField]: 13
Deserialized:
tokenOffsetBegin: 0
tokenOffsetEnd: 12
text: "Hello world!"
101: 13
MyNewField: 0
Update
Because this question was about extending CoreNLP message types and using them with the ProtobufAnnotationSerializer, here is what my extended serializer looks like:
import java.io.IOException;
import java.io.InputStream;
import java.util.Set;
import com.example.my.awesome.nlp.app.MyAppProtos;
import com.google.protobuf.ExtensionRegistry;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.CoreNLPProtos;
import edu.stanford.nlp.pipeline.CoreNLPProtos.Sentence;
import edu.stanford.nlp.pipeline.CoreNLPProtos.Sentence.Builder;
import edu.stanford.nlp.pipeline.ProtobufAnnotationSerializer;
import edu.stanford.nlp.util.CoreMap;
import edu.stanford.nlp.util.Pair;
public class MySerializer extends ProtobufAnnotationSerializer {

    private static ExtensionRegistry registry;

    static {
        registry = ExtensionRegistry.newInstance();
        registry.add(MyAppProtos.myNewField);
        CoreNLPProtos.registerAllExtensions(registry);
    }

    @Override
    protected Builder toProtoBuilder(CoreMap sentence, Set<Class<?>> keysToSerialize) {
        keysToSerialize.remove(MyAnnotation.class);
        Builder builder = super.toProtoBuilder(sentence, keysToSerialize);
        builder.setExtension(MyAppProtos.myNewField, 13);
        return builder;
    }

    @Override
    public Pair<Annotation, InputStream> read(InputStream is)
            throws IOException, ClassNotFoundException, ClassCastException {
        CoreNLPProtos.Document doc = CoreNLPProtos.Document.parseDelimitedFrom(is, registry);
        return Pair.makePair(fromProto(doc), is);
    }

    @Override
    protected CoreMap fromProtoNoTokens(Sentence proto) {
        CoreMap result = super.fromProtoNoTokens(proto);
        result.set(MyAnnotation.class, proto.getExtension(MyAppProtos.myNewField));
        return result;
    }
}
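The serializer above also references a MyAnnotation key that isn't shown in the question. For completeness, a minimal sketch of how such a custom CoreNLP annotation key could be declared (the class name comes from the code above; the Integer value type is my assumption, chosen to match the uint32 extension):

import edu.stanford.nlp.ling.CoreAnnotation;

// Hypothetical annotation key backing the MyAnnotation references above.
// CoreNLP annotation keys implement CoreAnnotation<T> and report their
// value type via getType().
public class MyAnnotation implements CoreAnnotation<Integer> {
    @Override
    public Class<Integer> getType() {
        return Integer.class;
    }
}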

The mistake was that I didn't provide the parseFrom call with the extension registry.
Changing Sentence deserializedSentence = Sentence.parseFrom(serialized); to Sentence deserializedSentence = Sentence.parseFrom(serialized, registry); did the job!
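This also explains the output above: without the registry, the parser has no way to map field number 101 back to myNewField, so it keeps it as an unknown field (hence the 101: 13 line), and getExtension falls back to the field's default value of 0.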

Related

Issue with Arrays.stream(body.split(" ")).forEach

Trying to download files with Apache Camel.
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.Arrays;
import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class Download extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("file:files/input")
                .routeId("downloadInitId")
                .to("direct:downloadFiles");

        from("direct:downloadFiles")
                .routeId("downloadProcessId")
                .process(p -> {
                    String body = p.getIn().getBody(String.class);
                    Arrays.stream(body.split(" ")).forEach(s -> {
                        try {
                            URL url = new URL(s);
                            File f = new File(s);
                            System.out.println(f.getName());
                            System.out.println(f.getPath());
                            InputStream in = url.openStream();
                            Files.copy(in, Paths.get(".\\files\\download\\output\\" + f.getName()),
                                    StandardCopyOption.REPLACE_EXISTING);
                            in.close();
                        } catch (IOException e) {
                            e.getMessage();
                        }
                    });
                });
    }
}
I have a files.txt with the content below, which I drop into the files\input folder:
https://github.com/a.zip
https://github.com/b.zip
The output will always be like this:
b.zip
https:\github.com\a.zip
https:\github.com\b.zip
Not sure why a.zip is always omitted, so the first file is never downloaded.
And with three lines, only the last filename is printed and downloaded.
Got it.
from("direct:downloadFiles")
.routeId("downloadProcessId")
.split(body().tokenize("\n")) // add this
.process(p -> {
String body = p.getIn().getBody(String.class);
Arrays.stream(body.split(" ")).forEach(s -> {
String sn = s.toString(); // add this
sn = sn.replace("\n", "").replace("\r", ""); // add this
try {
URL url = new URL(sn); // use sn
File f = new File(sn); // use sn
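For reference, a complete version of the corrected route could look like the sketch below; it just combines the tokenize fix with the download logic from the question (same paths and route id, nothing else changed):

from("direct:downloadFiles")
        .routeId("downloadProcessId")
        // one exchange per line of files.txt, so every URL is processed
        .split(body().tokenize("\n"))
        .process(p -> {
            // the body is now a single line; strip any stray line endings
            String line = p.getIn().getBody(String.class)
                    .replace("\n", "").replace("\r", "").trim();
            if (line.isEmpty()) {
                return;
            }
            URL url = new URL(line);
            File f = new File(line);
            try (InputStream in = url.openStream()) {
                Files.copy(in, Paths.get(".\\files\\download\\output\\" + f.getName()),
                        StandardCopyOption.REPLACE_EXISTING);
            }
        });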

Beam Java pipeline with Python transform in GC Dataflow_v2

I'm trying to create a multi-language pipeline in Google Dataflow v2, building the pipeline with the Apache Beam Java SDK and using a transform written with the Python SDK.
My expansion service in Python runs correctly, and when I run my Java pipeline it does reach my Python transform, but then it throws this error:
Failed to execute goal org.codehaus.mojo:exec-maven-plugin:3.0.0:java (default-cli) on project Job: An exception occured while executing the Java class. UNIMPLEMENTED: Method not found!
This is my expansion service with my transform in Python:
import argparse, logging, signal, sys, grpc

import apache_beam as beam
from apache_beam.pipeline import PipelineOptions
from apache_beam.portability.api import beam_expansion_api_pb2_grpc
from apache_beam.runners.portability import expansion_service
from apache_beam.transforms import ptransform
from apache_beam.utils import thread_pool_executor

_LOGGER = logging.getLogger(__name__)

URN = "beam:transforms:xlang:pythontransform3"


class WriteToGS(beam.DoFn):
    def process(self, element):
        beam.io.WriteToText("gs://path/to/wordcount.txt")


@ptransform.PTransform.register_urn(URN, None)
class PythonTransform(ptransform.PTransform):
    def __init__(self):
        super().__init__()

    def expand(self, pcoll):
        _LOGGER.info('Python transform reached')
        (pcoll | "Python transform" >> beam.ParDo(WriteToGS()))

    def to_runner_api_parameter(self, unused_context):
        return URN, None

    def from_runner_api_parameter(
            unused_ptransform, unused_parameter, unused_context):
        return PythonTransform()


server = None


def cleanup(unused_signum, unused_frame):
    _LOGGER.info('Shutting down expansion service.')
    server.stop(None)


def main(unused_argv):
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '-p', '--port', type=int, help='port on which to serve the job api')
    options = parser.parse_args()
    global server
    server = grpc.server(thread_pool_executor.shared_unbounded_instance())
    beam_expansion_api_pb2_grpc.add_ExpansionServiceServicer_to_server(
        expansion_service.ExpansionServiceServicer(
            PipelineOptions(["--experiments", "beam_fn_api", "--sdk_location", "container"])),
        server)
    server.add_insecure_port('localhost:{}'.format(options.port))
    server.start()
    _LOGGER.info('Listening for expansion requests at %d', options.port)
    signal.signal(signal.SIGTERM, cleanup)
    signal.signal(signal.SIGINT, cleanup)
    # blocking main thread forever.
    signal.pause()


if __name__ == '__main__':
    logging.getLogger().setLevel(logging.INFO)
    main(sys.argv)
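Assuming the script above is saved as expansion_service.py (the filename is my choice, not from the question), it is started before the Java pipeline runs, on the port the Java side points at:
python expansion_service.py --port 9098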
This is my pipeline in Java:
import org.apache.beam.runners.core.construction.External;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Distribution;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;

public class WordCount {

    static class ExtractWordsFn extends DoFn<String, String> {
        private final Counter emptyLines = Metrics.counter(ExtractWordsFn.class, "emptyLines");
        private final Distribution lineLenDist =
                Metrics.distribution(ExtractWordsFn.class, "lineLenDistro");

        @ProcessElement
        public void processElement(@Element String element, OutputReceiver<String> receiver) {
            lineLenDist.update(element.length());
            if (element.trim().isEmpty()) {
                emptyLines.inc();
            }
            // Split the line into words.
            String[] words = element.split("[^\\p{L}]+", -1);
            // Output each word encountered into the output PCollection.
            for (String word : words) {
                if (!word.isEmpty()) {
                    receiver.output(word);
                }
            }
        }
    }

    public static class CountWords
            extends PTransform<PCollection<String>, PCollection<String>> {
        @Override
        public PCollection<String> expand(PCollection<String> lines) {
            // Convert lines of text into individual words.
            PCollection<String> words = lines.apply(ParDo.of(new ExtractWordsFn()));
            return words;
        }
    }

    public interface WordCountOptions extends PipelineOptions {
        @Description("Path of the file to read from")
        @Default.String("gs://apache-beam-samples/shakespeare/kinglear.txt")
        String getInputFile();

        void setInputFile(String value);
    }

    static void runWordCount(WordCountOptions options) {
        Pipeline p = Pipeline.create(options);
        p.apply("ReadLines", TextIO.read().from(options.getInputFile()))
                .apply(new CountWords())
                .apply("ExternalPythonTransform",
                        External.of("beam:transforms:xlang:pythontransform3", new byte[] {}, "localhost:9098"));
        p.run().waitUntilFinish();
    }

    public static void main(String[] args) {
        WordCountOptions options =
                PipelineOptionsFactory.fromArgs(args).withValidation().as(WordCountOptions.class);
        runWordCount(options);
    }
}
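(For what it's worth, UNIMPLEMENTED is the gRPC status a server returns when it doesn't expose the method the client called, so an error like this usually points at a mismatch between the expansion protocol spoken by the Java SDK and the one served by the Python expansion service, e.g. different Beam SDK versions.)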

WireMock request templating in standalone mode: can I use an XML file as a response template and inject a value with XPath?

I know that request templating supports XPath, so I can get a value from the request like {{xPath request.body '/outer/inner/text()'}}. I already have an XML file as the response, and I want to inject this value from the request into it while keeping the rest of the response XML intact. For example, I want to inject it at the XPath /svc_result/slia/pos/msid.
And I need to use it in standalone mode.
I saw another question (Wiremock Stand alone - How to manipulate response with request data), but that was about JSON; I have XML request/response.
How can it be done? Thanks.
For example, I have this mapping definition:
{
    "request": {
        "method": "POST",
        "bodyPatterns": [
            {
                "matchesXPath": {
                    "expression": "/svc_init/slir/msids/msid[@type='MSISDN']/text()",
                    "equalTo": "200853000105614"
                }
            },
            {
                "matchesXPath": "/svc_init/hdr/client[id and pwd]"
            }
        ]
    },
    "response": {
        "status": 200,
        "bodyFileName": "slia.xml",
        "headers": {
            "Content-Type": "application/xml;charset=UTF-8"
        }
    }
}
And this request:
<?xml version="1.0"?>
<!DOCTYPE svc_init>
<svc_init ver="3.2.0">
<hdr ver="3.2.0">
<client>
<id>dummy</id>
<pwd>dummy</pwd>
</client>
</hdr>
<slir ver="3.2.0" res_type="SYNC">
<msids>
<msid type="MSISDN">200853000105614</msid>
</msids>
</slir>
</svc_init>
I expect this response, with xxxxxxxxxxx replaced by the <msid> value from the request.
<?xml version="1.0" ?>
<!DOCTYPE svc_result SYSTEM "MLP_SVC_RESULT_320.DTD">
<svc_result ver="3.2.0">
<slia ver="3.0.0">
<pos>
<msid type="MSISDN" enc="ASC">xxxxxxxxxxx</msid>
<pd>
<time utc_off="+0800">20111122144915</time>
<shape>
<EllipticalArea srsName="www.epsg.org#4326">
<coord>
<X>00 01 01N</X>
<Y>016 31 53E</Y>
</coord>
<angle>0</angle>
<semiMajor>2091</semiMajor>
<semiMinor>2091</semiMinor>
<angularUnit>Degrees</angularUnit>
</EllipticalArea>
</shape>
<lev_conf>90</lev_conf>
</pd>
<gsm_net_param>
<cgi>
<mcc>100</mcc>
<mnc>01</mnc>
<lac>2222</lac>
<cellid>10002</cellid>
</cgi>
<neid>
<vmscid>
<vmscno>00004946000</vmscno>
</vmscid>
<vlrid>
<vlrno>99994946000</vlrno>
</vlrid>
</neid>
</gsm_net_param>
</pos>
</slia>
</svc_result>
My first thought was to use transformerParameters to change the response file by inserting the value from the body. Unfortunately, WireMock doesn't resolve the helpers before inserting them into the response body. So while we can reference that MSID value via an xPath helper like
{{xPath request.body '/svc_init/slir/msids/msid/text()'}}
if we try to insert that as a custom transformer parameter, it won't resolve. (I've written up an issue on the WireMock GitHub about this.)
Unfortunately, I think this leaves us with having to write a custom extension that takes the request, finds the value, and then modifies the response file. More information on creating custom transformer extensions can be found here.
In the end, I created my own transformer:
package com.company.department.app.extensions;

import com.github.tomakehurst.wiremock.common.FileSource;
import com.github.tomakehurst.wiremock.extension.Parameters;
import com.github.tomakehurst.wiremock.extension.ResponseTransformer;
import com.github.tomakehurst.wiremock.http.Request;
import com.github.tomakehurst.wiremock.http.Response;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.xml.sax.EntityResolver;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.StringReader;
import java.io.StringWriter;
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.List;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;

public class NLGResponseTransformer extends ResponseTransformer {

    private static final Logger LOG = LoggerFactory.getLogger(NLGResponseTransformer.class);

    private static final String SLIA_FILE = "/stubs/__files/slia.xml";
    private static final String REQ_IMSI_XPATH = "/svc_init/slir/msids/msid";
    private static final String[] RES_IMSI_XPATHS = {
            "/svc_result/slia/pos/msid",
            "/svc_result/slia/company_mlp320_slia/company_netinfo/company_ms_netinfo/msid"
    };
    private static final String[] RES_TIME_XPATHS = {
            // for slia.xml
            "/svc_result/slia/company_mlp320_slia/company_netinfo/company_ms_netinfo/time",
            // for slia_poserror.xml
            "/svc_result/slia/pos/poserror/time"
    };
    private static final DocumentBuilderFactory DOCUMENT_BUILDER_FACTORY = DocumentBuilderFactory.newInstance();
    private static final DateTimeFormatter TIME_FORMAT = DateTimeFormatter.ofPattern("yyyyMMddHHmmss");
    private static final String UTC_OFF = "utc_off";
    private static final String TRANSFORM_FACTORY_ATTRIBUTE_INDENT_NUMBER = "indent-number";
    protected static final String COMPANY_MLP_320_SLIA_EXTENSION_DTD = "company_mlp320_slia_extension.dtd";
    protected static final String MLP_SVC_RESULT_320_DTD = "MLP_SVC_RESULT_320.DTD";

    @Override
    public String getName() {
        return "inject-request-values";
    }

    @Override
    public Response transform(Request request, Response response, FileSource fileSource, Parameters parameters) {
        Document responseDocument = injectValuesFromRequest(request);
        String transformedResponse = transformToString(responseDocument);
        if (transformedResponse == null) {
            return response;
        }
        return Response.Builder.like(response)
                .but()
                .body(transformedResponse)
                .build();
    }

    private Document injectValuesFromRequest(Request request) {
        // NOTE: according to quickscan:
        // the "time" element in the MLP is the time the MME reports cell_id to GMLC (NLG),
        // NOT the time when the MME got the cell_id.
        LocalDateTime now = LocalDateTime.now();
        Document responseTemplate = readDocument(SLIA_FILE);
        Document requestDocument = readDocumentFromBytes(request.getBody());
        if (responseTemplate == null || requestDocument == null) {
            return null;
        }
        try {
            injectIMSI(responseTemplate, requestDocument);
            injectTime(responseTemplate, now);
        } catch (XPathExpressionException e) {
            LOG.error("Cannot parse XPath expression {}. Cause: ", REQ_IMSI_XPATH, e);
        }
        return responseTemplate;
    }

    private Document readDocument(String inputStreamPath) {
        try {
            DocumentBuilder builder = DOCUMENT_BUILDER_FACTORY.newDocumentBuilder();
            // ignore missing DTDs
            builder.setEntityResolver((publicId, systemId) -> {
                if (systemId.contains(COMPANY_MLP_320_SLIA_EXTENSION_DTD) ||
                        systemId.contains(MLP_SVC_RESULT_320_DTD)) {
                    return new InputSource(new StringReader(""));
                } else {
                    return null;
                }
            });
            return builder.parse(this.getClass().getResourceAsStream(inputStreamPath));
        } catch (Exception e) {
            LOG.error("Cannot construct document from resource path. ", e);
            return null;
        }
    }

    private Document readDocumentFromBytes(byte[] array) {
        try {
            DocumentBuilder builder = DOCUMENT_BUILDER_FACTORY.newDocumentBuilder();
            // ignore missing DTDs
            builder.setEntityResolver((publicId, systemId) -> {
                if (systemId.contains(COMPANY_MLP_320_SLIA_EXTENSION_DTD) ||
                        systemId.contains(MLP_SVC_RESULT_320_DTD)) {
                    return new InputSource(new StringReader(""));
                } else {
                    return null;
                }
            });
            return builder.parse(new ByteArrayInputStream(array));
        } catch (Exception e) {
            LOG.error("Cannot construct document from byte array. ", e);
            return null;
        }
    }

    private XPath newXPath() {
        return XPathFactory.newInstance().newXPath();
    }

    private void injectTime(Document responseTemplate, LocalDateTime now) throws XPathExpressionException {
        for (String timeXPath : RES_TIME_XPATHS) {
            Node timeTarget = (Node) (newXPath().evaluate(timeXPath, responseTemplate, XPathConstants.NODE));
            if (timeTarget != null) {
                // set offset in attribute
                Node offset = timeTarget.getAttributes().getNamedItem(UTC_OFF);
                offset.setNodeValue(getOffsetString());
                // set value
                timeTarget.setTextContent(TIME_FORMAT.format(now));
            }
        }
    }

    private void injectIMSI(Document responseTemplate, Document requestDocument) throws XPathExpressionException {
        Node imsiSource = (Node) (newXPath().evaluate(REQ_IMSI_XPATH, requestDocument, XPathConstants.NODE));
        String imsi = imsiSource.getTextContent();
        for (String xpath : RES_IMSI_XPATHS) {
            Node imsiTarget = (Node) (newXPath().evaluate(xpath, responseTemplate, XPathConstants.NODE));
            if (imsiTarget != null) {
                imsiTarget.setTextContent(imsi);
            }
        }
    }

    private String transformToString(Document document) {
        if (document == null) {
            return null;
        }
        // make the document standalone, so we can avoid outputting standalone="no" in the first line
        document.setXmlStandalone(true);
        TransformerFactory tf = TransformerFactory.newInstance();
        Transformer trans;
        try {
            trans = tf.newTransformer();
            trans.setOutputProperty(OutputKeys.INDENT, "no"); // no extra indent; the file already has an indent of 4
            // cannot find a workaround to inject the DTD in the doctype line. TODO
            //trans.setOutputProperty(OutputKeys.DOCTYPE_SYSTEM, "MLP_SVC_RESULT_320.DTD [<!ENTITY % extension SYSTEM \"company_mlp320_slia_extension.dtd\"> %extension;]");
            StringWriter sw = new StringWriter();
            trans.transform(new DOMSource(document), new StreamResult(sw));
            // Whitespace between tags is treated as text nodes, so when outputting we need to remove the extra empty lines
            return sw.toString().replaceAll("\\n\\s*\\n", "\n");
        } catch (TransformerException e) {
            LOG.error("Cannot transform response document to String. ", e);
            return null;
        }
    }

    /**
     * Compare the system default timezone with UTC and get the zone offset in the form (+/-)XXXX.
     * Dependent on the machine's default timezone/locale.
     * @return the offset string, e.g. "+0800"
     */
    private String getOffsetString() {
        // getting the offset in (+/-)XX:XX format, or "Z" if it is UTC
        String offset = ZonedDateTime.ofInstant(Instant.now(), ZoneId.systemDefault()).getOffset().toString();
        if (offset.equals("Z")) {
            return "+0000";
        }
        return offset.replace(":", "");
    }
}
And use it like this:
Package it as a (non-runnable) JAR with mvn package and put it next to the WireMock standalone JAR, for example in a libs/ directory.
Run this (the whole command on one line):
java -cp libs/* com.github.tomakehurst.wiremock.standalone.WireMockServerRunner --extensions com.company.department.app.extensions.NLGResponseTransformer --https-port 8443 --verbose
Note that the app JAR containing this transformer and the WireMock standalone JAR must both be on the classpath, along with the other dependencies under libs/. (I use the jib Maven plugin, which copies all dependencies under libs/; I also move the app and WireMock JARs to libs/, so I can just pass -cp libs/*.) If that doesn't work, try specifying the locations of those two JARs in -cp explicitly. Beware that WireMock runs fine even when the extension class is not found, so it may help to add some logging.
You can use --root-dir to point at the stub files root, for example --root-dir resources/stubs in my case. By default it points to . (the directory where java runs).
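For orientation, the resulting layout might look roughly like this (the JAR names are placeholders, not the real ones):

libs/
    wiremock-standalone.jar    (the WireMock standalone JAR)
    app-extensions.jar         (contains NLGResponseTransformer plus the classpath copy of /stubs/__files/slia.xml)
resources/stubs/
    mappings/                  (mapping definitions like the one above)
    __files/slia.xml           (served by WireMock via bodyFileName)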

BDD JBehave stories result in Pending when executed

Recently I started working on BDD using JBehave.
If I run with Maven, the project builds successfully and execution reaches the story file, but then it doesn't proceed further. I tried running with JUnit, but I get the same result.
I think my problem is with the runner file. I searched many sites, including jbehave.org and many Stack Overflow questions, but in vain.
Help me get out of this problem. Let me know if you need any additional information.
I spent a lot of time on this but couldn't find the solution.
Here is my runner file:
package runnerFile;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.jbehave.core.configuration.Configuration;
import org.jbehave.core.configuration.MostUsefulConfiguration;
import org.jbehave.core.io.CodeLocations;
import org.jbehave.core.io.LoadFromClasspath;
import org.jbehave.core.io.StoryFinder;
import org.jbehave.core.junit.JUnitStories;
import org.jbehave.core.junit.JUnitStory;
import org.jbehave.core.reporters.Format;
import org.jbehave.core.reporters.StoryReporterBuilder;
import org.jbehave.core.steps.InjectableStepsFactory;
import org.jbehave.core.steps.InstanceStepsFactory;
import org.jbehave.core.steps.ScanningStepsFactory;
import org.jbehave.core.steps.Steps;

public class TestRunner extends JUnitStories {

    @Override
    public Configuration configuration() {
        return new MostUsefulConfiguration()
                .useStoryLoader(
                        new LoadFromClasspath(this.getClass().getClassLoader()))
                .useStoryReporterBuilder(
                        new StoryReporterBuilder()
                                .withDefaultFormats()
                                .withFormats(Format.HTML, Format.CONSOLE)
                                .withRelativeDirectory("jbehave-report")
                );
    }

    @Override
    public InjectableStepsFactory stepsFactory() {
        // ArrayList<Object> stepFileList = new ArrayList<Object>();
        ArrayList<Steps> stepFileList = new ArrayList<Steps>();
        stepFileList.add(new Steps(configuration()));
        return new InstanceStepsFactory(configuration(), stepFileList);
        // return new ScanningStepsFactory(configuration(), "org.jbehave.examples.core.steps",
        //         "my.other.steps").matchingNames(".*Steps").notMatchingNames(".*SkipSteps");
    }

    @Override
    protected List<String> storyPaths() {
        return new StoryFinder()
                .findPaths(CodeLocations.codeLocationFromClass(this.getClass()),
                        Arrays.asList("**/TC_2.story"),
                        Arrays.asList(""));
    }
}
I kept my story file inside src/test/resources and my step definitions inside src/test/java.
story (src/test/resources):
Narrative:
In order to communicate effectively to the business some functionality
As a development team
I want to use Behaviour-Driven Development

Scenario: A scenario is a collection of executable steps of different type

Given I launch the url
When I login with username <Username> and password <Password>
Then I should see the homepage

Examples:
|Username|Password|
|test@gmail.com|test1234|
step definition (src/test/java):
package definition;

import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Named;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;
import pages.Homepage_Pages;

public class HomePage {

    Homepage_Pages home;

    @Given("I launch the url")
    public void url() {
        home.launchUrl();
    }

    @When("I login with username <Username> and password <Password>")
    public void login(@Named("Username") String Username, @Named("Password") String Password) {
        System.out.println(Username);
    }

    @Then("I should see the homepage")
    public void homePageVerification() {
        System.out.println("Heello");
    }
}
Try the following code. It's a stripped-down, simple test runner that does nothing fancy: it runs all stories found in sub-folders of the main folder and includes all step classes found in the defined steps-file location. My original had a lot of these things hard-coded, but I changed them to final Strings, so it should be easy enough to adapt to your situation and run with this file. Obviously, change "com.yourpackage.steps" to whatever package your steps files are in. Hope this helps.
package testrunner;

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import org.jbehave.core.configuration.Configuration;
import org.jbehave.core.configuration.MostUsefulConfiguration;
import org.jbehave.core.embedder.EmbedderControls;
import org.jbehave.core.io.CodeLocations;
import org.jbehave.core.io.StoryFinder;
import org.jbehave.core.junit.JUnitStories;
import org.jbehave.core.reporters.CrossReference;
import org.jbehave.core.reporters.Format;
import org.jbehave.core.reporters.StoryReporterBuilder;
import org.jbehave.core.steps.InjectableStepsFactory;
import org.jbehave.core.steps.InstanceStepsFactory;
import org.junit.runner.RunWith;
import de.codecentric.jbehave.junit.monitoring.JUnitReportingRunner;

@RunWith(JUnitReportingRunner.class)
public class TestRunner extends JUnitStories {

    private Configuration configuration;

    public TestRunner() {
        super();
        CrossReference crossReference = new CrossReference();
        configuration = new MostUsefulConfiguration();
        configuration.useStoryReporterBuilder(
                new StoryReporterBuilder().withFormats(Format.HTML, Format.STATS, Format.CONSOLE)
                        .withCodeLocation(CodeLocations.codeLocationFromPath("target/."))
                        .withCrossReference(crossReference));

        EmbedderControls embedderControls = configuredEmbedder().embedderControls();
        embedderControls.doBatch(false);
        embedderControls.doGenerateViewAfterStories(true);
        embedderControls.doSkip(false);
        embedderControls.doVerboseFailures(true);
        embedderControls.doVerboseFiltering(true);
        embedderControls.useThreads(1);
        embedderControls.useStoryTimeouts("1800");
    }

    @Override
    protected List<String> storyPaths() {
        return new StoryFinder().findPaths(CodeLocations.codeLocationFromClass(this.getClass()), "**/*.story", "");
    }

    @Override
    public Configuration configuration() {
        return configuration;
    }

    @Override
    public InjectableStepsFactory stepsFactory() {
        final String stepsPackage = "com.yourpackage.steps";
        final String stepsLoc = "src/test/java/" + stepsPackage.replace(".", "/");
        List<Object> stepList = new ArrayList<Object>();
        File steps = new File(stepsLoc);
        File[] fileList = steps.listFiles();
        int size = fileList.length;
        for (int i = 0; i < size; i++) {
            if (fileList[i].isFile()) { // listFiles() also returns folders (directories)
                String value = fileList[i].getName().replace(".java", ""); // strip extensions
                if (!value.toLowerCase().contains("testrunner")) { // ignore the test runner itself
                    try {
                        Object stepObject = Class.forName((stepsPackage + "." + value)).newInstance();
                        stepList.add(stepObject);
                    } catch (InstantiationException e) {
                        e.printStackTrace();
                    } catch (IllegalAccessException e) {
                        e.printStackTrace();
                    } catch (ClassNotFoundException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
        return new InstanceStepsFactory(configuration(), stepList);
    }
}
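As an aside, the likely reason the original runner left every step in PENDING is that its stepsFactory() registers a bare Steps instance instead of the actual step classes, so JBehave finds no matching step candidates. A minimal sketch of that fix, assuming the HomePage step class from the question:

// Register the real step definition class so JBehave can match story
// steps against its @Given/@When/@Then annotations.
@Override
public InjectableStepsFactory stepsFactory() {
    return new InstanceStepsFactory(configuration(), new HomePage());
}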

incompatible types: Object cannot be converted to CoreLabel

I'm trying to use the Stanford tokenizer with the following example from their website:
import java.io.FileReader;
import java.io.IOException;
import java.util.List;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.ling.HasWord;
import edu.stanford.nlp.process.CoreLabelTokenFactory;
import edu.stanford.nlp.process.DocumentPreprocessor;
import edu.stanford.nlp.process.PTBTokenizer;

public class TokenizerDemo {
    public static void main(String[] args) throws IOException {
        for (String arg : args) {
            // option #1: By sentence.
            DocumentPreprocessor dp = new DocumentPreprocessor(arg);
            for (List sentence : dp) {
                System.out.println(sentence);
            }
            // option #2: By token
            PTBTokenizer ptbt = new PTBTokenizer(new FileReader(arg),
                    new CoreLabelTokenFactory(), "");
            for (CoreLabel label; ptbt.hasNext(); ) {
                label = ptbt.next();
                System.out.println(label);
            }
        }
    }
}
and I get the following error when I try to compile it:
TokenizerDemo.java:24: error: incompatible types: Object cannot be converted to CoreLabel
label = ptbt.next();
Does anyone know what the reason might be? In case you are interested, I'm using Java 1.8 and made sure that CLASSPATH contains the jar file.
Try parameterizing the PTBTokenizer class. For example:
PTBTokenizer<CoreLabel> ptbt = new PTBTokenizer<>(new FileReader(arg),
        new CoreLabelTokenFactory(), "");
