I have a slight issue with step customization. I want to add some specific data from the Spring context to the JBehave report after a step executes successfully. E.g. I have the step:
When login as random user
If all went well, I want to see something like this in the report:
When login as random user (%username%)
I found out how to execute logic before/after a story or scenario, but I can't find the correct way to add logic after a step, or how to customize/extend basic JBehave steps.
Thank you in advance.
Use the StoryReporter API:
import org.jbehave.core.reporters.NullStoryReporter;
public class MyCustomStoryReporter extends NullStoryReporter {
@Override
public void beforeStep(String step) {
// add "before-step" logic here
}
@Override
public void successful(String step) {
// add "after-passed-step" logic here
}
@Override
public void failed(String step, Throwable cause) {
// add "after-failed-step" logic here
}
}
More information on StoryReporter and its configuration can be found in the official documentation: Reporting Stories
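How the reporter gets registered depends on your runner setup. As a rough sketch (the embedder wiring below is illustrative and not taken from the original answer), it can be added through a StoryReporterBuilder on the Configuration; the successful(...) callback then fires after every passing step:
import org.jbehave.core.configuration.MostUsefulConfiguration;
import org.jbehave.core.embedder.Embedder;
import org.jbehave.core.reporters.StoryReporterBuilder;

public class RunnerWithCustomReporter {
    public static void main(String[] args) {
        Embedder embedder = new Embedder();
        embedder.useConfiguration(new MostUsefulConfiguration()
                .useStoryReporterBuilder(new StoryReporterBuilder()
                        // keep your usual report formats and add the custom reporter on top
                        .withReporters(new MyCustomStoryReporter())));
        // then run your stories as usual, e.g. embedder.runStoriesAsPaths(storyPaths)
    }
}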
I'm using XQuery 3.0 to transform an incoming message to fit my system.
The XQuery is called from an Apache Camel Route via the transform EIP.
Example:
transform().xquery("resource:classpath:xquery/myxquery.xquery",String.class)
While the transformation works without problems, it would be nice, since it's partly very complex, to be able to log some information directly during the transformation process.
So I wanted to ask whether it is possible to log "into" Logback directly from XQuery.
I already searched Stack Overflow and of course https://www.w3.org/TR/xquery-30-use-cases/ and other sources, but I just couldn't find any information about how to log in XQuery.
My project structure is:
Spring-Boot 2 application
Apache-Camel as Routing framework
Logback as Logging framework
Update: For the integration of XQuery into the Apache Camel framework I use org.apache.camel:camel-saxon-starter:2.22.2.
Update: Because the use of fn:trace was kind of ugly, I searched further and now use Saxon's extension mechanism to provide different logging functions which can be accessed from XQuery:
For more information see the documentation: http://www.saxonica.com/documentation/#!extensibility/integratedfunctions/ext-full-J
Here is what I did for logging (tested with Saxon-HE; Camel is not mandatory, it just happens to be what I use):
First step:
Extend the class net.sf.saxon.lib.ExtensionFunctionDefinition
public class XQueryInfoLogFunctionDefinition extends ExtensionFunctionDefinition {
private static final Logger log = LoggerFactory.getLogger(XQueryInfoLogFunctionDefinition.class);
private final XQueryInfoExtensionFunctionCall functionCall = new XQueryInfoExtensionFunctionCall();
private static final String PREFIX = "log";
@Override
public StructuredQName getFunctionQName() {
return new StructuredQName(PREFIX, "http://thehandofnod.com/saxon-extension", "info");
}
@Override
public SequenceType[] getArgumentTypes() {
return new SequenceType[] { SequenceType.SINGLE_STRING };
}
@Override
public SequenceType getResultType(SequenceType[] suppliedArgumentTypes) {
return SequenceType.VOID;
}
@Override
public ExtensionFunctionCall makeCallExpression() {
return functionCall;
}
}
Second step:
Implement the FunctionCall class
public class XQueryInfoExtensionFunctionCall extends ExtensionFunctionCall {
private static final Logger log = LoggerFactory.getLogger(XQueryInfoExtensionFunctionCall.class);
@Override
public Sequence call(XPathContext context, Sequence[] arguments) throws XPathException {
if (arguments != null && arguments.length > 0) {
log.info(((StringValue) arguments[0]).getStringValue());
} else
throw new IllegalArgumentException("We need a message");
return EmptySequence.getInstance();
}
}
Third step:
Configure the Saxon Configuration and bind it into the Camel context:
public static void main(String... args) throws Exception {
Main main = new Main();
Configuration saxonConfig = Configuration.newConfiguration();
saxonConfig.registerExtensionFunction(new XQueryInfoLogFunctionDefinition());
main.bind("saxonConfig", saxonConfig);
main.addRouteBuilder(new MyRouteBuilder());
main.run(args);
}
Fourth step:
Define the SaxonConfig in your XQueryEndpoint:
.to("xquery:test.xquery?configuration=#saxonConfig");
Fifth step:
Call it in your XQuery:
declare namespace log="http://thehandofnod.com/saxon-extension";
log:info("Das ist ein INFO test")
Original post, a.k.a. how to overwrite the fn:trace function:
Thanks to Martin Honnen I tried the fn:trace function. The problem was that by default it logs to the System.err PrintStream, which is not what I wanted, because I wanted to combine fn:trace with the Logback logging framework.
So I debugged the net.sf.saxon.functions.Trace methods and came to the following solution for my project setup.
Write a custom TraceListener which extends net.sf.saxon.trace.XQueryTraceListener and implement the methods enter and leave in such a way that the InstructionInfo with constructType == 2041 (for user-trace) is forwarded to the SLF4J API. Example (logging only the message):
@Override
public void enter(InstructionInfo info, XPathContext context) {
// no call to super to keep it simple.
String nachricht = (String) info.getProperty("label");
if (info.getConstructType() == 2041 && StringUtils.hasText(nachricht)) {
getLogger().info(nachricht);
}
}
@Override
public void leave(InstructionInfo info) {
// no call to super to keep it simple.
}
Set the custom trace listener on your net.sf.saxon.Configuration bean via setTraceListener (see the sketch after these steps).
Call your XQuery file from Camel via the XQueryEndpoint, because only there is it possible to override the Configuration with an option: .to("xquery:/xquery/myxquery.xquery?configuration=#saxonConf"). Unfortunately, transform().xquery(...) uses its own objects without any possibility to configure them.
Call {fn:trace($element/text(),"This is a tracing test")} in your XQuery and see the message in your log.
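For illustration, the Configuration wiring for this TraceListener approach might look like the sketch below, mirroring the main method from the extension-function approach. MyXQueryTraceListener stands in for the custom listener described above, and whether setCompileWithTracing is needed may depend on the Saxon version, so treat this as an assumption rather than a recipe.
Configuration saxonConf = Configuration.newConfiguration();
saxonConf.setTraceListener(new MyXQueryTraceListener()); // the custom XQueryTraceListener subclass
saxonConf.setCompileWithTracing(true); // presumably required so that trace events reach the listener
main.bind("saxonConf", saxonConf); // referenced as #saxonConf in the XQueryEndpoint URI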
I'm writing a little extension that tells me in my log when a test starts, so I know which logs are related to which tests:
public class LoggingExtension implements Extension, BeforeEachCallback, AfterTestExecutionCallback {
protected final Logger log = LoggerFactory.getLogger(getClass());
@Override
public void beforeEach(final ExtensionContext context) throws Exception {
log.info("-- Test #before: {}::{} ----------------------------------------",
context.getDisplayName(),
context.getTestClass().map(x -> x.getSimpleName()).orElse("no test class available"));
}
@Override
public void afterTestExecution(final ExtensionContext context) throws Exception {
context.getExecutionException()
.ifPresent(ex -> {
log.error("-- Test #after: {}::{} ----------------------------------------",
context.getDisplayName(),
context.getTestClass().map(x -> x.getSimpleName()).orElse("no test class available"),
ex);
// log.error("", ex);
});
}
}
I wanted to change this like so:
Log -- Test #start: ... when the test itself starts (i.e. use BeforeTestExecutionCallback)
and use the BeforeEachCallback to mark the start of @BeforeEach execution(s), but only if there is actually before-code being executed, so as to avoid clutter.
So the question is: how can I tell if there are actually 1..n @BeforeEach methods that are being executed?
I investigated the ExtensionContext but came up empty.
So the question is: how can I tell if there are actually 1..n @BeforeEach methods that are being executed?
As of JUnit Jupiter 5.4, there is no official way to find that out. That information is not exposed in any user-facing API: it's internal to the JUnit Jupiter TestEngine.
However, the new InvocationInterceptor extension API coming in JUnit Jupiter 5.5 will provide a way to determine whether a @BeforeEach method is about to be executed.
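As a hedged sketch of what that could look like once 5.5 is available (the class name and log format are illustrative, not an official example):
import java.lang.reflect.Method;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.InvocationInterceptor;
import org.junit.jupiter.api.extension.ReflectiveInvocationContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class BeforeEachAwareExtension implements InvocationInterceptor {

    private static final Logger log = LoggerFactory.getLogger(BeforeEachAwareExtension.class);

    @Override
    public void interceptBeforeEachMethod(Invocation<Void> invocation,
            ReflectiveInvocationContext<Method> invocationContext,
            ExtensionContext extensionContext) throws Throwable {
        // this interceptor is only invoked if there actually is a @BeforeEach method
        log.info("-- Test #before: {} (about to run {}) --",
                extensionContext.getDisplayName(),
                invocationContext.getExecutable().getName());
        invocation.proceed();
    }
}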
I've been trying to override Stormpath's RequestEventListenerAdapter methods to populate an account's Custom Data when the user logs in or creates an account.
I created a class that extends RequestEventListenerAdapter and am trying to override on(SuccessfulAuthenticationRequestEvent) and on(LogoutRequestEvent) to write some simple output to the console to test whether they are working (a simple "Hello world!", for example). But when I perform any of these actions in the application, none of these events are triggered. So I was wondering if anyone here could help me out: I'm not sure if the bean I'm supposed to declare is in the right place, or if I'm missing some kind of configuration for the events to trigger. Thanks for any help, and let me know if more information is needed.
This is my custom class:
import com.stormpath.sdk.servlet.authc.LogoutRequestEvent;
import com.stormpath.sdk.servlet.authc.SuccessfulAuthenticationRequestEvent;
import com.stormpath.sdk.servlet.event.RequestEventListenerAdapter;
public class CustomRequestEventListener extends RequestEventListenerAdapter {
@Override
public void on(SuccessfulAuthenticationRequestEvent e) {
System.out.println("Received successful authentication request event: {}\n" + e);
}
@Override
public void on(LogoutRequestEvent e) {
System.out.println("Received logout request event: {}\n" + e);
}
}
This is the bean that I'm not sure where to place:
@Bean
public RequestEventListener stormpathRequestEventListener() {
return new CustomRequestEventListener();
}
What you are doing looks exactly right. I have created a sample project demonstrating how to get things working. You could take a look at it (it is very simple) and compare it with what you have.
I also added instructions on how to get it running so you can see that it does indeed work.
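For reference, a minimal sketch of one place the bean declaration can live, assuming a Spring Boot setup and that RequestEventListener sits next to the adapter in com.stormpath.sdk.servlet.event (the configuration class name is illustrative):
import com.stormpath.sdk.servlet.event.RequestEventListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StormpathEventConfig {

    @Bean
    public RequestEventListener stormpathRequestEventListener() {
        // same bean method name as in the question above
        return new CustomRequestEventListener();
    }
}
If the events still don't fire, check that this class lives under a package covered by the application's component scan.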
In the olden days, we had ThreadLocal for programs to carry data along the request path, since all request processing was done on that thread, and things like Logback used this with MDC.put("requestId", getNewRequestId());
Then Scala, functional programming and Futures came along, and with them came Local.scala (at least I know the Twitter Futures have this class). Future.scala knows about Local.scala and transfers the context through all the map/flatMap etc. functionality, such that I can still do Local.set("requestId", getNewRequestId()); and then downstream, after it has travelled over many threads, I can still access it with Local.get(...).
Soooo, my question is: in Java, can I do the same thing with the new CompletableFuture, via some LocalContext or similar object (not sure of the name)? That way I could have the Logback MDC store the request id in that context instead of a ThreadLocal, so I don't lose the request id, and all my logs across thenApply, thenAccept, etc. would still work just fine with the %X{requestId} conversion in the Logback configuration.
EDIT:
As an example: if a request comes in and you are using Log4j or Logback, you will set MDC.put("requestId", requestId) in a filter, and then in your app you will log many statements like this:
log.info("request came in for url="+url);
log.info("request is complete");
Now, in the log output it will show this:
INFO {time}: requestId425 request came in for url=/mypath
INFO {time}: requestId425 request is complete
This uses a ThreadLocal trick to achieve it. At Twitter, we use Scala and Twitter Futures along with a Local.scala class. Local.scala and Future.scala are tied together such that we can still achieve the above scenario, which is very nice: all our log statements can log the request id, so the developer never has to remember to log it, and you can trace through a single customer's request/response cycle with that id.
I don't see this in Java :( which is very unfortunate, as there are many use cases for it. Perhaps there is something I am not seeing, though?
If you come across this, just poke the thread here
http://mail.openjdk.java.net/pipermail/core-libs-dev/2017-May/047867.html
asking to implement something like Twitter Futures, which transfer Locals (much like ThreadLocal, but the state is transferred along).
See the def respond() method in here and how it calls Local.save() and Local.restore():
https://github.com/simonratner/twitter-util/blob/master/util-core/src/main/scala/com/twitter/util/Future.scala
If the Java authors fixed this, then the MDC in Logback would work across all third-party libraries. Until then, IT WILL NOT WORK unless you can change the third-party library (and it's doubtful you can do that).
The theme of my solution (it works with JDK 9+, as a couple of overridable methods are exposed since that version) is to
make the complete ecosystem MDC-aware.
For that, we need to address the following scenarios:
Where do we get new instances of CompletableFuture from within this class? → We need to return an MDC-aware version instead.
Where do we get new instances of CompletableFuture from outside this class? → We need to return an MDC-aware version instead.
Which executor is used, and when, in the CompletableFuture class? → In all circumstances we need to make sure that all executors are MDC-aware.
For that, let's create an MDC-aware version of CompletableFuture by extending it. My version looks like this:
import org.slf4j.MDC;
import java.util.Map;
import java.util.concurrent.*;
import java.util.function.Function;
import java.util.function.Supplier;
public class MDCAwareCompletableFuture<T> extends CompletableFuture<T> {
public static final ExecutorService MDC_AWARE_ASYNC_POOL = new MDCAwareForkJoinPool();
@Override
public <U> CompletableFuture<U> newIncompleteFuture() {
return new MDCAwareCompletableFuture<>();
}
@Override
public Executor defaultExecutor() {
return MDC_AWARE_ASYNC_POOL;
}
public static <T> CompletionStage<T> getMDCAwareCompletionStage(CompletableFuture<T> future) {
return new MDCAwareCompletableFuture<>()
.completeAsync(() -> null)
.thenCombineAsync(future, (aVoid, value) -> value);
}
public static <T> CompletionStage<T> getMDCHandledCompletionStage(CompletableFuture<T> future,
Function<Throwable, T> throwableFunction) {
Map<String, String> contextMap = MDC.getCopyOfContextMap();
return getMDCAwareCompletionStage(future)
.handle((value, throwable) -> {
setMDCContext(contextMap);
if (throwable != null) {
return throwableFunction.apply(throwable);
}
return value;
});
}
}
The MDCAwareForkJoinPool class would look like this (I have skipped the methods with ForkJoinTask parameters for simplicity):
public class MDCAwareForkJoinPool extends ForkJoinPool {
//Override constructors which you need
@Override
public <T> ForkJoinTask<T> submit(Callable<T> task) {
return super.submit(MDCUtility.wrapWithMdcContext(task));
}
@Override
public <T> ForkJoinTask<T> submit(Runnable task, T result) {
return super.submit(MDCUtility.wrapWithMdcContext(task), result);
}
@Override
public ForkJoinTask<?> submit(Runnable task) {
return super.submit(MDCUtility.wrapWithMdcContext(task));
}
@Override
public void execute(Runnable task) {
super.execute(MDCUtility.wrapWithMdcContext(task));
}
}
The wrapping utility methods (here assumed to live in the MDCUtility class referenced above) would look like this:
public static <T> Callable<T> wrapWithMdcContext(Callable<T> task) {
//save the current MDC context
Map<String, String> contextMap = MDC.getCopyOfContextMap();
return () -> {
setMDCContext(contextMap);
try {
return task.call();
} finally {
// once the task is complete, clear MDC
MDC.clear();
}
};
}
public static Runnable wrapWithMdcContext(Runnable task) {
//save the current MDC context
Map<String, String> contextMap = MDC.getCopyOfContextMap();
return () -> {
setMDCContext(contextMap);
try {
task.run();
} finally {
// once the task is complete, clear MDC
MDC.clear();
}
};
}
public static void setMDCContext(Map<String, String> contextMap) {
MDC.clear();
if (contextMap != null) {
MDC.setContextMap(contextMap);
}
}
Below are some guidelines for usage:
Use the class MDCAwareCompletableFuture rather than the class CompletableFuture.
A couple of methods in the class CompletableFuture instantiate the class itself, as in new CompletableFuture.... For such methods (most of the public static methods), use an alternative way to get an instance of MDCAwareCompletableFuture. For example, rather than using CompletableFuture.supplyAsync(...), you can choose new MDCAwareCompletableFuture<>().completeAsync(...).
Convert an instance of CompletableFuture to MDCAwareCompletableFuture using the method getMDCAwareCompletionStage when you are stuck with one, for example because some external library returns you an instance of CompletableFuture. Obviously you can't retain the context within that library, but this method still retains the context once control returns to your application code.
When supplying an executor as a parameter, make sure that it is MDC-aware, such as MDCAwareForkJoinPool. You could also create an MDCAwareThreadPoolExecutor by overriding its execute method to serve your use case. You get the idea!
You can find a detailed explanation of all of the above in a post about the same topic.
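A small usage sketch of the guidelines above (the requestId key, the "hello" work and the class name are illustrative only):
import java.util.concurrent.CompletableFuture;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MDCAwareUsageExample {

    private static final Logger log = LoggerFactory.getLogger(MDCAwareUsageExample.class);

    public static void main(String[] args) {
        MDC.put("requestId", "req-425"); // normally done in a servlet filter

        CompletableFuture<String> greeting =
                new MDCAwareCompletableFuture<String>() // instead of CompletableFuture.supplyAsync(...)
                        .completeAsync(() -> "hello");  // runs on the MDC-aware pool

        greeting.thenAcceptAsync(value ->
                // the MDC captured on submission is restored on the pool thread,
                // so this log line should still carry the requestId
                log.info("computed {}", value))
                .join();
    }
}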
I have a flow that ends with sending a SOAP request. I'd like to write some kind of integration test for which I give 10 elements as input and, after they go through the flow, I validate that 4 requests were fired for the 4 elements I expect (the 6 others got filtered out and didn't make it through).
I'm using WebServiceTemplate, and I've read about MockWebServiceServer, but I am not sure it allows doing this out of the box. I'd like to maybe extend it, so that all sent requests are saved in a List that I can access to perform the assertions. I've looked at the source code of MockWebServiceServer / MockWebServiceMessageSender, but I don't see where I would do it.
Any ideas on how to achieve this?
Thanks
One way of doing this is to implement RequestMatcher, not extend MockWebServiceServer. Here's an example of the class:
public class NeverFailingRequestMatcherWithMemory implements RequestMatcher {
List<WebServiceMessage> sentRequests=new ArrayList<WebServiceMessage>();
@Override
public void match(URI uri, WebServiceMessage request) throws IOException, AssertionError {
sentRequests.add(request);
}
public void clearMemory(){
sentRequests.clear();
}
public List<WebServiceMessage> getSentRequests(){
return sentRequests;
}
}
And you use it like this in your tests:
NeverFailingRequestMatcherWithMemory matcherWithMemory=new NeverFailingRequestMatcherWithMemory();
@Before
public void configureMockWsServer() {
WebServiceTemplate usedWebServiceTemplate = appCtx.getBean(WebServiceTemplate.class);
mockServer = MockWebServiceServer.createServer(usedWebServiceTemplate);
matcherWithMemory.clearMemory();
}
And later in your tests, something like:
mockServer.expect(matcherWithMemory).andRespond(withPayload(someResponsePayload));
assertThat(matcherWithMemory.getSentRequests()).hasSize(1);
Then you have access to the requests that were sent and can parse them the way you want.
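For example, one way to turn a stored WebServiceMessage back into an XML string for assertions could be the following (a sketch using plain JAXP; the helper class name is mine):
import java.io.StringWriter;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import org.springframework.ws.WebServiceMessage;

public final class MessageAssertions {

    public static String payloadAsString(WebServiceMessage message) throws Exception {
        StringWriter writer = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(message.getPayloadSource(), new StreamResult(writer));
        return writer.toString();
    }
}
With that, an assertion such as assertThat(payloadAsString(matcherWithMemory.getSentRequests().get(0))).contains("expectedElement") becomes straightforward.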