ApacheConnector does not process request headers that were set in a WriterInterceptor - jersey

I am experiencing problems when configuring my Jersey Client with the ApacheConnector. It seems to ignore all request headers that I define in a WriterInterceptor. I can tell that the WriterInterceptor is called when I set a break point within WriterInterceptor#aroundWriteTo(WriterInterceptorContext). In contrast, I can observe that the modification of the entity InputStream is preserved.
Here is a runnable example demonstrating my problem:
public class ApacheConnectorProblemDemonstration extends JerseyTest {

    private static final Logger LOGGER = Logger.getLogger(JerseyTest.class.getName());
    private static final String QUESTION = "baz", ANSWER = "qux";
    private static final String REQUEST_HEADER_NAME_CLIENT = "foo-cl", REQUEST_HEADER_VALUE_CLIENT = "bar-cl";
    private static final String REQUEST_HEADER_NAME_INTERCEPTOR = "foo-ic", REQUEST_HEADER_VALUE_INTERCEPTOR = "bar-ic";
    private static final int MAX_CONNECTIONS = 100;
    private static final String PATH = "/";

    @Path(PATH)
    public static class TestResource {

        @POST
        public String handle(InputStream questionStream,
                             @HeaderParam(REQUEST_HEADER_NAME_CLIENT) String client,
                             @HeaderParam(REQUEST_HEADER_NAME_INTERCEPTOR) String interceptor)
                throws IOException {
            assertEquals(REQUEST_HEADER_VALUE_CLIENT, client);
            // Here, the header that was set in the client's writer interceptor is lost.
            assertEquals(REQUEST_HEADER_VALUE_INTERCEPTOR, interceptor);
            // However, the input stream got gzipped, so the WriterInterceptor was partly applied.
            assertEquals(QUESTION, new Scanner(new GZIPInputStream(questionStream)).nextLine());
            return ANSWER;
        }
    }

    @Provider
    @Priority(Priorities.ENTITY_CODER)
    public static class ClientInterceptor implements WriterInterceptor {

        @Override
        public void aroundWriteTo(WriterInterceptorContext context)
                throws IOException, WebApplicationException {
            context.getHeaders().add(REQUEST_HEADER_NAME_INTERCEPTOR, REQUEST_HEADER_VALUE_INTERCEPTOR);
            context.setOutputStream(new GZIPOutputStream(context.getOutputStream()));
            context.proceed();
        }
    }

    @Override
    protected Application configure() {
        enable(TestProperties.LOG_TRAFFIC);
        enable(TestProperties.DUMP_ENTITY);
        return new ResourceConfig(TestResource.class);
    }

    @Override
    protected Client getClient(TestContainer tc, ApplicationHandler applicationHandler) {
        ClientConfig clientConfig = tc.getClientConfig() == null ? new ClientConfig() : tc.getClientConfig();
        clientConfig.property(ApacheClientProperties.CONNECTION_MANAGER, makeConnectionManager(MAX_CONNECTIONS));
        clientConfig.register(ClientInterceptor.class);
        // If I do not use the Apache connector, I avoid this problem.
        clientConfig.connector(new ApacheConnector(clientConfig));
        if (isEnabled(TestProperties.LOG_TRAFFIC)) {
            clientConfig.register(new LoggingFilter(LOGGER, isEnabled(TestProperties.DUMP_ENTITY)));
        }
        configureClient(clientConfig);
        return ClientBuilder.newClient(clientConfig);
    }

    private static ClientConnectionManager makeConnectionManager(int maxConnections) {
        PoolingClientConnectionManager connectionManager = new PoolingClientConnectionManager();
        connectionManager.setMaxTotal(maxConnections);
        connectionManager.setDefaultMaxPerRoute(maxConnections);
        return connectionManager;
    }

    @Test
    public void testInterceptors() throws Exception {
        Response response = target(PATH)
                .request()
                .header(REQUEST_HEADER_NAME_CLIENT, REQUEST_HEADER_VALUE_CLIENT)
                .post(Entity.text(QUESTION));
        assertEquals(200, response.getStatus());
        assertEquals(ANSWER, response.readEntity(String.class));
    }
}
I want to use the ApacheConnector in order to optimize for concurrent requests via the PoolingClientConnectionManager. Did I mess up the configuration?
PS: The exact same problem occurs when using the GrizzlyConnector.

After further research, I assume that this is rather a misbehavior of the default Connector, which uses a HttpURLConnection. As I explained in this other self-answered question of mine, the documentation states:
Whereas filters are primarily intended to manipulate request and response parameters like HTTP headers, URIs and/or HTTP methods, interceptors are intended to manipulate entities, via manipulating entity input/output streams.
A WriterInterceptor is not supposed to manipulate header values, while a {Client,Server}RequestFilter is not supposed to manipulate the entity stream. If you need to do both, the two components should be bundled within a javax.ws.rs.core.Feature or within the same class implementing both interfaces. (This can be problematic if you need to set two different Priority values, though.)
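For illustration, here is a minimal sketch of such a Feature; the class name and the inline filter/interceptor are made up, and they simply mirror the header and gzip logic from the question:
import java.io.IOException;
import java.util.zip.GZIPOutputStream;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientRequestFilter;
import javax.ws.rs.core.Feature;
import javax.ws.rs.core.FeatureContext;
import javax.ws.rs.ext.WriterInterceptor;
import javax.ws.rs.ext.WriterInterceptorContext;

// Hypothetical Feature that bundles the header-setting filter with the
// stream-wrapping interceptor so both are always registered together.
public class GzipWithHeaderFeature implements Feature {

    @Override
    public boolean configure(FeatureContext context) {
        // Headers belong in a (client) request filter ...
        context.register(new ClientRequestFilter() {
            @Override
            public void filter(ClientRequestContext requestContext) throws IOException {
                requestContext.getHeaders().add("foo-ic", "bar-ic");
            }
        });
        // ... while the entity stream is wrapped in a writer interceptor.
        context.register(new WriterInterceptor() {
            @Override
            public void aroundWriteTo(WriterInterceptorContext ctx) throws IOException, WebApplicationException {
                ctx.setOutputStream(new GZIPOutputStream(ctx.getOutputStream()));
                ctx.proceed();
            }
        });
        return true;
    }
}
The client would then register only the feature, for example clientConfig.register(GzipWithHeaderFeature.class), so the header manipulation and the entity manipulation always travel together.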
All of this is very unfortunate, though, since JerseyTest uses the HttpURLConnection-based Connector, so all my unit tests succeeded while the real application misbehaved because it was configured with an ApacheConnector. Also, rather than silently suppressing changes, I wish Jersey would throw an exception. (This is a general issue I have with Jersey. When I, for example, used a too-new version of the ClientConnectionManager, where the interface had been renamed to HttpClientConnectionManager, I was merely informed in a one-line log statement that all my configuration efforts were being ignored. I did not discover this log statement until very late in development.)

Related

Using DelegatingSessionFactory with RemoteFileTemplate.execute(SessionCallback)

I'm trying to declare multiple SFTP sessions, wrap them in a DelegatingSessionFactory, then later use SftpRemoteFileTemplate.execute(...) during a cron job.
On the execute part of things, the code is very simple; it is already used for a single session, but I want to expand it to multiple possible sessions.
Below I extended my single session code. I just copied the methods for reference. At the end I'll show how I think the new methods should look.
public class XSession extends SftpSession {

    @Scheduled(cron = "${sftp.scan.x.schedule}")
    void scan() {
        List<FileHistoryEntity> fileList = template.execute(this::processFiles);
        ...
    }

    private List<FileHistoryEntity> processFiles(Session<ChannelSftp.LsEntry> session) {
        List.of(session.list(this.remoteDir)).forEach(file -> doWhatever());
        ...
    }
}
But now I have multiple sessions. So I declare the following class:
@Slf4j
@Configuration
@RequiredArgsConstructor
public class DelegateSftpSessionHandler {

    private final SessionFactory<ChannelSftp.LsEntry> session1;
    private final SessionFactory<ChannelSftp.LsEntry> session2;
    private final SessionFactory<ChannelSftp.LsEntry> session3;
    private final SessionFactory<ChannelSftp.LsEntry> session4;
    private final SessionFactory<ChannelSftp.LsEntry> session5;

    @RequiredArgsConstructor
    public enum DelegateSessionConfig {
        SESSION_1("IN_REALITY_A_RELEVANT_NAME_1"),
        SESSION_2("IN_REALITY_A_RELEVANT_NAME_2"),
        SESSION_3("IN_REALITY_A_RELEVANT_NAME_3"),
        SESSION_4("IN_REALITY_A_RELEVANT_NAME_4"),
        SESSION_5("IN_REALITY_A_RELEVANT_NAME_5");

        public final String threadKey;
    }

    @Bean
    @Primary
    public DelegatingSessionFactory<ChannelSftp.LsEntry> delegatingSessionFactory() {
        Map<Object, SessionFactory<ChannelSftp.LsEntry>> sessionMap = new HashMap<>();
        sessionMap.put(DelegateSessionConfig.SESSION_1.threadKey, session1);
        sessionMap.put(DelegateSessionConfig.SESSION_2.threadKey, session2);
        sessionMap.put(DelegateSessionConfig.SESSION_3.threadKey, session3);
        sessionMap.put(DelegateSessionConfig.SESSION_4.threadKey, session4);
        sessionMap.put(DelegateSessionConfig.SESSION_5.threadKey, session5);
        DefaultSessionFactoryLocator<ChannelSftp.LsEntry> sessionLocator = new DefaultSessionFactoryLocator<>(sessionMap);
        return new DelegatingSessionFactory<>(sessionLocator);
    }

    @Bean
    SftpRemoteFileTemplate ftpRemoteFileTemplate(DelegatingSessionFactory<ChannelSftp.LsEntry> dsf) {
        return new SftpRemoteFileTemplate(dsf);
    }
}
Thing is, I have no idea how any of this works, and the Spring SFTP/FTP documentation is by no means clear. The code is virtually undocumented, and I'm just guessing. I think that I have to do the following:
public class XSession extends SftpSession {

    @Autowired
    DelegatingSessionFactory<ChannelSftp.LsEntry> delegatingSessionFactory;

    @Autowired
    SftpRemoteFileTemplate template;

    @Scheduled(cron = "${sftp.scan.x.schedule}") // x == SESSION_1
    @Async // for thread key
    void scan() {
        delegatingSessionFactory.setThreadKey(DelegateSessionConfig.SESSION_1.threadKey);
        // because the thread key changes the session globally? So I don't need to specify
        // which session this template is working with???
        List<FileHistoryEntity> fileList = template.execute(this::processFiles);
        ...
        delegatingSessionFactory.clearThreadKey();
    }

    private List<FileHistoryEntity> processFiles(Session<ChannelSftp.LsEntry> session) {
        List.of(session.list(this.remoteDir)).forEach(file -> doWhatever());
        ...
    }
}
I'm basing this on the following link: a Spring Integration test on GitHub.
Honestly, I hardly understand what is happening, but it seems like setting the thread key changes the session globally.
My only other idea is to just ... create the RemoteFileTemplate on demand
public static SftpRemoteFileTemplate getTemplateFor(DelegatingSessionFactory<ChannelSftp.LsEntry> dsf, DelegateSessionConfig session) {
    return new SftpRemoteFileTemplate(dsf.getFactoryLocator().getSessionFactory(session.threadKey));
}
It does not set it globally. That's how a ThreadLocal variable works: you set a value in some thread and only that thread can see it. If you use the same object concurrently, other threads don't see that value because it does not belong to their thread state.
Not sure what your concern is, but the pattern of extending SftpSession for custom logic is not right. You should consider using SftpRemoteFileTemplate.execute(SessionCallback<F, T> callback) instead; the thread key still has to be set on the DelegatingSessionFactory beforehand, and in the same thread that is going to call that execute().
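A minimal sketch of that pattern, reusing the DelegatingSessionFactory bean and the DelegateSessionConfig enum from the question (the scanner class name and the remote directory are made up for illustration):
import java.util.List;
import com.jcraft.jsch.ChannelSftp;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.file.remote.session.DelegatingSessionFactory;
import org.springframework.integration.sftp.session.SftpRemoteFileTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;

@Service
public class Session1Scanner {

    @Autowired
    private DelegatingSessionFactory<ChannelSftp.LsEntry> delegatingSessionFactory;

    @Autowired
    private SftpRemoteFileTemplate template;

    @Scheduled(cron = "${sftp.scan.x.schedule}")
    public void scan() {
        // The thread key is a ThreadLocal: it only selects the delegate for lookups made
        // from this thread, so set it right before execute() in the same thread.
        // DelegateSessionConfig is the enum from the question's DelegateSftpSessionHandler.
        delegatingSessionFactory.setThreadKey(DelegateSessionConfig.SESSION_1.threadKey);
        try {
            List<ChannelSftp.LsEntry> files = template.execute(session ->
                    List.of(session.list("/remote/dir"))); // hypothetical remote directory
            // ... process files ...
        } finally {
            // Clear the key so a pooled thread can later be used for another session.
            delegatingSessionFactory.clearThreadKey();
        }
    }
}
Because the key is thread-bound, two scanners scheduled on different threads can point the same template at different delegate factories without interfering with each other.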

How to handle Access Denied properly in Vaadin 14 LTS

I started implementing authentication and authorization for our applications written in Spring Boot (2.2.6.RELEASE) and Vaadin 14 LTS (14.6.1).
I have followed those resources:
Securing your app with Spring Security
Router Exception Handling
I have the code for checking whether the logged-in user has access rights to specified resources implemented in a beforeEnter method. The problem is the invocation of event.rerouteToError(AccessDeniedException.class). It tries to create an instance of the specified exception via reflection but fails because the class does not have a public no-arg constructor.
private void beforeEnter(final BeforeEnterEvent event) {
    if (!AuthView.class.equals(event.getNavigationTarget()) && !AuthUtils.isUserLoggedIn()) {
        event.rerouteTo(AuthView.class);
    }
    if (!AuthUtils.isAccessGranted(event.getNavigationTarget())) {
        event.rerouteToError(AccessDeniedException.class);
    }
}
java.lang.IllegalArgumentException: Unable to create an instance of 'org.springframework.security.access.AccessDeniedException'. Make sure the class has a public no-arg constructor.
at com.vaadin.flow.internal.ReflectTools.createProxyInstance(ReflectTools.java:519)
at com.vaadin.flow.internal.ReflectTools.createInstance(ReflectTools.java:451)
at com.vaadin.flow.router.BeforeEvent.rerouteToError(BeforeEvent.java:720)
at com.vaadin.flow.router.BeforeEvent.rerouteToError(BeforeEvent.java:704)
What can be the best solution for that case? I am thinking about two possible solutions:
First instantiate AccessDeniedException and then pass it to the overloaded method in BeforeEvent, public void rerouteToError(Exception exception, String customMessage), which skips creating the exception object via reflection
Create a dedicated ErrorView and use the method public void rerouteTo(Class<? extends Component> routeTargetType, RouteParameters parameters) of BeforeEvent
I decided to follow Leif Åstrand's answer. I created a custom AccessDeniedException and an appropriate error handler. Here is my implementation; maybe it will be helpful for someone.
public class AccessDeniedException extends RuntimeException {

    private final int code;

    public AccessDeniedException() {
        super("common.error.403.details");
        this.code = HttpServletResponse.SC_FORBIDDEN;
    }

    public int getCode() {
        return code;
    }
}

@Tag(Tag.DIV)
@CssImport(value = "./styles/access-denied-view.css")
@CssImport(value = "./styles/access-denied-box.css", themeFor = "vaadin-details")
public class AccessDeniedExceptionHandler extends VerticalLayout implements HasErrorParameter<AccessDeniedException> {

    private final Details details;

    public AccessDeniedExceptionHandler() {
        setWidthFull();
        setHeight("100vh");
        setPadding(false);
        setDefaultHorizontalComponentAlignment(Alignment.CENTER);
        setJustifyContentMode(JustifyContentMode.CENTER);
        setClassName(ComponentConstants.ACCESS_DENIED_VIEW);
        this.details = new Details();
        this.details.setClassName(ComponentConstants.ACCESS_DENIED_BOX);
        this.details.addThemeVariants(DetailsVariant.REVERSE, DetailsVariant.FILLED);
        this.details.setOpened(true);
        add(this.details);
    }

    @Override
    public final int setErrorParameter(final BeforeEnterEvent event, final ErrorParameter<AccessDeniedException> parameter) {
        final int code = parameter.getException().getCode();
        this.details.setSummaryText(getTranslation("common.error.403.header", code));
        this.details.setContent(new Text(getTranslation(parameter.getException().getMessage())));
        return code;
    }
}
I would recommend creating a custom exception type instead of reusing AccessDeniedException from Spring. In that way, you don't have to deal with the required error message at all.
As you mentioned in your first solution, you could do:
event.rerouteToError(new AccessDeniedException("Navigation target not permitted"), "");
or also specify the customMessage if you want. If you look at the implementation of the rerouteToError(Class) method, it just creates the exception and passes an empty customMessage, which you could equally do manually; that is completely acceptable. I recommend this solution.
Another solution could be to subclass AccessDeniedException and use that with reflection:
public class RouteAccessDeniedException extends AccessDeniedException {

    public RouteAccessDeniedException() {
        super("Navigation target not permitted");
    }
}
I don't recommend this solution.

Logging with XQuery

I'm using XQuery 3.0 to transform an incoming message to fit my system.
The XQuery is called from an Apache Camel Route via the transform EIP.
Example:
transform().xquery("resource:classpath:xquery/myxquery.xquery",String.class)
While the transformation works without problems, it would be nice, since it is partly very complex, to be able to log some information directly during the transformation process.
So I wanted to ask: is it possible to log "into" Logback directly from XQuery?
I already searched Stack Overflow and of course https://www.w3.org/TR/xquery-30-use-cases/ and other sources, but I just couldn't find any information about how to log in XQuery.
My project structure is:
Spring-Boot 2 application
Apache-Camel as Routing framework
Logback as Logging framework
Update: For the integration of XQuery in the Apache-Camel Framework I use the org.apache.camel:camel-saxon-starter:2.22.2.
Update: Because the use of fn:trace was kind of ugly, I searched further and now use Saxon's extension mechanism to provide different logging functions that can be accessed from XQuery:
For more information see the documentation: http://www.saxonica.com/documentation/#!extensibility/integratedfunctions/ext-full-J
Here is what I did for logging (tested with Saxon-HE; Camel is not mandatory, I just happen to use it):
First step:
Extend the class net.sf.saxon.lib.ExtensionFunctionDefinition
public class XQueryInfoLogFunctionDefinition extends ExtensionFunctionDefinition {

    private static final Logger log = LoggerFactory.getLogger(XQueryInfoLogFunctionDefinition.class);

    private final XQueryInfoExtensionFunctionCall functionCall = new XQueryInfoExtensionFunctionCall();

    private static final String PREFIX = "log";

    @Override
    public StructuredQName getFunctionQName() {
        return new StructuredQName(PREFIX, "http://thehandofnod.com/saxon-extension", "info");
    }

    @Override
    public SequenceType[] getArgumentTypes() {
        return new SequenceType[] { SequenceType.SINGLE_STRING };
    }

    @Override
    public SequenceType getResultType(SequenceType[] suppliedArgumentTypes) {
        return SequenceType.VOID;
    }

    @Override
    public ExtensionFunctionCall makeCallExpression() {
        return functionCall;
    }
}
Second step:
Implement the FunctionCall class
public class XQueryInfoExtensionFunctionCall extends ExtensionFunctionCall {

    private static final Logger log = LoggerFactory.getLogger(XQueryInfoLogFunctionDefinition.class);

    @Override
    public Sequence call(XPathContext context, Sequence[] arguments) throws XPathException {
        if (arguments != null && arguments.length > 0) {
            log.info(((StringValue) arguments[0]).getStringValue());
        } else {
            throw new IllegalArgumentException("We need a message");
        }
        return EmptySequence.getInstance();
    }
}
Third step:
Configure the SaxonConfiguration and bind it into the camel context:
public static void main(String... args) throws Exception {
    Main main = new Main();
    Configuration saxonConfig = Configuration.newConfiguration();
    saxonConfig.registerExtensionFunction(new XQueryInfoLogFunctionDefinition());
    main.bind("saxonConfig", saxonConfig);
    main.addRouteBuilder(new MyRouteBuilder());
    main.run(args);
}
Fourth step:
Define the SaxonConfig in your XQueryEndpoint:
.to("xquery:test.xquery?configuration=#saxonConfig");
Fifth step:
Call it in your xquery:
declare namespace log="http://thehandofnod.com/saxon-extension";
log:info("Das ist ein INFO test")
Original post, a.k.a. how to override the fn:trace function:
Thanks to Martin Honnen, I tried the fn:trace function. The problem was that by default it logs to the System.err PrintStream, and that's not what I wanted, because I wanted to combine the fn:trace function with the Logback logging framework.
So I debugged the net.sf.saxon.functions.Trace methods and came to the following solution for my project setup.
Write a custom TraceListener that extends net.sf.saxon.trace.XQueryTraceListener and implement the methods enter and leave so that an InstructionInfo with constructType == 2041 (user trace) is forwarded to the SLF4J API. Example (logging only the message):
@Override
public void enter(InstructionInfo info, XPathContext context) {
    // no call to super to keep it simple
    String nachricht = (String) info.getProperty("label");
    if (info.getConstructType() == 2041 && StringUtils.hasText(nachricht)) {
        getLogger().info(nachricht);
    }
}

@Override
public void leave(InstructionInfo info) {
    // no call to super to keep it simple
}
Set the custom trace listener on your net.sf.saxon.Configuration bean via setTraceListener.
Call your XQuery file from Camel via the XQueryEndpoint, because only there is it possible to override the Configuration with an option: .to("xquery:/xquery/myxquery.xquery?configuration=#saxonConf"). Unfortunately, transform().xquery(...) uses its own objects without any possibility to configure them.
Call {fn:trace($element/text(),"Das ist ein Tracing Test")} in your XQuery and see the message in your log.
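As a rough sketch, wiring that listener in mirrors the extension-function setup above; MyXQueryTraceListener stands for the XQueryTraceListener subclass with the enter/leave overrides shown earlier (its name is made up here):
public static void main(String... args) throws Exception {
    Main main = new Main();
    Configuration saxonConf = Configuration.newConfiguration();
    // MyXQueryTraceListener is the custom XQueryTraceListener subclass sketched above;
    // the class name is made up for illustration.
    saxonConf.setTraceListener(new MyXQueryTraceListener());
    main.bind("saxonConf", saxonConf); // referenced via ?configuration=#saxonConf on the endpoint
    main.addRouteBuilder(new MyRouteBuilder());
    main.run(args);
}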

Add camel route at runtime using end points configured in property file

I own a Spring application and want to add Camel routes dynamically during application startup. Endpoints are configured in a property file and are loaded at run time.
Using the Java DSL, I am using a for loop to create all routes:
for (int i = 0; i < allEndPoints; i++) {
    DynamcRouteBuilder route = new DynamcRouteBuilder(context, fromUri, toUri);
    camelContext.addRoutes(route);
}

private class DynamcRouteBuilder extends RouteBuilder {

    private final String from;
    private final String to;

    private DynamcRouteBuilder(CamelContext context, String from, String to) {
        super(context);
        this.from = from;
        this.to = to;
    }

    @Override
    public void configure() throws Exception {
        from(from).to(to);
    }
}
but I am getting the exception below while creating the very first route:
Failed to create route file_routedirect: at: >>> OnException[[class org.apache.camel.component.file.GenericFileOperationFailedException] -> [Log[Exception trapped ${exception.class}], process[Processor#0x0]]] <<< in route: Route(file_routedirect:)[[From[direct:... because of ref must be specified on: process[Processor#0x0]\n\ta
Not sure about it. What is the issue? Does someone have a suggestion or a fix for this? Thanks.
Well, to create routes in an iteration it is nice to have some object that holds the different values for one route. Let's call this RouteConfiguration, a simple POJO with String fields for from, to and routeId.
We are using YAML files to configure such things because you have a real List format instead of using "flat lists" in property files (route[0].from, route[0].to).
If you use Spring, you can directly transform such a "list of object configurations" into a Collection of objects using @ConfigurationProperties.
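A minimal sketch of that binding (the prefix app and the YAML keys are made up for illustration; the getRoutes() accessor is what the simplified example below iterates over):
// Hypothetical binding of a YAML list to RouteConfiguration objects.
// application.yml might look like this (prefix and keys are made up):
//
//   app:
//     routes:
//       - route-id: file-route
//         from: "file:/data/in"
//         to: "direct:process"
//       - route-id: second-route
//         from: "direct:start"
//         to: "log:demo"
import java.util.ArrayList;
import java.util.List;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties(prefix = "app")
public class RouteConfigurationProperties {

    private final List<RouteConfiguration> routes = new ArrayList<>();

    public List<RouteConfiguration> getRoutes() {
        return routes;
    }

    // The simple POJO described above: one entry per route.
    public static class RouteConfiguration {

        private String routeId;
        private String from;
        private String to;

        public String getRouteId() { return routeId; }
        public void setRouteId(String routeId) { this.routeId = routeId; }
        public String getFrom() { return from; }
        public void setFrom(String from) { this.from = from; }
        public String getTo() { return to; }
        public void setTo(String to) { this.to = to; }
    }
}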
When you are able to create such a Collection of value objects, you can simply iterate over it. Here is a strongly simplified example.
@Override
public void configure() {
    createConfiguredRoutes();
}

void createConfiguredRoutes() {
    configuration.getRoutes().forEach(this::addRouteToContext);
}

// Adds one route per configuration entry. The checked exception from addRoutes is
// wrapped so the method can be used as a method reference in forEach above.
private void addRouteToContext(final RouteConfiguration routeConfiguration) {
    try {
        this.camelContext.addRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from(routeConfiguration.getFrom())
                        .routeId(routeConfiguration.getRouteId())
                        ...
                        .to(routeConfiguration.getTo());
            }
        });
    } catch (Exception e) {
        throw new IllegalStateException("Could not add route " + routeConfiguration.getRouteId(), e);
    }
}

Reading an OSGi config value

I've got some code like this to read a value that could be set either with a sling:OsgiConfig node or after being set in the Felix UI...
@Component(immediate = true, metatype = true, label = "Dummy Service")
public class DummyService {

    @Property(label = "Dummy Service Value")
    public static final String DUMMY_VALUE = "dummyValue";

    private static String m_strDummyValue = "default value";

    public static String getDummyValue() {
        return m_strDummyValue;
    }

    @Activate
    protected void activate(ComponentContext context) {
        configure(context.getProperties());
    }

    @Deactivate
    protected void deactivate(ComponentContext context) {
    }

    @Modified
    protected void modified(ComponentContext componentContext) {
        configure(componentContext.getProperties());
    }

    public void updated(Dictionary properties) throws ConfigurationException {
        configure(properties);
    }

    private void configure(Dictionary properties) {
        m_strDummyValue = OsgiUtil.toString(properties.get(DUMMY_VALUE), null);
    }
}
It can then be called in any consuming class as
DummyService.getDummyValue();
This is currently working in our development environment. It's also very similar to some code that another vendor wrote and is currently in production in the client environment, and seems to be working. However, I ran across this post OSGi component configurable via Apache Felix... which recommends against using a static accessor like this. Are there potential problems where getDummyValue() could return an incorrect value, or is the recommendation more about being philosophically consistent with OSGi's patterns?
Generally, statics are frowned upon, especially in OSGi, as they involve tight code coupling. It would be better to have DummyService be an interface and have your class implement it, with the component registered as a service. Then others would reference your component's service; once injected with the service, they can call its methods.
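A minimal sketch of that shape, staying with the Felix SCR annotation style of the question (the implementation and consumer class names are made up, and each type would live in its own source file):
import java.util.Dictionary;
import org.apache.felix.scr.annotations.Activate;
import org.apache.felix.scr.annotations.Component;
import org.apache.felix.scr.annotations.Modified;
import org.apache.felix.scr.annotations.Property;
import org.apache.felix.scr.annotations.Reference;
import org.apache.felix.scr.annotations.Service;
import org.apache.sling.commons.osgi.OsgiUtil;
import org.osgi.service.component.ComponentContext;

// The public API is just an interface; consumers never touch statics.
public interface DummyService {
    String getDummyValue();
}

@Component(immediate = true, metatype = true, label = "Dummy Service")
@Service(DummyService.class)
public class DummyServiceImpl implements DummyService {

    @Property(label = "Dummy Service Value")
    public static final String DUMMY_VALUE = "dummyValue";

    private volatile String dummyValue = "default value";

    @Activate
    protected void activate(ComponentContext context) {
        configure(context.getProperties());
    }

    @Modified
    protected void modified(ComponentContext context) {
        configure(context.getProperties());
    }

    private void configure(Dictionary properties) {
        dummyValue = OsgiUtil.toString(properties.get(DUMMY_VALUE), "default value");
    }

    @Override
    public String getDummyValue() {
        return dummyValue;
    }
}

// A hypothetical consumer: the container injects the configured service instance.
@Component
public class DummyValueConsumer {

    @Reference
    private DummyService dummyService;

    public void doSomething() {
        String value = dummyService.getDummyValue();
        // ...
    }
}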
You shouldn't do this for one major reason: there is no guarantee that DummyService has been configured when you access the static method - in contrast with a service reference.

Resources