Spring Feign Not Compressing Response

I am using Spring Feign and want to compress the request and response.
On the server side:
server:
  servlet:
    context-path: /api/v1/
  compression:
    enabled: true
    min-response-size: 1024
When I hit the API from Chrome, I see that the browser adds Accept-Encoding: "gzip, deflate, br".
On the client side:
server:
  port: 8192
  servlet:
    context-path: /api/demo

feign.compression.response.enabled: true
feign.client.config.default.loggerLevel: HEADERS
logging.level.com.example.feigndemo.ManagementApiService: DEBUG

eureka:
  client:
    enabled: false

management-api:
  ribbon:
    listOfServers: localhost:8080
When I look at the request headers, Feign is passing two separate headers:
Accept-Encoding: deflate
Accept-Encoding: gzip
Gradle file:
plugins {
    id 'org.springframework.boot' version '2.1.8.RELEASE'
    id 'io.spring.dependency-management' version '1.0.8.RELEASE'
    id 'java'
}

group = 'com.example'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '1.8'

configurations {
    compileOnly {
        extendsFrom annotationProcessor
    }
}

repositories {
    mavenCentral()
}

ext {
    set('springCloudVersion', "Greenwich.SR2")
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    compile('org.springframework.cloud:spring-cloud-starter-netflix-ribbon')
    compile('org.springframework.cloud:spring-cloud-starter-openfeign')
    // https://mvnrepository.com/artifact/io.github.openfeign/feign-httpclient
    //compile group: 'io.github.openfeign', name: 'feign-httpclient', version: '9.5.0'
    compileOnly 'org.projectlombok:lombok'
    annotationProcessor 'org.projectlombok:lombok'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}

dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
    }
}
The response is not compressed. What I have seen is that Spring Feign is sending Accept-Encoding as two separate header values.
Let me know if anything is wrong here.

I faced the same issue a couple of weeks back and came to know that there is no straightforward way of doing it. I also got to know that when @patan reported the issue with the Spring community (@patan reported issue1 and @patan reported issue2), a ticket was created on the Tomcat side to attempt to fix the issue (issue link). There is also a ticket (ticket link) on the Jetty side related to the same. Initially, I planned to use the approach suggested on GitHub, but later came to know that the library had already been merged into the spring-cloud-openfeign-core jar under the org.springframework.cloud.openfeign.encoding package. Nevertheless, we could not achieve compression as expected and faced the following two challenges:
When we enable Feign compression via the settings, the org.springframework.cloud.openfeign.encoding.FeignAcceptGzipEncodingInterceptor (code link) class adds the Accept-Encoding header with the values gzip and deflate, but due to the issue (ticket) the Tomcat server cannot interpret it as a compression signal. As a solution, we have to add a manual Feign interceptor that overrides the FeignAcceptGzipEncodingInterceptor functionality and concatenates the headers.
The default compression settings for Feign work perfectly in the simplest scenarios, but when a client calls a microservice and that microservice calls another microservice through Feign, Feign cannot handle the compressed response, because the Spring Cloud OpenFeign decoder does not decompress the response by default (default Spring OpenFeign decoder), which eventually ends with the issue (issue link). So we have to write our own decoder to achieve decompression.
I finally found a solution based on various available resources, so just follow these steps for Spring Feign compression:
application.yml
spring:
  http:
    encoding:
      enabled: true

# to enable server-side compression
server:
  compression:
    enabled: true
    mime-types:
      - application/json
    min-response-size: 2048

# to enable Feign request/response compression
feign:
  httpclient:
    enabled: true
  compression:
    request:
      enabled: true
      mime-types:
        - application/json
      min-request-size: 2048
    response:
      enabled: true
NOTE: The above Feign configuration by default enables compression for all Feign clients.
CustomFeignDecoder
import feign.Response;
import feign.Util;
import feign.codec.Decoder;
import org.springframework.cloud.openfeign.encoding.HttpEncoding;

import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.lang.reflect.Type;
import java.nio.charset.StandardCharsets;
import java.util.Collection;
import java.util.Objects;
import java.util.zip.GZIPInputStream;

public class CustomGZIPResponseDecoder implements Decoder {

    final Decoder delegate;

    public CustomGZIPResponseDecoder(Decoder delegate) {
        Objects.requireNonNull(delegate, "Decoder must not be null.");
        this.delegate = delegate;
    }

    @Override
    public Object decode(Response response, Type type) throws IOException {
        Collection<String> values = response.headers().get(HttpEncoding.CONTENT_ENCODING_HEADER);
        if (Objects.nonNull(values) && !values.isEmpty() && values.contains(HttpEncoding.GZIP_ENCODING)) {
            byte[] compressed = Util.toByteArray(response.body().asInputStream());
            if ((compressed == null) || (compressed.length == 0)) {
                return delegate.decode(response, type);
            }
            // decompression part:
            // after decompressing, we delegate the decompressed response to the default decoder
            if (isCompressed(compressed)) {
                final StringBuilder output = new StringBuilder();
                final GZIPInputStream gis = new GZIPInputStream(new ByteArrayInputStream(compressed));
                final BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(gis, StandardCharsets.UTF_8));
                String line;
                while ((line = bufferedReader.readLine()) != null) {
                    output.append(line);
                }
                Response uncompressedResponse = response.toBuilder().body(output.toString().getBytes()).build();
                return delegate.decode(uncompressedResponse, type);
            } else {
                return delegate.decode(response, type);
            }
        } else {
            return delegate.decode(response, type);
        }
    }

    private static boolean isCompressed(final byte[] compressed) {
        return (compressed[0] == (byte) (GZIPInputStream.GZIP_MAGIC)) && (compressed[1] == (byte) (GZIPInputStream.GZIP_MAGIC >> 8));
    }
}
CustomFeignConfiguration
import feign.RequestInterceptor;
import feign.RequestTemplate;
import feign.optionals.OptionalDecoder;
import org.springframework.beans.factory.ObjectFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.http.HttpMessageConverters;
import org.springframework.cloud.openfeign.support.ResponseEntityDecoder;
import org.springframework.cloud.openfeign.support.SpringDecoder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CustomFeignConfiguration {

    @Autowired
    private ObjectFactory<HttpMessageConverters> messageConverters;

    // concatenating headers because of https://github.com/spring-projects/spring-boot/issues/18176
    @Bean
    public RequestInterceptor gzipInterceptor() {
        return new RequestInterceptor() {
            @Override
            public void apply(RequestTemplate template) {
                template.header("Accept-Encoding", "gzip, deflate");
            }
        };
    }

    @Bean
    public CustomGZIPResponseDecoder customGZIPResponseDecoder() {
        OptionalDecoder feignDecoder = new OptionalDecoder(new ResponseEntityDecoder(new SpringDecoder(this.messageConverters)));
        return new CustomGZIPResponseDecoder(feignDecoder);
    }
}
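Spring Cloud OpenFeign picks up the Decoder and RequestInterceptor beans from this configuration. If you would rather scope them to a single client instead of applying them globally, a rough sketch (the client name and interface below are assumptions, not from the original post) could look like this:
// Hypothetical client interface; adjust the name and methods to your service
@FeignClient(name = "management-api", configuration = CustomFeignConfiguration.class)
public interface ManagementApiService {
    // ... client methods ...
}
Note that a configuration class referenced this way does not need the @Configuration annotation; if it keeps it and sits in the component scan, it becomes the default configuration for all Feign clients.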
Additional tips
If you are planning to build the custom decoder with just the feign-core libraries:
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.JavaType;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.type.TypeFactory;
import feign.Response;
import feign.Util;
import feign.codec.DecodeException;
import feign.codec.Decoder;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.http.client.ClientHttpResponse;
import org.springframework.http.converter.json.MappingJackson2HttpMessageConverter;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
import org.springframework.util.StringUtils;
import org.springframework.web.client.HttpMessageConverterExtractor;
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.lang.reflect.WildcardType;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.LinkedList;
import java.util.Map;
import java.util.Objects;
import java.util.zip.GZIPInputStream;
import static java.util.zip.GZIPInputStream.GZIP_MAGIC;
public class CustomGZIPResponseDecoder implements Decoder {
private final Decoder delegate;
public CustomGZIPResponseDecoder(Decoder delegate) {
Objects.requireNonNull(delegate, "Decoder must not be null. ");
this.delegate = delegate;
}
@Override
public Object decode(Response response, Type type) throws IOException {
Collection<String> values = response.headers().get("Content-Encoding");
if (Objects.nonNull(values) && !values.isEmpty() && values.contains("gzip")) {
byte[] compressed = Util.toByteArray(response.body().asInputStream());
if ((compressed == null) || (compressed.length == 0)) {
return delegate.decode(response, type);
}
if (isCompressed(compressed)) {
Response uncompressedResponse = getDecompressedResponse(response, compressed);
return getObject(type, uncompressedResponse);
} else {
return getObject(type, response);
}
} else {
return getObject(type, response);
}
}
private Object getObject(Type type, Response response) throws IOException {
ObjectMapper mapper = new ObjectMapper();
mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
if (response.status() == 404 || response.status() == 204)
return Util.emptyValueOf(type);
if (Objects.isNull(response.body()))
return null;
if (byte[].class.equals(type))
return Util.toByteArray(response.body().asInputStream());
if (isParameterizeHttpEntity(type)) {
type = ((ParameterizedType) type).getActualTypeArguments()[0];
if (type instanceof Class || type instanceof ParameterizedType
|| type instanceof WildcardType) {
#SuppressWarnings({"unchecked", "rawtypes"})
HttpMessageConverterExtractor<?> extractor = new HttpMessageConverterExtractor(
type, Collections.singletonList(new MappingJackson2HttpMessageConverter(mapper)));
Object decodedObject = extractor.extractData(new FeignResponseAdapter(response));
return createResponse(decodedObject, response);
}
throw new DecodeException(HttpStatus.INTERNAL_SERVER_ERROR.value(),
"type is not an instance of Class or ParameterizedType: " + type);
} else if (isHttpEntity(type)) {
return delegate.decode(response, type);
} else if (String.class.equals(type)) {
String responseValue = Util.toString(response.body().asReader());
return StringUtils.isEmpty(responseValue) ? Util.emptyValueOf(type) : responseValue;
} else {
String s = Util.toString(response.body().asReader());
JavaType javaType = TypeFactory.defaultInstance().constructType(type);
return !StringUtils.isEmpty(s) ? mapper.readValue(s, javaType) : Util.emptyValueOf(type);
}
}
public static boolean isCompressed(final byte[] compressed) {
return (compressed[0] == (byte) (GZIP_MAGIC)) && (compressed[1] == (byte) (GZIP_MAGIC >> 8));
}
public static Response getDecompressedResponse(Response response, byte[] compressed) throws IOException {
final StringBuilder output = new StringBuilder();
final GZIPInputStream gis = new GZIPInputStream(new ByteArrayInputStream(compressed));
final BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(gis, StandardCharsets.UTF_8));
String line;
while ((line = bufferedReader.readLine()) != null) {
output.append(line);
}
return response.toBuilder().body(output.toString().getBytes()).build();
}
public static String getDecompressedResponseAsString(byte[] compressed) throws IOException {
final StringBuilder output = new StringBuilder();
final GZIPInputStream gis = new GZIPInputStream(new ByteArrayInputStream(compressed));
final BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(gis, StandardCharsets.UTF_8));
String line;
while ((line = bufferedReader.readLine()) != null) {
output.append(line);
}
return output.toString();
}
private boolean isParameterizeHttpEntity(Type type) {
if (type instanceof ParameterizedType) {
return isHttpEntity(((ParameterizedType) type).getRawType());
}
return false;
}
private boolean isHttpEntity(Type type) {
if (type instanceof Class) {
Class c = (Class) type;
return HttpEntity.class.isAssignableFrom(c);
}
return false;
}
private <T> ResponseEntity<T> createResponse(Object instance, Response response) {
MultiValueMap<String, String> headers = new LinkedMultiValueMap<>();
for (String key : response.headers().keySet()) {
headers.put(key, new LinkedList<>(response.headers().get(key)));
}
return new ResponseEntity<>((T) instance, headers, HttpStatus.valueOf(response
.status()));
}
private class FeignResponseAdapter implements ClientHttpResponse {
private final Response response;
private FeignResponseAdapter(Response response) {
this.response = response;
}
@Override
public HttpStatus getStatusCode() throws IOException {
return HttpStatus.valueOf(this.response.status());
}
@Override
public int getRawStatusCode() throws IOException {
return this.response.status();
}
@Override
public String getStatusText() throws IOException {
return this.response.reason();
}
@Override
public void close() {
try {
this.response.body().close();
} catch (IOException ex) {
// Ignore exception on close...
}
}
@Override
public InputStream getBody() throws IOException {
return this.response.body().asInputStream();
}
@Override
public HttpHeaders getHeaders() {
return getHttpHeaders(this.response.headers());
}
private HttpHeaders getHttpHeaders(Map<String, Collection<String>> headers) {
HttpHeaders httpHeaders = new HttpHeaders();
for (Map.Entry<String, Collection<String>> entry : headers.entrySet()) {
httpHeaders.put(entry.getKey(), new ArrayList<>(entry.getValue()));
}
return httpHeaders;
}
}
}
And if you are planning to build your own Feign builder, you can configure it like below:
Feign.builder().decoder(new CustomGZIPResponseDecoder(new feign.optionals.OptionalDecoder(new feign.codec.StringDecoder())))
.target(SomeFeignClient.class, "someurl");
Update to the above answer:
If you are planning to update the dependency version of spring-cloud-openfeign-core to org.springframework.cloud:spring-cloud-openfeign-core:2.2.5.RELEASE, then be aware of the following change in the FeignContentGzipEncodingAutoConfiguration class.
In FeignContentGzipEncodingAutoConfiguration, the signature of the ConditionalOnProperty annotation changed from
@ConditionalOnProperty("feign.compression.request.enabled", matchIfMissing = false) to @ConditionalOnProperty(value = "feign.compression.request.enabled"), so by default the FeignContentGzipEncodingInterceptor bean will be injected into the Spring container if you have the application property feign.request.compression=true in your environment, and it will compress the request body when the default/configured size limit is exceeded. This causes a problem if your server does not have a mechanism to handle compressed requests; in such cases, add/modify the property to feign.request.compression=false.

This is actually an issue in Tomcat and Jetty: sending multiple Accept-Encoding headers as shown above is legal and should work, but Tomcat and Jetty have a bug that prevents both values from being read.
The bug has been reported in the Spring Boot GitHub here.
And in Tomcat here, for reference.
In Tomcat the issue is fixed in 9.0.25, so if you can update to that version (or later), that alone should solve it.
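If you use Spring Boot's dependency management (as in the Gradle file above), one hedged way to try that upgrade is to override the Tomcat version that the Boot BOM manages; 9.0.26 below is only an example of a version at or above the fix:
// build.gradle: override the embedded Tomcat version managed by Spring Boot
ext['tomcat.version'] = '9.0.26'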
Failing that, here is a workaround: create your own request interceptor to reconcile the gzip and deflate headers into a single header.
This interceptor needs to be added to a Feign configuration class, and that configuration added to your Feign client.
import feign.RequestInterceptor;
import feign.RequestTemplate;
import feign.template.HeaderTemplate;
import java.lang.reflect.Field;
import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import lombok.extern.slf4j.Slf4j;
/**
* This is a workaround interceptor based on a known bug in Tomcat and Jetty where
* the requests are unable to perform gzip compression if the headers are in collection format.
* This is fixed in tomcat 9.0.25 - once we reach this version we can remove this class
*/
@Slf4j
public class GzipRequestInterceptor implements RequestInterceptor {
@Override
public void apply(RequestTemplate template) {
// don't add encoding to all requests - only to the ones with the incorrect header format
if (requestHasDualEncodingHeaders(template)) {
replaceTemplateHeader(template, "Accept-Encoding", Collections.singletonList("gzip,deflate"));
}
}
private boolean requestHasDualEncodingHeaders(RequestTemplate template) {
return template.headers().get("Accept-Encoding").contains("deflate")
&& template.headers().get("Accept-Encoding").contains("gzip");
}
/** Because request template is immutable, we have to do some workarounds to get to the headers */
private void replaceTemplateHeader(RequestTemplate template, String key, Collection<String> value) {
try {
Field headerField = RequestTemplate.class.getDeclaredField("headers");
headerField.setAccessible(true);
((Map)headerField.get(template)).remove(key);
HeaderTemplate newEncodingHeaderTemplate = HeaderTemplate.create(key, value);
((Map)headerField.get(template)).put(key, newEncodingHeaderTemplate);
} catch (NoSuchFieldException e) {
LOGGER.error("exception when trying to access the field [headers] via reflection");
} catch (IllegalAccessException e) {
LOGGER.error("exception when trying to get properties from the template headers");
}
}
}
I know the above looks overkill, but because the template headers are unmodifiable, we just use a bit of reflection to modify them to how we want.
Add the above interceptor to your configuration bean
import feign.RequestInterceptor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class FeignGzipEncodingConfiguration {

    @Bean
    public RequestInterceptor gzipRequestInterceptor() {
        return new GzipRequestInterceptor();
    }
}
You can finally add this to your feign client with the configuration annotation parameter
#FeignClient(name = "feign-client", configuration = FeignGzipEncodingConfiguration.class)
public interface FeignClient {
...
}
The request interceptor should now be invoked when you send a Feign client request for gzipped information. It will wipe the dual headers and write a single concatenated one in the form gzip,deflate.

If you are using the latest Spring Boot version, it provides a default gzip decoder, so there is no need to write your own custom decoder. Use the properties below instead:
feign:
  compression:
    response:
      enabled: true
      useGzipDecoder: true

Related

Spring Zuul Header based routing hangs when downstream is not responsive for some time

I have implemented the below code to route requests based on headers to the respective downstreams.
When the downstreams are unresponsive for some time, Zuul stops forwarding requests, and it does not forward requests even when the downstream is up again.
There are no errors in the logs.
package com.uk.proxy.filter;
import ai.cuddle.sift.dataapi.proxy.Application;
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.cloud.netflix.zuul.filters.support.FilterConstants;
import org.springframework.stereotype.Component;
import javax.servlet.http.HttpServletRequest;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.Map;
import static org.springframework.cloud.netflix.zuul.filters.support.FilterConstants.ROUTE_TYPE;
@Component
public class RoutingFilter extends ZuulFilter {
private static final Logger LOGGER = LoggerFactory.getLogger(Application.class);
private static final String DEFAULT_DOWNSTREAM_GROUP = "SYSTEM";
public static final String HEADER_ORIGIN = "";
@Autowired
@Qualifier(value = "downstreamConfig")
private Map<String, String> downstreamMap;
@Override
public String filterType() {
return ROUTE_TYPE;
}
@Override
public int filterOrder() {
return FilterConstants.PRE_DECORATION_FILTER_ORDER - 100;
}
@Override
public boolean shouldFilter() {
return true;
}
@Override
public Object run() {
RequestContext ctx = RequestContext.getCurrentContext();
HttpServletRequest request = ctx.getRequest();
String inputURI = request.getRequestURI();
String header = request.getHeader(HEADER_ORIGIN);
try {
String downstreamHost = getDownstreamHost(header);
LOGGER.info(" Header "+header+ " Downstream Host "+downstreamHost);
ctx.setRouteHost(new URL(downstreamHost));
ctx.put("requestURI", inputURI);
} catch (MalformedURLException e) {
LOGGER.error(e.getMessage(), e);
}
return null;
}
private String getDownstreamHost(String header) {
if(header == null || header.isEmpty()){
LOGGER.warn("Header is null or empty");
}
String downstreamGroup = (header == null || header.isEmpty()) ? DEFAULT_DOWNSTREAM_GROUP : header.toUpperCase();
String downstreamHost;
if (downstreamMap.containsKey(downstreamGroup)) {
downstreamHost = downstreamMap.get(downstreamGroup);
} else {
LOGGER.error("Header "+header+" not found in config. DownstreamMap "+downstreamMap);
downstreamHost = downstreamMap.get(DEFAULT_DOWNSTREAM_GROUP);
}
return downstreamHost;
}
}
application.yaml
zuul:
  ignoredPatterns:
    - /manage/**
  routes:
    yourService:
      path: /**
      stripPrefix: false
      serviceId: customServiceId
  host:
    connect-timeout-millis: 300000
    socket-timeout-millis: 300000

ribbon:
  eureka:
    enabled: false
Spring cloud version: Hoxton.SR8.
Please let me know if anyone has faced this issue.

I have to call a microservice from a batch launched from another microservice using spring-batch and openfeign

I don't know if it's possible, but this is my question:
I have a batch developed using spring-boot and spring-batch, and I have to call another microservice using Feign...
...help!
This is my Reader class:
package it.batch.step;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.NonTransientResourceException;
import org.springframework.batch.item.ParseException;
import org.springframework.batch.item.UnexpectedInputException;
import org.springframework.beans.factory.annotation.Autowired;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import it.client.feign.EmailAccountClient;
import it.dto.feign.mail.account.AccountOutDto;
import it.dto.feign.mail.account.SearchAccountFilterDto;
import it.dto.feign.mail.account.SearchAccountResponseDto;
public class Reader implements ItemReader <String> {
private static final Logger LOGGER = LoggerFactory.getLogger(Reader.class);
private int count = 0;
@Autowired
private EmailAccountClient emailAccountClient;
@Override
public String read() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException {
LOGGER.info("read - begin ");
SearchAccountResponseDto clientResponse = emailAccountClient.searchAccount(getFilter());
if (count < clientResponse.getAccounts().size()) {
return convertToJsonString(clientResponse.getAccounts().get(count++));
} else {
count = 0;
}
return null;
}
private static SearchAccountFilterDto getFilter() {
SearchAccountFilterDto filter = new SearchAccountFilterDto();
return filter;
}
private String convertToJsonString(AccountOutDto account) {
ObjectMapper mapper = new ObjectMapper();
String jsonString = "";
try {
jsonString = mapper.writeValueAsString(account);
} catch (JsonProcessingException e) {
e.printStackTrace();
}
LOGGER.info("Contenuto JSON: " + jsonString);
return jsonString;
}
}
...
when I launch the batch I have this error:
java.lang.NullPointerException: null
at it.batch.step.Reader.read(Reader.java:32) ~[classes/:?]
where line 32 is:
SearchAccountResponseDto clientResponse = emailAccountClient.searchAccount(getFilter());
EmailAccountClient is null
Your client is null: your reader is not a @Component, so Spring can't autowire the client. You must use a workaround like passing the autowired client through the constructor when you instantiate the reader, like this:
private EmailAccountClient client;

public Reader(EmailAccountClient client) {
    this.client = client;
}
in the other class:
@Autowired
private EmailAccountClient client;

@Bean
public ItemReader<String> reader() {
    return new Reader(client);
}
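For completeness, here is a rough sketch of how that reader bean might sit inside a Spring Batch configuration class; the class name, step name, chunk size and writer are assumptions, not from the original post:
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    private EmailAccountClient client; // the Feign client is autowired here, where Spring manages the bean

    @Bean
    public ItemReader<String> reader() {
        return new Reader(client); // hand the client to the reader manually
    }

    @Bean
    public Step accountStep(ItemWriter<String> writer) {
        return stepBuilderFactory.get("accountStep")
                .<String, String>chunk(10)
                .reader(reader())
                .writer(writer)
                .build();
    }
}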

Is there a way to automatically propagate an incoming HTTP header in a JAX-RS request to an outgoing JAX-RS request?

I'm looking for the proper way—in a Jersey application—to read a header from an incoming request and automatically install it in any outgoing requests that might be made by a JAX-RS client that my application is using.
Ideally I'd like to do this without polluting any of my classes' inner logic at all, so via various filters and interceptors.
For simple use cases, I can do this: I have a ClientRequestFilter implementation that I register on my ClientBuilder, and that filter implementation has:
@Context
private HttpHeaders headers;
...which is a context-sensitive proxy (by definition), so in its filter method it can refer to headers that were present on the inbound request that's driving all this, and install them on the outgoing request. For straightforward cases, this appears to work OK.
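For reference, a minimal sketch of that simple approach (the filter class and header name are assumptions, not from the original post) looks something like this:
import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientRequestFilter;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.HttpHeaders;

// Sketch of the synchronous-only approach described above
public class HeaderForwardingFilter implements ClientRequestFilter {

    @Context
    private HttpHeaders headers; // context-sensitive proxy bound to the inbound request

    @Override
    public void filter(ClientRequestContext requestContext) {
        String value = headers.getHeaderString("X-Custom-Header"); // hypothetical header name
        if (value != null) {
            requestContext.getHeaders().putSingle("X-Custom-Header", value);
        }
    }
}
It would then be registered on the ClientBuilder, e.g. ClientBuilder.newBuilder().register(HeaderForwardingFilter.class).build().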
However, this fails in the case of asynchronicity: if I use the JAX-RS asynchronous client APIs to spawn a bunch of GETs, the filter is still invoked, but can no longer invoke methods on that headers instance variable; Jersey complains that as far as it knows we're no longer in request scope. This makes sense if request scope is defined to be per-thread: the spawned GETs are running in some Jersey-managed thread pool somewhere, not on the same thread as the one with which the headers proxy is associated, so that proxy throws IllegalStateExceptions all over the place when my filter tries to talk to it.
I feel like there's some combination of ContainerRequestFilter and ClientRequestFilter that should be able to get the job done even in asynchronous cases, but I'm not seeing it.
What I would do is make a WebTarget injectable that is preconfigured with a ClientRequestFilter to add the headers. It's better to configure the WebTarget this way, as opposed to the Client, since the Client is an expensive object to create.
We can make the WebTarget injectable using a custom annotation and an InjectionResolver. In the InjectionResolver, we can get the ContainerRequest and get the headers from that, which we will pass to the ClientRequestFilter.
Here it is in action
Create the custom annotation
@Target({ElementType.FIELD})
@Retention(RetentionPolicy.RUNTIME)
public @interface WithHeadersTarget {
String baseUri();
String[] headerNames() default {};
}
Make the InjectionResolver with the custom ClientRequestFilter
private static class WithHeadersTargetInjectionResolver
implements InjectionResolver<WithHeadersTarget> {
private final Provider<ContainerRequest> requestProvider;
private final Client client;
@Inject
public WithHeadersTargetInjectionResolver(Provider<ContainerRequest> requestProvider) {
this.requestProvider = requestProvider;
this.client = ClientBuilder.newClient();
}
@Override
public Object resolve(Injectee injectee, ServiceHandle<?> handle) {
if (injectee.getRequiredType() == WebTarget.class
&& injectee.getParent().isAnnotationPresent(WithHeadersTarget.class)) {
WithHeadersTarget anno = injectee.getParent().getAnnotation(WithHeadersTarget.class);
String uri = anno.baseUri();
String[] headersNames = anno.headerNames();
MultivaluedMap<String, String> requestHeaders = requestProvider.get().getRequestHeaders();
return client.target(uri)
.register(new HeadersFilter(requestHeaders, headersNames));
}
return null;
}
@Override
public boolean isConstructorParameterIndicator() {
return false;
}
@Override
public boolean isMethodParameterIndicator() {
return false;
}
private class HeadersFilter implements ClientRequestFilter {
private final MultivaluedMap<String, String> headers;
private final String[] headerNames;
private HeadersFilter(MultivaluedMap<String, String> headers, String[] headerNames) {
this.headers = headers;
this.headerNames = headerNames;
}
@Override
public void filter(ClientRequestContext requestContext) throws IOException {
// if headers names is empty, add all headers
if (this.headerNames.length == 0) {
for (Map.Entry<String, List<String>> entry: this.headers.entrySet()) {
requestContext.getHeaders().put(entry.getKey(), new ArrayList<>(entry.getValue()));
}
// else just add the headers from the annotation
} else {
for (String header: this.headerNames) {
requestContext.getHeaders().put(header, new ArrayList<>(this.headers.get(header)));
}
}
}
}
}
One thing about this implementation is that it checks for an empty headerNames in the @WithHeadersTarget annotation. If it is empty, then we just forward all headers. If the user specifies some header names, then only those will be forwarded.
Register the InjectionResolver
new ResourceConfig()
.register(new AbstractBinder() {
@Override
protected void configure() {
bind(WithHeadersTargetInjectionResolver.class)
.to(new TypeLiteral<InjectionResolver<WithHeadersTarget>>() {
}).in(Singleton.class);
}
})
Use it
#Path("test")
public static class TestResource {
#WithHeadersTarget(
baseUri = BASE_URI
headerNames = {TEST_HEADER_NAME})
private WebTarget target;
#GET
public String get() {
return target.path("client").request().get(String.class);
}
}
In this example, if headerNames is left out, it defaults to an empty array, which causes all the request headers to be forwarded.
Complete test using Jersey Test Framework
import org.glassfish.hk2.api.Injectee;
import org.glassfish.hk2.api.InjectionResolver;
import org.glassfish.hk2.api.ServiceHandle;
import org.glassfish.hk2.api.TypeLiteral;
import org.glassfish.hk2.utilities.binding.AbstractBinder;
import org.glassfish.jersey.filter.LoggingFilter;
import org.glassfish.jersey.server.ContainerRequest;
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.test.JerseyTest;
import org.junit.Test;
import javax.inject.Inject;
import javax.inject.Provider;
import javax.inject.Singleton;
import javax.ws.rs.GET;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.Path;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientRequestFilter;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import java.io.IOException;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.net.URI;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.logging.Logger;
import static org.assertj.core.api.Assertions.assertThat;
public class ForwardHeadersTest extends JerseyTest {
private static final String BASE_URI = "http://localhost:8000";
private static final String TEST_HEADER_NAME = "X-Test-Header";
@Target({ElementType.FIELD})
@Retention(RetentionPolicy.RUNTIME)
public @interface WithHeadersTarget {
String baseUri();
String[] headerNames() default {};
}
#Path("test")
public static class TestResource {
#WithHeadersTarget(
baseUri = BASE_URI
)
private WebTarget target;
@GET
public String get() {
return target.path("client").request().get(String.class);
}
}
#Path("client")
public static class ClientResource {
@GET
public String getReversedHeader(@HeaderParam(TEST_HEADER_NAME) String header) {
System.out.println(header);
return new StringBuilder(header).reverse().toString();
}
}
private static class WithHeadersTargetInjectionResolver
implements InjectionResolver<WithHeadersTarget> {
private final Provider<ContainerRequest> requestProvider;
private final Client client;
@Inject
public WithHeadersTargetInjectionResolver(Provider<ContainerRequest> requestProvider) {
this.requestProvider = requestProvider;
this.client = ClientBuilder.newClient();
}
@Override
public Object resolve(Injectee injectee, ServiceHandle<?> handle) {
if (injectee.getRequiredType() == WebTarget.class
&& injectee.getParent().isAnnotationPresent(WithHeadersTarget.class)) {
WithHeadersTarget anno = injectee.getParent().getAnnotation(WithHeadersTarget.class);
String uri = anno.baseUri();
String[] headersNames = anno.headerNames();
MultivaluedMap<String, String> requestHeaders = requestProvider.get().getRequestHeaders();
return client.target(uri)
.register(new HeadersFilter(requestHeaders, headersNames));
}
return null;
}
@Override
public boolean isConstructorParameterIndicator() {
return false;
}
@Override
public boolean isMethodParameterIndicator() {
return false;
}
private class HeadersFilter implements ClientRequestFilter {
private final MultivaluedMap<String, String> headers;
private final String[] headerNames;
private HeadersFilter(MultivaluedMap<String, String> headers, String[] headerNames) {
this.headers = headers;
this.headerNames = headerNames;
}
@Override
public void filter(ClientRequestContext requestContext) throws IOException {
// if headers names is empty, add all headers
if (this.headerNames.length == 0) {
for (Map.Entry<String, List<String>> entry: this.headers.entrySet()) {
requestContext.getHeaders().put(entry.getKey(), new ArrayList<>(entry.getValue()));
}
// else just add the headers from the annotation
} else {
for (String header: this.headerNames) {
requestContext.getHeaders().put(header, new ArrayList<>(this.headers.get(header)));
}
}
}
}
}
@Override
public ResourceConfig configure() {
return new ResourceConfig()
.register(TestResource.class)
.register(ClientResource.class)
.register(new AbstractBinder() {
@Override
protected void configure() {
bind(WithHeadersTargetInjectionResolver.class)
.to(new TypeLiteral<InjectionResolver<WithHeadersTarget>>() {
}).in(Singleton.class);
}
})
.register(new LoggingFilter(Logger.getAnonymousLogger(), true))
.register(new ExceptionMapper<Throwable>() {
@Override
public Response toResponse(Throwable t) {
t.printStackTrace();
return Response.serverError().entity(t.getMessage()).build();
}
});
}
@Override
public URI getBaseUri() {
return URI.create(BASE_URI);
}
@Test
public void testIt() {
final String response = target("test")
.request()
.header(TEST_HEADER_NAME, "HelloWorld")
.get(String.class);
assertThat(response).isEqualTo("dlroWolleH");
}
}

How to call other eureka client in a Zuul server

application.properties
zuul.routes.commonservice.path=/root/path/commonservice/**
zuul.routes.commonservice.service-id=commonservice
zuul.routes.customer.path=/root/path/customer/**
zuul.routes.customer.service-id=customer
zuul.routes.student.path=/root/path/student/**
zuul.routes.student.service-id=student
and below is my custom filter
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import com.openreach.gateway.common.constant.CommonConstant;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.Map;
import javax.servlet.http.HttpSession;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;
@Component
public class HeaderFilter extends ZuulFilter {
private static final Logger log = LoggerFactory.getLogger(HeaderFilter.class);
@Override
public String filterType() {
return "pre";
}
@Override
public int filterOrder() {
return 1;
}
@Override
public boolean shouldFilter() {
return true;
}
@Override
public Object run() {
RequestContext context = RequestContext.getCurrentContext();
HttpSession httpSession = context.getRequest().getSession();
String idOrEmail = context.getRequest().getHeader("coustom");
if (httpSession.getAttribute("someAttributes") == null) {
if (idOrEmail != null) {
//call the common-service and get details and set it first
//then call the customer service with common-service details
} else {
//call the customer service
}
} else {
log.info("data excits");
// routrs the request to the backend with the excisting data details
}
context.addZuulResponseHeader("Cookie", "JSESSIONID=" + httpSession.getId());
return null;
}
}
I'm using the Ribbon load balancer with Zuul. My problem is: how should I call the common-service first? I need all my requests to check the header value and then call the actual service endpoint.
First, use the @LoadBalanced qualifier to create a load-balanced RestTemplate bean.
@LoadBalanced
@Bean
public RestTemplate restTemplate() {
return new RestTemplate();
}
And inject the bean into the filter:
@Autowired
RestTemplate restTemplate;
Then you can get the result with one of restTemplate's methods, like below:
String result = restTemplate.postForObject("http://commonservice/url", object, String.class);
ref: http://cloud.spring.io/spring-cloud-static/spring-cloud.html#_spring_resttemplate_as_a_load_balancer_client
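Tying it back to the filter in the question, a rough sketch of calling common-service from run() before routing; the DTO type, endpoint path and forwarded header below are assumptions, not from the original post:
@Override
public Object run() {
    RequestContext context = RequestContext.getCurrentContext();
    String idOrEmail = context.getRequest().getHeader("coustom");

    if (idOrEmail != null) {
        // hypothetical endpoint and response type; adjust to your common-service API
        CommonDetails details = restTemplate.getForObject(
                "http://commonservice/details/{id}", CommonDetails.class, idOrEmail);
        // hand the common-service details to the downstream customer service as a request header
        context.addZuulRequestHeader("X-Common-Details", details.getId());
    }
    return null;
}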

How to use Apache CachingHttpAsyncClient with Spring AsyncRestTemplate?

Is it possible to use CachingHttpAsyncClient with AsyncRestTemplate? HttpComponentsAsyncClientHttpRequestFactory expects a CloseableHttpAsyncClient but CachingHttpAsyncClient does not extend it.
This is known as issue SPR-15664 for versions up to 4.3.9 and 5.0.RC2; it is fixed in 4.3.10 and 5.0.RC3. The only way around it is to create a custom AsyncClientHttpRequestFactory implementation based on the existing HttpComponentsAsyncClientHttpRequestFactory:
// package required for HttpComponentsAsyncClientHttpRequest visibility
package org.springframework.http.client;
import java.io.IOException;
import java.net.URI;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.Configurable;
import org.apache.http.client.methods.HttpUriRequest;
import org.apache.http.client.protocol.HttpClientContext;
import org.apache.http.impl.client.cache.CacheConfig;
import org.apache.http.impl.client.cache.CachingHttpAsyncClient;
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;
import org.apache.http.protocol.HttpContext;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.http.HttpMethod;
import org.springframework.util.Assert;
// TODO add support for other CachingHttpAsyncClient options, e.g. HttpCacheStorage
public class HttpComponentsCachingAsyncClientHttpRequestFactory extends HttpComponentsClientHttpRequestFactory implements AsyncClientHttpRequestFactory, InitializingBean {
private final CloseableHttpAsyncClient wrappedHttpAsyncClient;
private final CachingHttpAsyncClient cachingHttpAsyncClient;
public HttpComponentsCachingAsyncClientHttpRequestFactory() {
this(HttpAsyncClients.createDefault(), CacheConfig.DEFAULT);
}
public HttpComponentsCachingAsyncClientHttpRequestFactory(final CacheConfig config) {
this(HttpAsyncClients.createDefault(), config);
}
public HttpComponentsCachingAsyncClientHttpRequestFactory(final CloseableHttpAsyncClient client) {
this(client, CacheConfig.DEFAULT);
}
public HttpComponentsCachingAsyncClientHttpRequestFactory(final CloseableHttpAsyncClient client, final CacheConfig config) {
Assert.notNull(client, "HttpAsyncClient must not be null");
wrappedHttpAsyncClient = client;
cachingHttpAsyncClient = new CachingHttpAsyncClient(client, config);
}
@Override
public void afterPropertiesSet() {
startAsyncClient();
}
private void startAsyncClient() {
if (!wrappedHttpAsyncClient.isRunning()) {
wrappedHttpAsyncClient.start();
}
}
@Override
public ClientHttpRequest createRequest(final URI uri, final HttpMethod httpMethod) throws IOException {
throw new IllegalStateException("Synchronous execution not supported");
}
@Override
public AsyncClientHttpRequest createAsyncRequest(final URI uri, final HttpMethod httpMethod) throws IOException {
startAsyncClient();
final HttpUriRequest httpRequest = createHttpUriRequest(httpMethod, uri);
postProcessHttpRequest(httpRequest);
HttpContext context = createHttpContext(httpMethod, uri);
if (context == null) {
context = HttpClientContext.create();
}
// Request configuration not set in the context
if (context.getAttribute(HttpClientContext.REQUEST_CONFIG) == null) {
// Use request configuration given by the user, when available
RequestConfig config = null;
if (httpRequest instanceof Configurable) {
config = ((Configurable) httpRequest).getConfig();
}
if (config == null) {
config = createRequestConfig(cachingHttpAsyncClient);
}
if (config != null) {
context.setAttribute(HttpClientContext.REQUEST_CONFIG, config);
}
}
return new HttpComponentsAsyncClientHttpRequest(cachingHttpAsyncClient, httpRequest, context);
}
@Override
public void destroy() throws Exception {
try {
super.destroy();
} finally {
wrappedHttpAsyncClient.close();
}
}
}
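For completeness, a small sketch of exposing an AsyncRestTemplate backed by the factory above as a Spring bean; the cache settings are arbitrary example values, and since the factory implements InitializingBean, Spring will start the underlying async client for you:
@Bean
public AsyncRestTemplate asyncRestTemplate() {
    // arbitrary example cache settings; tune for your use case
    CacheConfig cacheConfig = CacheConfig.custom()
            .setMaxCacheEntries(1000)
            .setMaxObjectSize(256 * 1024)
            .build();

    HttpComponentsCachingAsyncClientHttpRequestFactory factory =
            new HttpComponentsCachingAsyncClientHttpRequestFactory(
                    HttpAsyncClients.createDefault(), cacheConfig);

    return new AsyncRestTemplate(factory);
}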
