Spring Boot validation for Long values

This is the class against which we map the incoming request:
@Getter
@Setter
public class FooRequest {
@Size(max = 255, message = "{error.foo.name.size}")
private String name;
@Digits(integer = 15, fraction = 0, message = "{error.foo.fooId.size}")
private Long fooId;
@Digits(integer = 15, fraction = 0, message = "{error.foo.barId.size}")
private Long barId;
}
I have used javax.validation.constraints.* as above. If we send a request like
{
"name": "Test",
"fooId": "0001234567",
"barId": "0003456789"
}
Then it works fine and we are able to save the results in the database, but if we send it like:
{
"name": "Test",
"fooId": 0001234567,
"barId": 0003456789
}
Then we get a 400 Bad Request. I don't understand what I am doing wrong; I just want to ensure that the user sends digits with a length between 1 and 15 and map them to the Long fields. Is it because of fraction, or because all these values start with 0?

The second JSON is not valid JSON because of the leading zeroes.
Background
Spring uses the Jackson library for JSON interactions.
Jackson's ObjectMapper by default throws an exception if you try to parse the second JSON:
public class Main {
public static void main(String[] args) throws IOException {
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.readValue("{\"name\": \"Test\", \"fooId\": 0001234567, \"barId\": 0003456789}", FooRequest.class);
}
}
The exception is:
Exception in thread "main" com.fasterxml.jackson.core.JsonParseException: Invalid numeric value: Leading zeroes not allowed
at [Source: (String)"{"name": "Test", "fooId": 0001234567, "barId": 0003456789}"; line: 1, column: 28]
One can allow leading zeroes via the JsonParser.Feature.ALLOW_NUMERIC_LEADING_ZEROS:
public class Main {
public static void main(String[] args) throws IOException {
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.configure(JsonParser.Feature.ALLOW_NUMERIC_LEADING_ZEROS, true);
FooRequest fooRequest = objectMapper.readValue("{\"name\": \"Test\", \"fooId\": 0001234567, \"barId\": 0003456789}", FooRequest.class);
System.out.println(fooRequest.getBarId());
}
}
Or, in Spring Boot, via application.properties:
spring.jackson.parser.allow-numeric-leading-zeros=true
Either way, the second JSON will then be parsed successfully.
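In a Spring Boot application the same parser feature can also be enabled programmatically through a customizer bean; a minimal sketch, assuming the auto-configured ObjectMapper is used (the bean method name is arbitrary):
@Bean
public Jackson2ObjectMapperBuilderCustomizer allowNumericLeadingZeros() {
    // Enables the same feature as the property above on the auto-configured ObjectMapper.
    return builder -> builder.featuresToEnable(JsonParser.Feature.ALLOW_NUMERIC_LEADING_ZEROS);
}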
Why does it work with the first JSON?
Because by default Jackson's MapperFeature.ALLOW_COERCION_OF_SCALARS is turned on.
From its javadoc:
When feature is enabled, conversions from JSON String are allowed, as long as textual value matches (for example, String "true" is allowed as equivalent of JSON boolean token true; or String "1.0" for double).
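To see the coercion at work, here is a minimal sketch (assuming Jackson 2.x and the FooRequest class above): with the feature disabled, even the first JSON with its quoted numbers is rejected.
ObjectMapper strictMapper = new ObjectMapper();
strictMapper.configure(MapperFeature.ALLOW_COERCION_OF_SCALARS, false);
// The quoted value is no longer coerced from String to Long,
// so this call throws a MismatchedInputException.
strictMapper.readValue("{\"name\": \"Test\", \"fooId\": \"0001234567\"}", FooRequest.class);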
As for "because all these values are starting with 0?": it turns out that yes, it is, but for a slightly different reason (the leading zeroes make the JSON itself invalid, rather than failing validation).

Related

Spring validation: How to validate that a field in RequestBody is a string and not a number being casted [duplicate]

I want to detect numeric values inserted without quotation marks (i.e., not as strings) in JSON sent through the request body of a POST request:
For example, this would be the wrong JSON format as the age field does not contain quotation marks:
{
"Student":{
"Name": "John",
"Age": 12
}
}
The correct JSON format would be:
{
"Student":{
"Name": "John",
"Age": "12"
}
}
In my code, I've defined the datatype of the age field as a String, hence "12" should be the correct input. However, no error message is thrown, even when 12 is used.
It seems Jackson automatically converts the numerical values into strings. How can I identify numerical values and return a message?
This is what I tried so far to identify these numerical values:
public List<Student> getMultiple(StudentDTO Student) {
if(Student.getAge().getClass()==String.class) {
System.out.println("Age entered correctly as String");
} else{
System.out.println("Please insert age value inside inverted commas");
}
}
However, this is not printing "Please insert age value inside inverted commas" to the console when the age is inserted without quotation marks.
If you're using Spring Boot, it uses Jackson to parse JSON by default. There's no configuration option within Jackson to disable this feature, as mentioned within this issue. The solution is to register a custom JsonDeserializer that throws an exception as soon as it encounters any token other than JsonToken.VALUE_STRING:
public class StringOnlyDeserializer extends JsonDeserializer<String> {
@Override
public String deserialize(JsonParser jsonParser, DeserializationContext deserializationContext) throws IOException {
if (!JsonToken.VALUE_STRING.equals(jsonParser.getCurrentToken())) {
throw deserializationContext.wrongTokenException(jsonParser, String.class, JsonToken.VALUE_STRING, "No type conversion is allowed, string expected");
} else {
return jsonParser.getValueAsString();
}
}
}
If you only want to apply this to certain classes or fields, you can annotate those with the @JsonDeserialize annotation. For example:
public class Student {
private String name;
@JsonDeserialize(using = StringOnlyDeserializer.class)
private String age;
// TODO: Getters + Setters
}
Alternatively, you can register a custom Jackson module by registering a SimpleModule bean that automatically deserializes all strings using the StringOnlyDeserializer. For example:
@Bean
public Module customModule() {
SimpleModule customModule = new SimpleModule();
customModule.addDeserializer(String.class, new StringOnlyDeserializer());
return customModule;
}
This is similar to what Eugen suggested.
If you run your application now and pass an invalid age, such as 12, 12.3 or [12], it will throw an exception with a message like:
JSON parse error: Unexpected token (VALUE_NUMBER_FLOAT), expected VALUE_STRING: Not allowed to parse numbers to string; nested exception is com.fasterxml.jackson.databind.exc.MismatchedInputException: Unexpected token (VALUE_NUMBER_FLOAT), expected VALUE_STRING: Not allowed to parse numbers to string\n at [Source: (PushbackInputStream); line: 3, column: 9] (through reference chain: com.example.xyz.Student[\"age\"])
By default, Jackson converts scalar values to String when the target field is of type String. The idea is to create a custom deserializer for String and comment out the conversion part:
package jackson.deserializer;
import java.io.IOException;
import com.fasterxml.jackson.core.*;
import com.fasterxml.jackson.databind.*;
import com.fasterxml.jackson.databind.deser.std.StringDeserializer;
public class CustomStringDeserializer extends StringDeserializer
{
public final static CustomStringDeserializer instance = new CustomStringDeserializer();
@Override
public String deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
if (p.hasToken(JsonToken.VALUE_STRING)) {
return p.getText();
}
JsonToken t = p.getCurrentToken();
// [databind#381]
if (t == JsonToken.START_ARRAY) {
return _deserializeFromArray(p, ctxt);
}
// need to gracefully handle byte[] data, as base64
if (t == JsonToken.VALUE_EMBEDDED_OBJECT) {
Object ob = p.getEmbeddedObject();
if (ob == null) {
return null;
}
if (ob instanceof byte[]) {
return ctxt.getBase64Variant().encode((byte[]) ob, false);
}
// otherwise, try conversion using toString()...
return ob.toString();
}
// allow coercions for other scalar types
// 17-Jan-2018, tatu: Related to [databind#1853] avoid FIELD_NAME by ensuring it's
// "real" scalar
/*if (t.isScalarValue()) {
String text = p.getValueAsString();
if (text != null) {
return text;
}
}*/
return (String) ctxt.handleUnexpectedToken(_valueClass, p);
}
}
Now register this deserializer:
@Bean
public Module customStringDeserializer() {
SimpleModule module = new SimpleModule();
module.addDeserializer(String.class, CustomStringDeserializer.instance);
return module;
}
When an integer is sent and a String is expected, this is the error:
{"timestamp":"2019-04-24T15:15:58.968+0000","status":400,"error":"Bad
Request","message":"JSON parse error: Cannot deserialize instance of
java.lang.String out of VALUE_NUMBER_INT token; nested exception is
com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot
deserialize instance of java.lang.String out of VALUE_NUMBER_INT
token\n at [Source: (PushbackInputStream); line: 3, column: 13]
(through reference chain:
org.hello.model.Student[\"age\"])","path":"/hello/echo"}

Testing a REST endpoint with Spring, MongoDB using ObjectIds

I'm new to MongoDB and I'm writing a series of unit tests for a Mongo-backed REST web service. Here's a simple test for a /clients/{id} endpoint:
@RunWith(MockitoJUnitRunner.class)
public class ClientsControllerMockMvcStandaloneTest {
private MockMvc mvc;
@Mock
private ClientsRepository clientsRepository;
@Mock
private ModelMapper modelMapper;
@InjectMocks
private ClientsController clientsController;
private ExceptionHandlerExceptionResolver createExceptionResolver() {
ExceptionHandlerExceptionResolver exceptionResolver = new ExceptionHandlerExceptionResolver() {
@SuppressWarnings("ConstantConditions")
@Override
protected ServletInvocableHandlerMethod getExceptionHandlerMethod(final HandlerMethod handlerMethod,
final Exception exception) {
final Method method = new ExceptionHandlerMethodResolver(RestResponseEntityExceptionHandler.class)
.resolveMethod(exception);
final RestResponseEntityExceptionHandler handler = new RestResponseEntityExceptionHandler();
return new ServletInvocableHandlerMethod(handler, method);
}
};
exceptionResolver.getMessageConverters().add(new MappingJackson2HttpMessageConverter());
exceptionResolver.afterPropertiesSet();
return exceptionResolver;
}
@Before
public void setup() {
JacksonTester.initFields(this, new ObjectMapper());
mvc = MockMvcBuilders.standaloneSetup(clientsController)
.setHandlerExceptionResolvers(createExceptionResolver())
.build();
}
// GET /api/clients/{id} 200
@Test
public void findById_ClientEntryFound_ShouldReturnFoundClientEntry() throws Exception {
final ObjectId id = new ObjectId();
final Client client = Client.builder()
.id(id)
.name("Microsoft")
.build();
final ClientDTO clientDTO = ClientDTO.builder()
.id(id)
.name("Microsoft")
.build();
when(clientsRepository.findById(id))
.thenReturn(Optional.of(client));
when(modelMapper.map(client, ClientDTO.class))
.thenReturn(clientDTO);
mvc.perform(get("/clients/" + id.toString())
.accept(TestUtils.APPLICATION_JSON_UTF8))
.andExpect(content().contentType(TestUtils.APPLICATION_JSON_UTF8))
.andExpect(status().isOk())
.andExpect(jsonPath("$.id", is(id)))
.andExpect(jsonPath("$.name", is("Microsoft")))
.andDo(MockMvcResultHandlers.print());
verify(modelMapper, times(1)).map(client, ClientDTO.class);
verify(clientsRepository, times(1)).findById(id);
verifyNoMoreInteractions(clientsRepository);
}
}
I expect this to work, but I'm getting the following:
java.lang.AssertionError: JSON path "$.id"
Expected: is <5c9b9a0289d2b311b150b92c>
but: was <{timestamp=1553701378, machineIdentifier=9032371, processIdentifier=4529, counter=5290284, timeSecond=1553701378, time=1553701378000, date=1553701378000}>
Any help would be appreciated (including any pointers if you think my general approach could be improved!).
Cheers!
Jackson doesn't know your ObjectId instance should be serialized as 5c9b9a0289d2b311b150b92c and not as:
{
"timestamp": 1553701378,
"machineIdentifier": 9032371,
"processIdentifier": 4529,
"counter": 5290284,
"time": 1553701378000,
"date": 1553701378000,
"timeSecond": 1553701378
}
Luckily it's easy to fix. The ObjectId#toString() method (which internally invokes ObjectId#toHexString()) allows you to convert the ObjectId instance into its 24-character hexadecimal string representation.
So you could use @JsonSerialize along with ToStringSerializer to have the ObjectId instance represented as a string:
@JsonSerialize(using = ToStringSerializer.class)
private ObjectId id;
Then, in your test, use the ObjectId#toString() method (or ObjectId#toHexString()) for the assertion:
.andExpect(jsonPath("$.id", is(id.toString())))
Alternatively, assuming that you are using Spring Data for MongoDB, instead of ObjectId, you could use:
@Id
private String id;
You could also handle the conversion of ObjectId to String in your mapper layer.
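For example, a sketch of that mapper-layer approach with ModelMapper, assuming ClientDTO exposes its id as a String and has a setter for it (both are assumptions, not taken from the question):
ModelMapper modelMapper = new ModelMapper();
// Hypothetical type map: convert the ObjectId to its 24-character hex string
// while mapping the entity to the DTO, so no Jackson annotation is needed.
Converter<ObjectId, String> toHexString =
        ctx -> ctx.getSource() == null ? null : ctx.getSource().toHexString();
modelMapper.typeMap(Client.class, ClientDTO.class)
        .addMappings(mapping -> mapping.using(toHexString).map(Client::getId, ClientDTO::setId));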

Gson: How do I deserialize an inner JSON object to a map if the property name is not fixed?

My client retrieves JSON content as below:
{
"table": "tablename",
"update": 1495104575669,
"rows": [
{"column5": 11, "column6": "yyy"},
{"column3": 22, "column4": "zzz"}
]
}
In the rows array, the keys are not fixed. I want to retrieve the keys and values and save them into a Map using Gson 2.8.x.
How can I configure Gson to deserialize this?
Here is my idea:
public class Dataset {
private String table;
private long update;
private List<Rows> lists; <-- little confused here.
or private List<HashMap<String, Object>> lists;
Setter/Getter
}
public class Rows {
private HashMap<String, Object> map;
....
}
Dataset k = gson.fromJson(jsonStr, Dataset.class);
log.info(k.getRows().size()); <-- I get two null objects
Thanks.
Gson does not support such a thing out of the box. It would be nice if you could make the property name fixed. If not, you have a few options that will probably help:
Option #1: Just rename the Dataset.lists field to Dataset.rows, if the property name is in fact fixed to rows.
Option #2: If the possible name set is known in advance, tell Gson to pick alternative names using @SerializedName.
Option #3: If the possible name set is really unknown and may change in the future, you might want to make it fully dynamic using a custom TypeAdapter (streaming mode; requires less memory, but is harder to use) or a custom JsonDeserializer (object mode; requires more memory to store intermediate tree views, but is easy to use), registered with GsonBuilder.
For option #2, you can simply add the alternative names:
@SerializedName(value = "lists", alternate = "rows")
final List<Map<String, Object>> lists;
For option #3, bind a downstream List<Map<String, Object>> type adapter and try to detect the name dynamically. Note that I omit a Rows deserialization strategy for simplicity (and I believe you might want to remove the Rows class in favor of a simple Map<String, Object>). Another note: use Map and try not to specify collection implementations -- hash maps are unordered, but telling Gson you're going to deal with Map lets it pick an ordered map such as LinkedTreeMap (Gson internals) or LinkedHashMap, which may be important for datasets.
// Type tokens are immutable and can be declared constants
private static final TypeToken<String> stringTypeToken = new TypeToken<String>() {
};
private static final TypeToken<Long> longTypeToken = new TypeToken<Long>() {
};
private static final TypeToken<List<Map<String, Object>>> stringToObjectMapListTypeToken = new TypeToken<List<Map<String, Object>>>() {
};
private static final Gson gson = new GsonBuilder()
.registerTypeAdapterFactory(new TypeAdapterFactory() {
@Override
public <T> TypeAdapter<T> create(final Gson gson, final TypeToken<T> typeToken) {
if ( typeToken.getRawType() != Dataset.class ) {
return null;
}
// If the actual type token represents the Dataset class, then pick the bunch of downstream type adapters
final TypeAdapter<String> stringTypeAdapter = gson.getDelegateAdapter(this, stringTypeToken);
final TypeAdapter<Long> primitiveLongTypeAdapter = gson.getDelegateAdapter(this, longTypeToken);
final TypeAdapter<List<Map<String, Object>>> stringToObjectMapListTypeAdapter = gson.getDelegateAdapter(this, stringToObjectMapListTypeToken);
// And compose the bunch into a single dataset type adapter
final TypeAdapter<Dataset> datasetTypeAdapter = new TypeAdapter<Dataset>() {
@Override
public void write(final JsonWriter out, final Dataset dataset) {
// Omitted for brevity
throw new UnsupportedOperationException();
}
@Override
public Dataset read(final JsonReader in)
throws IOException {
in.beginObject();
String table = null;
long update = 0;
List<Map<String, Object>> lists = null;
while ( in.hasNext() ) {
final String name = in.nextName();
switch ( name ) {
case "table":
table = stringTypeAdapter.read(in);
break;
case "update":
update = primitiveLongTypeAdapter.read(in);
break;
default:
lists = stringToObjectMapListTypeAdapter.read(in);
break;
}
}
in.endObject();
return new Dataset(table, update, lists);
}
}.nullSafe(); // Making the type adapter null-safe
@SuppressWarnings("unchecked")
final TypeAdapter<T> typeAdapter = (TypeAdapter<T>) datasetTypeAdapter;
return typeAdapter;
}
})
.create();
final Dataset dataset = gson.fromJson(jsonReader, Dataset.class);
System.out.println(dataset.lists);
The code above would then print:
[{column5=11.0, column6=yyy}, {column3=22.0, column4=zzz}]
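For completeness, the object-mode flavor of option #3 could look like the following sketch; it assumes the same Dataset(table, update, lists) constructor as above and treats whatever key is neither "table" nor "update" as the rows property:
final class DatasetJsonDeserializer implements JsonDeserializer<Dataset> {
    private static final Type listOfMapsType = new TypeToken<List<Map<String, Object>>>() {}.getType();
    @Override
    public Dataset deserialize(final JsonElement json, final Type typeOfT, final JsonDeserializationContext context) {
        final JsonObject object = json.getAsJsonObject();
        final String table = object.get("table").getAsString();
        final long update = object.get("update").getAsLong();
        List<Map<String, Object>> lists = null;
        for ( final Map.Entry<String, JsonElement> e : object.entrySet() ) {
            // Whatever the remaining property is called, bind it to the list of maps
            if ( !e.getKey().equals("table") && !e.getKey().equals("update") ) {
                lists = context.deserialize(e.getValue(), listOfMapsType);
            }
        }
        return new Dataset(table, update, lists);
    }
}
It would be registered with new GsonBuilder().registerTypeAdapter(Dataset.class, new DatasetJsonDeserializer()).create().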

Define prefix root node for Jackson Serialization/Deserialization YAML document to POJO with Prefix

I found https://github.com/FasterXML/jackson-dataformat-yaml to deserialize/serialize YAML files. However, I'm having a hard time deserializing/serializing the following:
I want to define a prefix to the actual document to be parsed as a POJO. Similar to a subtree of the document.
I want to define the POJO that represents the simple object representation instead of creating multiple objects.
The error "Unrecognized field "spring" (class ConfigServerProperties), not marked as ignorable (one known property: "repos"])" is shown, but I don't know how to make the prefix "spring.cloud.config.server.git" the root element of the POJO.
Document
spring:
  cloud:
    config:
      server:
        git:
          repos:
            publisher:
              uri: 'https://github.company.com/toos/spring-cloud-config-publisher-config'
              cloneOnStart: true
              username: myuser
              password: password
              pullOnRequest: false
              differentProperty: My Value
            config_test_server_config:
              uri: 'https://github.company.com/mdesales/config-test-server-config'
              cloneOnStart: true
              username: 226b4bb85aa131cd6393acee9c484ec426111d16
              password: ""
              completelyDifferentProp: this is a different one
For this document, the requirements are as follows:
* I want to define the prefix as "spring.cloud.config.server.git".
* I want to create a POJO that represents the object.
POJO
I created the following POJOs to represent this.
ConfigServerProperties: represents the top POJO containing the map of repos.
ConfigServerOnboard: represents each of the elements of the document.
The properties are stored in a map, so that we can add as many different properties as needed.
Each class is as follows:
public class ConfigServerProperties {
private Map<String, ConfigServerOnboard> repos;
public void setRepos(Map<String, ConfigServerOnboard> repos) {
this.repos = repos;
}
public Map<String, ConfigServerOnboard> getRepos() {
return this.repos;
}
}
The second class is as follows:
public class ConfigServerOnboard {
private Map<String, String> properties;
public Map<String, String> getProperties() {
return properties;
}
public void setProperties(Map<String, String> properties) {
this.properties = properties;
}
}
Deserialize
The deserialization strategy I tried is as follows:
public static ConfigServerProperties parseProperties(File filePath)
throws JsonParseException, JsonMappingException, IOException {
ObjectMapper mapper = new ObjectMapper(new YAMLFactory());
JsonNodeFactory jsonNodeFactory = new JsonNodeFactory(false);
jsonNodeFactory.textNode("spring.cloud.config");
// tried to use this attempting to get the prefix
mapper.setNodeFactory(jsonNodeFactory);
ConfigServerProperties user = mapper.readValue(filePath, ConfigServerProperties.class);
return user;
}
Error Returned
Exception in thread "main" com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "spring" (class com.company.platform.config.onboarding.files.config.model.ConfigServerProperties), not marked as ignorable (one known property: "repos"])
at [Source: /tmp/config-server-onboards.yml; line: 3, column: 3] (through reference chain: com.company.platform.config.onboarding.files.config.model.ConfigServerProperties["spring"])
at com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:62)
at com.fasterxml.jackson.databind.DeserializationContext.handleUnknownProperty(DeserializationContext.java:834)
at com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:1094)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1470)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownVanilla(BeanDeserializerBase.java:1448)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:282)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:140)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3798)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2740)
at com.company.platform.config.onboarding.files.config.model.ConfigServerProperties.parseProperties(ConfigServerProperties.java:37)
at com.company.platform.config.onboarding.files.config.model.ConfigServerProperties.main(ConfigServerProperties.java:42)
Edit 1: Looking for a possible Spring Boot solution
I'm open to solutions using Spring Boot's @ConfigurationProperties("spring.cloud.config.server.git"). That way, we could have the following:
@ConfigurationProperties("spring.cloud.config.server.git")
public class Configuration {
private Map<String, Map<String, String>> repos = new LinkedHashMap<>();
// getter/setter
}
Questions
How to set the root element of the document?
Deserialization must read the document and produce instances of the POJOs.
Serialization must produce the same document with updated values.
I had to come up with the following:
Create 6 classes, each of them with the property required for the prefix "spring.cloud.config.server.git"
SpringCloudConfigSpring.java
SpringCloudConfigCloud.java
SpringCloudConfigConfig.java
SpringCloudConfigServer.java
SpringCloudConfigGit.java
The holder of all of them is SpringCloudConfigFile.java.
The holder and all the classes have a reference to the next property, which has a reference to the next, etc, with their own setter/getter methods as usual.
public class SpringCloudConfigSpring {
private SpringCloudConfigCloud cloud;
public SpringCloudConfigCloud getCloud() {
return cloud;
}
public void setCloud(SpringCloudConfigCloud cloud) {
this.cloud = cloud;
}
}
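The top-level holder follows the same pattern; this is a sketch inferred from the getter chain used in the verification code below:
public class SpringCloudConfigFile {
    // Only the root "spring" key of the YAML document is exposed here.
    private SpringCloudConfigSpring spring;
    public SpringCloudConfigSpring getSpring() {
        return spring;
    }
    public void setSpring(SpringCloudConfigSpring spring) {
        this.spring = spring;
    }
}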
The map representation was easy to implement.
For the last class, I used a TreeMap to keep the keys sorted and an inner map to represent any property that may be added, without changing the representation.
public class SpringCloudConfigGit {
TreeMap<String, Map<String, Object>> repos;
public TreeMap<String, Map<String, Object>> getRepos() {
return repos;
}
public void setRepos(TreeMap<String, Map<String, Object>> repos) {
this.repos = repos;
}
}
Results
The verification code is as follows:
public static void main(String[] args) throws JsonParseException, JsonMappingException, IOException {
File config = new File("/tmp/config-server-onboards.yml");
SpringCloudConfigFile props = ConfigServerProperties.parseProperties(config);
props.getSpring().getCloud().getConfig().getServer().getGit().getRepos().forEach((appName, properties) -> {
System.out.println("################## " + appName + " #######################3");
System.out.println(properties);
if (appName.equals("github_pages_reference")) {
properties.put("name", "Marcello");
properties.put("cloneOnStart", true);
}
System.out.println("");
});
saveProperties(new File(config.getAbsoluteFile().getParentFile(), "updated-config-onboards.yml"), props);
}
The output is as follows:
################## config_onboarding #######################3
{uri=https://github.company.com/servicesplatform-tools/spring-cloud-config-onboarding-config, cloneOnStart=true, username=226b4bb85aa131cd6393acee9c484ec426111d16, password=, pullOnRequest=false}
################## config_test_server_config #######################3
{uri=https://github.company.com/rlynch2/config-test-server-config, cloneOnStart=true, username=226b4bb85aa131cd6393acee9c484ec426111d16, password=, pullOnRequest=false}
################## github_pages_reference #######################3
{uri=https://github.company.com/servicesplatform-tools/spring-cloud-config-reference-service-config, cloneOnStart=true, username=226b4bb85aa131cd6393acee9c484ec426111d16, password=, pullOnRequest=false}
There are obvious improvements required:
I'd like to have a solution with a single class;
I'd like to have an ObjectMapper method that specifies the "subtree" of the YAML object tree that I'd like to parse (see the sketch after this list).
Maybe a more sophisticated Spring Boot-like @ConfigurationProperties("spring.cloud.config.server.git") would help.
Helper methods for loading and saving the state of these instances.
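For the "subtree" wish, one possible sketch (an assumption on my part, not something wired into the code above): read the document as a tree, select the prefix with a JSON Pointer, and bind only that node to a POJO that starts at the git level, such as the original ConfigServerProperties.
ObjectMapper mapper = new ObjectMapper(new YAMLFactory());
// "/spring/cloud/config/server/git" is the JSON Pointer form of the prefix.
JsonNode gitNode = mapper.readTree(filePath).at("/spring/cloud/config/server/git");
ConfigServerProperties repos = mapper.treeToValue(gitNode, ConfigServerProperties.class);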
Load Method
public static SpringCloudConfigFile parseProperties(File filePath)
throws JsonParseException, JsonMappingException, IOException {
ObjectMapper mapper = new ObjectMapper(new YAMLFactory());
SpringCloudConfigFile file = mapper.readValue(filePath, SpringCloudConfigFile.class);
return file;
}
Save Properties
public static void saveProperties(File filePath, SpringCloudConfigFile file) throws JsonGenerationException, JsonMappingException, IOException {
ObjectMapper mapper = new ObjectMapper(new YAMLFactory());
mapper.writeValue(filePath, file);
}
File Saved
It maintained the sorted keys as implemented.

ApacheConnector does not process request headers that were set in a WriterInterceptor

I am experiencing problems when configuring my Jersey Client with the ApacheConnector. It seems to ignore all request headers that I define in a WriterInterceptor. I can tell that the WriterInterceptor is called when I set a breakpoint within WriterInterceptor#aroundWriteTo(WriterInterceptorContext). Contrary to that, I can observe that the modification of an InputStream is preserved.
Here is a runnable example demonstrating my problem:
public class ApacheConnectorProblemDemonstration extends JerseyTest {
private static final Logger LOGGER = Logger.getLogger(JerseyTest.class.getName());
private static final String QUESTION = "baz", ANSWER = "qux";
private static final String REQUEST_HEADER_NAME_CLIENT = "foo-cl", REQUEST_HEADER_VALUE_CLIENT = "bar-cl";
private static final String REQUEST_HEADER_NAME_INTERCEPTOR = "foo-ic", REQUEST_HEADER_VALUE_INTERCEPTOR = "bar-ic";
private static final int MAX_CONNECTIONS = 100;
private static final String PATH = "/";
@Path(PATH)
public static class TestResource {
@POST
public String handle(InputStream questionStream,
@HeaderParam(REQUEST_HEADER_NAME_CLIENT) String client,
@HeaderParam(REQUEST_HEADER_NAME_INTERCEPTOR) String interceptor)
throws IOException {
assertEquals(REQUEST_HEADER_VALUE_CLIENT, client);
// Here, the header that was set in the client's writer interceptor is lost.
assertEquals(REQUEST_HEADER_VALUE_INTERCEPTOR, interceptor);
// However, the input stream got gzipped so the WriterInterceptor has been partly applied.
assertEquals(QUESTION, new Scanner(new GZIPInputStream(questionStream)).nextLine());
return ANSWER;
}
}
@Provider
@Priority(Priorities.ENTITY_CODER)
public static class ClientInterceptor implements WriterInterceptor {
@Override
public void aroundWriteTo(WriterInterceptorContext context)
throws IOException, WebApplicationException {
context.getHeaders().add(REQUEST_HEADER_NAME_INTERCEPTOR, REQUEST_HEADER_VALUE_INTERCEPTOR);
context.setOutputStream(new GZIPOutputStream(context.getOutputStream()));
context.proceed();
}
}
@Override
protected Application configure() {
enable(TestProperties.LOG_TRAFFIC);
enable(TestProperties.DUMP_ENTITY);
return new ResourceConfig(TestResource.class);
}
@Override
protected Client getClient(TestContainer tc, ApplicationHandler applicationHandler) {
ClientConfig clientConfig = tc.getClientConfig() == null ? new ClientConfig() : tc.getClientConfig();
clientConfig.property(ApacheClientProperties.CONNECTION_MANAGER, makeConnectionManager(MAX_CONNECTIONS));
clientConfig.register(ClientInterceptor.class);
// If I do not use the Apache connector, I avoid this problem.
clientConfig.connector(new ApacheConnector(clientConfig));
if (isEnabled(TestProperties.LOG_TRAFFIC)) {
clientConfig.register(new LoggingFilter(LOGGER, isEnabled(TestProperties.DUMP_ENTITY)));
}
configureClient(clientConfig);
return ClientBuilder.newClient(clientConfig);
}
private static ClientConnectionManager makeConnectionManager(int maxConnections) {
PoolingClientConnectionManager connectionManager = new PoolingClientConnectionManager();
connectionManager.setMaxTotal(maxConnections);
connectionManager.setDefaultMaxPerRoute(maxConnections);
return connectionManager;
}
@Test
public void testInterceptors() throws Exception {
Response response = target(PATH)
.request()
.header(REQUEST_HEADER_NAME_CLIENT, REQUEST_HEADER_VALUE_CLIENT)
.post(Entity.text(QUESTION));
assertEquals(200, response.getStatus());
assertEquals(ANSWER, response.readEntity(String.class));
}
}
I want to use the ApacheConnector in order to optimize for concurrent requests via the PoolingClientConnectionManager. Did I mess up the configuration?
PS: The exact same problem occurs when using the GrizzlyConnector.
After further research, I assume that this is rather a misbehavior in the default Connector that uses a HttpURLConnection. As I explained in this other self-answered question of mine, the documentation states:
Whereas filters are primarily intended to manipulate request and
response parameters like HTTP headers, URIs and/or HTTP methods,
interceptors are intended to manipulate entities, via manipulating
entity input/output streams
A WriterInterceptor is not supposed to manipulate the header values, while a {Client,Server}RequestFilter is not supposed to manipulate the entity stream. If you need to do both, the two components should be bundled within a javax.ws.rs.core.Feature or within the same class implementing both interfaces. (This can be problematic if you need to set two different Priority values, though.)
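A sketch of that bundling, assuming the header is moved into a ClientRequestFilter while only the gzip wrapping stays in the WriterInterceptor (the class name is mine):
@Provider
@Priority(Priorities.ENTITY_CODER)
public static class HeaderAndGzipFeature implements ClientRequestFilter, WriterInterceptor {
    @Override
    public void filter(ClientRequestContext requestContext) {
        // Headers are request parameters, so they belong in the filter.
        requestContext.getHeaders().add(REQUEST_HEADER_NAME_INTERCEPTOR, REQUEST_HEADER_VALUE_INTERCEPTOR);
    }
    @Override
    public void aroundWriteTo(WriterInterceptorContext context) throws IOException, WebApplicationException {
        // Only the entity stream is manipulated in the interceptor.
        context.setOutputStream(new GZIPOutputStream(context.getOutputStream()));
        context.proceed();
    }
}
Registering this class instead of ClientInterceptor should then make the header visible to the ApacheConnector as well.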
All this is very unfortunate though, since JerseyTest uses the default Connector based on HttpURLConnection, such that all my unit tests succeeded while the real-life application misbehaved because it was configured with an ApacheConnector. Also, rather than silently suppressing changes, I wish Jersey would throw some exceptions. (This is a general issue I have with Jersey: when I, for example, used too new a version of the ClientConnectionManager, where the interface was renamed to HttpClientConnectionManager, I was simply informed in a one-line log statement that all my configuration efforts were ignored. I did not discover this log statement until very late in development.)
