Keycloak 400 bad request when [state] is old - spring-boot

I have a Spring Boot application protected by Keycloak (15.0.2).
Let's say I have a browser with two tabs open for the same user, and that user is authenticated.
Now I log out in tab 1 and get the Keycloak login form with the following URL:
http://localhost:8180/auth/realms/AUTOTEST_PG/protocol/openid-connect/auth?response_type=code&client_id=mcd-client&state=e01b8a88-6945-4c56-9ee1-46fb156d33be&login=true&scope=openid
Now I log out in tab 2 and get the Keycloak login form with the following URL:
http://localhost:8180/auth/realms/AUTOTEST_PG/protocol/openid-connect/auth?response_type=code&client_id=mcd-client&state=b9607387-7a05-4226-8313-4c5c80a1b145&login=true&scope=openid
Next, I try to log in from tab 1 and get a 400 Bad Request response.
Looking at the Spring Boot log, I see the following:
14554520 2021-12-14 00:02:06,100 [ajp-nio-0.0.0.0-8009-exec-8] DEBUG sample.controller.filters.MyKeycloakAuthenticationProcessingFilter ?:? - - - - - attemptAuthentication
14554520 2021-12-14 00:02:06,100 [ajp-nio-0.0.0.0-8009-exec-8] DEBUG sample.security.RealmProvider ?:? - - - - - getRealmName - requestUri: https://10.161.54.36/sso/login?redirect_url=/sctools/&state=e01b8a88-6945-4c56-9ee1-46fb156d33be&session_state=4fe8d159-5ada-4522-a422-07c5a0ce9d15&code=c5dda8ef-0fd9-4323-9922-fa5c73b28f89.4fe8d159-5ada-4522-a422-07c5a0ce9d15.442a440e-6df0-40c7-b8cb-3a52123430a0
14554520 2021-12-14 00:02:06,100 [ajp-nio-0.0.0.0-8009-exec-8] WARN org.keycloak.adapters.OAuthRequestAuthenticator ?:? - - - - - state parameter invalid
14554520 2021-12-14 00:02:06,100 [ajp-nio-0.0.0.0-8009-exec-8] WARN org.keycloak.adapters.OAuthRequestAuthenticator ?:? - - - - - cookie: b9607387-7a05-4226-8313-4c5c80a1b145
14554520 2021-12-14 00:02:06,100 [ajp-nio-0.0.0.0-8009-exec-8] WARN org.keycloak.adapters.OAuthRequestAuthenticator ?:? - - - - - queryParam: e01b8a88-6945-4c56-9ee1-46fb156d33be
14554521 2021-12-14 00:02:06,101 [ajp-nio-0.0.0.0-8009-exec-8] DEBUG sample.controller.filters.MyKeycloakAuthenticationProcessingFilter ?:? - - - - - unsuccessfulAuthentication - committed: true
14554521 2021-12-14 00:02:06,101 [ajp-nio-0.0.0.0-8009-exec-8] DEBUG sample.security.MyKeycloakAuthenticationFailureHandler ?:? - - - - - onAuthenticationFailure org.keycloak.adapters.springsecurity.KeycloakAuthenticationException: Invalid authorization header, see WWW-Authenticate header for details
14554521 2021-12-14 00:02:06,101 [ajp-nio-0.0.0.0-8009-exec-8] DEBUG sample.security.MyKeycloakAuthenticationFailureHandler ?:? - - - - - onAuthenticationFailure - response isCommitted - Status: 400
It seems that the state cookie and the state query parameter don't match, and that is why I get the Bad Request.
How can I log in from either tab without getting the Bad Request?
Note: if I go to tab 2 and log in, I get a message telling me that the user is already logged in.
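My understanding of what the adapter warnings describe, as a simplified sketch (this is illustrative, not the actual Keycloak adapter source): the adapter stores the state it generated in a cookie when it redirects to Keycloak, and tab 2's redirect overwrites the cookie that tab 1's callback still expects.

```java
// Simplified sketch of an OAuth state check (illustrative names, not Keycloak source).
public class StateCheckSketch {

    static int checkState(String stateCookie, String stateQueryParam) {
        // The adapter compares the state it saved in a cookie with the
        // state echoed back in the callback query string; a mismatch is
        // rejected as a possible CSRF attempt.
        if (stateCookie == null || !stateCookie.equals(stateQueryParam)) {
            return 400; // Bad Request
        }
        return 200;
    }

    public static void main(String[] args) {
        // Tab 2's login form overwrote the cookie, but tab 1's callback
        // still carries the old state value (values from the log above):
        System.out.println(checkState(
                "b9607387-7a05-4226-8313-4c5c80a1b145",
                "e01b8a88-6945-4c56-9ee1-46fb156d33be"));
    }
}
```

This is why only the most recently opened login form can complete successfully: there is a single state cookie per browser, shared by both tabs.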
UPDATE
When doing the same operations in the Keycloak admin console, logging out from one tab automatically logs out the second tab, and the URL carries a code_challenge parameter in addition to state, something like:
http://10.161.54.36:8180/auth/realms/master/protocol/openid-connect/auth?client_id=security-admin-console&redirect_uri=http%3A%2F%2F10.161.54.36%3A8180%2Fauth%2Fadmin%2Fmaster%2Fconsole%2F%23%2Frealms&state=c3015df8-4c50-4454-be8d-0555f25e3bd0&response_mode=fragment&response_type=code&scope=openid&nonce=acb2a07d-c4a3-4cd3-8a7a-e23580a97d14&code_challenge=2W6-09eeD_WEwtWct3a5MojpIQJMe-9brcOH-7fbT6A&code_challenge_method=S256
How is it done ?
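For reference, the code_challenge in that console URL is PKCE (RFC 7636) with the S256 method: the client derives the challenge from a random code_verifier as BASE64URL(SHA-256(verifier)) without padding. A minimal sketch in plain Java, using the test vector from RFC 7636 Appendix B (this shows how the parameter is computed; it does not by itself explain the cross-tab logout, which is a separate session-management feature):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class Pkce {

    // S256 code challenge: BASE64URL(SHA-256(code_verifier)), no padding (RFC 7636).
    static String codeChallenge(String verifier) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(verifier.getBytes(StandardCharsets.US_ASCII));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
    }

    public static void main(String[] args) throws Exception {
        // Verifier from RFC 7636 Appendix B:
        System.out.println(codeChallenge("dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"));
        // -> E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
    }
}
```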

Related

Spring Upgrade to 5.3.19 - Response Body truncated

We are using a legacy app built on Spring MVC that we recently upgraded to Spring 5.3.19. The application is deployed on WebSphere ND 9.0.5.3, and since the upgrade we are seeing the response string get truncated. Please see the log below, which shows the complete return string ["HUM150","HUM690"], while the browser receives only the truncated text ["HUM150
2022-05-17/16:18:49.600/GMT [WebContainer : 1] [INFO ] [WebAppclass AddNewRecordController] - Inside getExceptionRCDetails in AddNewRecordController class : ["HUM150","HUM690"]
2022-05-17/16:18:49.600/GMT [WebContainer : 1] [DEBUG] [org.springframework.web.servlet.mvc.method.annotation.RequestResponseBodyMethodProcessor] - Using 'text/plain', given [*/*] and supported [text/plain, */*, application/json, application/*+json]
2022-05-17/16:18:49.600/GMT [WebContainer : 1] [TRACE] [org.springframework.web.servlet.mvc.method.annotation.RequestResponseBodyMethodProcessor] - Writing ["["HUM150","HUM690"]"]
2022-05-17/16:18:49.601/GMT [WebContainer : 1] [TRACE] [org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter] - Applying default cacheSeconds=-1

How can I turn off netty client DNS connect retry?

I'm using the Netty HttpClient for Spring WebClient.
When I connect to www.naver.com with my hosts file set like this:
127.0.0.1 www.naver.com
125.209.222.142 www.naver.com
and run httpClient.get(), the result looks like this:
DEBUG r.n.r.PooledConnectionProvider - [id:83c8069f] Created a new pooled channel, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
DEBUG r.n.t.SslProvider - [id:83c8069f] SSL enabled using engine sun.security.ssl.SSLEngineImpl@5412b204 and SNI www.naver.com:443
DEBUG i.n.b.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
DEBUG i.n.b.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
DEBUG i.n.u.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@7ec1e7a5
DEBUG r.n.t.TransportConfig - [id:83c8069f] Initialized pipeline DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.loggingHandler = reactor.netty.transport.logging.ReactorNettyLoggingHandler), (reactor.left.sslReader = reactor.netty.tcp.SslProvider$SslReadHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
INFO reactor - [id:83c8069f] REGISTERED
DEBUG r.n.t.TransportConnector - [id:83c8069f] Connecting to [www.naver.com/127.0.0.1:443].
INFO reactor - [id:83c8069f] CONNECT: www.naver.com/127.0.0.1:443
INFO reactor - [id:83c8069f] CLOSE
DEBUG r.n.t.TransportConnector - [id:83c8069f] Connect attempt to [www.naver.com/127.0.0.1:443] failed.
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: www.naver.com/127.0.0.1:443
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:707)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
DEBUG r.n.r.PooledConnectionProvider - [id:0e3c272c] Created a new pooled channel, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
DEBUG r.n.t.SslProvider - [id:0e3c272c] SSL enabled using engine sun.security.ssl.SSLEngineImpl@47b241b and SNI www.naver.com:443
DEBUG r.n.t.TransportConfig - [id:0e3c272c] Initialized pipeline DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.loggingHandler = reactor.netty.transport.logging.ReactorNettyLoggingHandler), (reactor.left.sslReader = reactor.netty.tcp.SslProvider$SslReadHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
INFO reactor - [id:0e3c272c] REGISTERED
INFO reactor - [id:83c8069f] UNREGISTERED
DEBUG r.n.t.TransportConnector - [id:0e3c272c] Connecting to [www.naver.com/125.209.222.142:443].
INFO reactor - [id:0e3c272c] CONNECT: www.naver.com/125.209.222.142:443
DEBUG r.n.r.DefaultPooledConnectionProvider - [id:0e3c272c, L:/192.168.55.77:52910 - R:www.naver.com/125.209.222.142:443] Registering pool release on close event for channel
DEBUG r.n.r.PooledConnectionProvider - [id:0e3c272c, L:/192.168.55.77:52910 - R:www.naver.com/125.209.222.142:443] Channel connected, now: 1 active connections, 0 inactive connections and 0 pending acquire requests.
DEBUG i.n.u.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
DEBUG i.n.u.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
DEBUG i.n.u.Recycler - -Dio.netty.recycler.linkCapacity: 16
DEBUG i.n.u.Recycler - -Dio.netty.recycler.ratio: 8
DEBUG i.n.u.Recycler - -Dio.netty.recycler.delayedQueue.ratio: 8
INFO reactor - [id:0e3c272c, L:/192.168.55.77:52910 - R:www.naver.com/125.209.222.142:443] ACTIVE
As the log shows, the client attempts two connections. I don't want it to connect twice and spend extra time; it should just fail if it cannot connect.
Can I avoid this behavior?
This is being tracked in the following issue:
https://github.com/reactor/reactor-netty/issues/1822
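One workaround to consider while that issue is open (a sketch, assuming Reactor Netty's `ClientTransport.remoteAddress(...)` hook is available in your version; verify against your dependency): resolve the name once yourself and pin the client to a single address, so there is no second address to fall back to.

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;

public class PinFirstAddress {

    // Resolve the host once and keep only the first address. Handing this
    // single address to the client, e.g.
    //     HttpClient.create().remoteAddress(() -> firstAddress("www.naver.com", 443))
    // (Reactor Netty API, assumed available), leaves the client nothing to
    // retry against when the connect fails.
    static InetSocketAddress firstAddress(String host, int port) throws Exception {
        InetAddress[] all = InetAddress.getAllByName(host);
        return new InetSocketAddress(all[0], port);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(firstAddress("localhost", 443));
    }
}
```

The trade-off is that you lose the automatic failover to the second A record, which is exactly the behavior being asked about.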

Accept retrieving less fields than requested in MARS Web API?

I'm trying to download a 25-day-ahead forecast from the ECMWF MARS Web API for all of 2018. These forecasts (WAEF control forecast) are only published on Mondays and Thursdays, and here I'm running into problems fetching the data using the MARS Web API.
I tried requesting the intuitive range 2018-01-01/to/2018-12-31, but since there are five days a week without any fields to retrieve, the request fails.
My MARS request file is as follows:
retrieve,
class=od,
date=2018-01-01/to/2018-12-31,
expver=1,
param=229.140/245.140,
step=600/624/648/672,
stream=waef,
time=00:00:00,
type=cf,
target="output.grib"
Which results in the following response:
...
mars - INFO - 20190215.100826 - Welcome to MARS
mars - INFO - 20190215.100826 - MARS Client build stamp: 20190130224336
mars - INFO - 20190215.100826 - MARS Client version: 6.23.3
mars - INFO - 20190215.100826 - MIR version: 1.1.2
mars - INFO - 20190215.100826 - Using ecCodes version 2.10.1
mars - INFO - 20190215.100826 - Using odb_api version: 0.15.9 (file format version: 0.5)
mars - INFO - 20190215.100826 - Maximum retrieval size is 30.00 G
retrieve,target="output.grib",stream=waef,param=229.140/245.140,padding=0,step=600/624/648/672,expver=1,time=00:00:00,date=2018-01-01/to/2018-12-31,type=cf,class=od
mars - WARN - 20190215.100826 - For wave data, LEVTYPE forced to Surface
mars - INFO - 20190215.100826 - Automatic split by date is on
mars - INFO - 20190215.100826 - Request has been split into 12 monthly retrievals
mars - INFO - 20190215.100826 - Processing request 1
RETRIEVE,
CLASS = OD,
TYPE = CF,
STREAM = WAEF,
EXPVER = 0001,
REPRES = SH,
LEVTYPE = SFC,
PARAM = 229.140/245.140,
TIME = 0000,
STEP = 600/624/648/672,
DOMAIN = G,
TARGET = "output.grib",
PADDING = 0,
DATE = 20180101/20180102/20180103/20180104/20180105/20180106/20180107/20180108/20180109/20180110/20180111/20180112/20180113/20180114/20180115/20180116/20180117/20180118/20180119/20180120/20180121/20180122/20180123/20180124/20180125/20180126/20180127/20180128/20180129/20180130/20180131
mars - INFO - 20190215.100826 - Web API request id: xxx
mars - INFO - 20190215.100826 - Requesting 248 fields
mars - INFO - 20190215.100826 - Calling mars on 'marsod', callback on 36551
mars - INFO - 20190215.100827 - Server task is 228 [marsod]
mars - INFO - 20190215.100827 - Request cost: 72 fields, 17.2754 Mbytes on 1 tape, nodes: hpss [marsod]
2019-02-15 11:08:59 Request is active
mars - INFO - 20190215.102300 - Transfering 18114554 bytes
mars - WARN - 20190215.102301 - Visiting database marsod : expected 248, got 72
mars - ERROR - 20190215.102301 - Expected 248, got 72.
mars - ERROR - 20190215.102301 - Request failed
...
Is there any way to allow receiving fewer fields than requested, or any other elegant solution to this problem, other than requesting only the dates that fall on Mondays and Thursdays?
I managed to find the answer in the MARS documentation after all. Using expect = any in the control section solved the issue. More information can be found here: https://confluence.ecmwf.int/pages/viewpage.action?pageId=43521134
retrieve,
class=od,
date=2018-01-01/to/2018-12-31,
expver=1,
param=229.140/245.140,
step=600/624/648/672,
stream=waef,
time=00:00:00,
type=cf,
expect=any,
target="output.grib"

debugging dask - failed to detect client

My dask groupby script is failing with a MemoryError, so I set off to debug it. I'm running on a stand-alone computer.
I've updated the logging in the config.yaml file with
logging:
  distributed: debug
  bokeh: debug
  tornado: info
I'm running the dask.distributed example computation with the following updates:
from distributed import LocalCluster
c = LocalCluster()
and with the script's range amended to range(10000).
While the script runs (and completes successfully), I see the following logs in the Jupyter notebook, repeating until the script completes:
bokeh.server.tornado - DEBUG - [pid 3088] 0 clients connected
bokeh.server.tornado - DEBUG - [pid 3088] /system has 0 sessions with 0 unused
bokeh.server.tornado - DEBUG - [pid 3088] /stealing has 0 sessions with 0 unused
bokeh.server.tornado - DEBUG - [pid 3088] /workers has 0 sessions with 0 unused
bokeh.server.tornado - DEBUG - [pid 3088] /events has 0 sessions with 0 unused
bokeh.server.tornado - DEBUG - [pid 3088] /counters has 0 sessions with 0 unused
bokeh.server.tornado - DEBUG - [pid 3088] /tasks has 0 sessions with 0 unused
bokeh.server.tornado - DEBUG - [pid 3088] /status has 0 sessions with 0 unused
Why don't I see any workers running?
I get the following log when running c = LocalCluster():
bokeh.server.server - INFO - Starting Bokeh server version 0.12.4
bokeh.server.server - WARNING - Host wildcard '*' can expose the application to HTTP host header attacks. Host wildcard should only be used for testing purpose.
bokeh.server.server - WARNING - Host wildcard '*' can expose the application to HTTP host header attacks. Host wildcard should only be used for testing purpose.
bokeh.server.tornado - DEBUG - Allowed Host headers: ['*']
bokeh.server.tornado - DEBUG - These host origins can connect to the websocket: ['*']
bokeh.server.tornado - DEBUG - Patterns are:
bokeh.server.tornado - DEBUG - [('/system/?', ...), ('/system/ws', ...), ('/system/autoload.js', ...),
('/stealing/?', ...), ('/stealing/ws', ...), ('/stealing/autoload.js', ...),
('/workers/?', ...), ('/workers/ws', ...), ('/workers/autoload.js', ...),
('/events/?', ...), ('/events/ws', ...), ('/events/autoload.js', ...),
('/counters/?', ...), ('/counters/ws', ...), ('/counters/autoload.js', ...),
('/tasks/?', ...), ('/tasks/ws', ...), ('/tasks/autoload.js', ...),
('/status/?', ...), ('/status/ws', ...), ('/status/autoload.js', ...),
('/?', ...), ('/static/(.*)', ...)]
The logging messages that you're seeing are from the Bokeh diagnostic dashboard; they're essentially saying that no one is looking at the diagnostic web page. I don't think these messages concern you, but you might want to decrease the verbosity of your Bokeh logging (for example, set bokeh: info in the same logging section of your config).
Debug-level logging is almost always too verbose for users; it tends to be used by developers when debugging.

jersey + spring-boot can't return the correct HTTP status [duplicate]

I've encountered the same issue as in this question, using Spring Boot 1.3.0 and not having my controllers annotated with @RestController, just @Path and @Service. As the OP in that question says,
this is, to me, anything but sensible
I also can't understand why they would have it redirect to /error. It is very likely that I'm missing something, because I can only give back 404s or 200s to the client.
My problem is that his solution doesn't seem to work with 1.3.0, so I have the following request flow: let's say my code throws a NullPointerException. It'll be handled by one of my ExceptionMappers:
@Provider
public class GeneralExceptionMapper implements ExceptionMapper<Throwable> {

    private static final Logger LOGGER = LoggerFactory.getLogger(GeneralExceptionMapper.class);

    @Override
    public Response toResponse(Throwable exception) {
        LOGGER.error(exception.getLocalizedMessage());
        return Response.status(Response.Status.INTERNAL_SERVER_ERROR).build();
    }
}
My code returns a 500, but instead of sending it back to the client, the container tries to redirect to /error. If I don't have a resource mapped there, it sends back a 404.
2015-12-16 18:33:21.268 INFO 9708 --- [nio-8080-exec-1] o.glassfish.jersey.filter.LoggingFilter : 1 * Server has received a request on thread http-nio-8080-exec-1
1 > GET http://localhost:8080/nullpointerexception
1 > accept: */*
1 > host: localhost:8080
1 > user-agent: curl/7.45.0
2015-12-16 18:33:29.492 INFO 9708 --- [nio-8080-exec-1] o.glassfish.jersey.filter.LoggingFilter : 1 * Server responded with a response on thread http-nio-8080-exec-1
1 < 500
2015-12-16 18:33:29.540 INFO 9708 --- [nio-8080-exec-1] o.glassfish.jersey.filter.LoggingFilter : 2 * Server has received a request on thread http-nio-8080-exec-1
2 > GET http://localhost:8080/error
2 > accept: */*
2 > host: localhost:8080
2 > user-agent: curl/7.45.0
2015-12-16 18:33:37.249 INFO 9708 --- [nio-8080-exec-1] o.glassfish.jersey.filter.LoggingFilter : 2 * Server responded with a response on thread http-nio-8080-exec-1
2 < 404
And client's side (curl):
$ curl -v http://localhost:8080/nullpointerexception
* STATE: INIT => CONNECT handle 0x6000572d0; line 1090 (connection #-5000)
* Added connection 0. The cache now contains 1 members
* Trying ::1...
* STATE: CONNECT => WAITCONNECT handle 0x6000572d0; line 1143 (connection #0)
* Connected to localhost (::1) port 8080 (#0)
* STATE: WAITCONNECT => SENDPROTOCONNECT handle 0x6000572d0; line 1240 (connection #0)
* STATE: SENDPROTOCONNECT => DO handle 0x6000572d0; line 1258 (connection #0)
> GET /nullpointerexception HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.45.0
> Accept: */*
>
* STATE: DO => DO_DONE handle 0x6000572d0; line 1337 (connection #0)
* STATE: DO_DONE => WAITPERFORM handle 0x6000572d0; line 1464 (connection #0)
* STATE: WAITPERFORM => PERFORM handle 0x6000572d0; line 1474 (connection #0)
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 404 Not Found
* Server Apache-Coyote/1.1 is not blacklisted
< Server: Apache-Coyote/1.1
< Content-Length: 0
< Date: Wed, 16 Dec 2015 17:33:37 GMT
<
* STATE: PERFORM => DONE handle 0x6000572d0; line 1632 (connection #0)
* Curl_done
* Connection #0 to host localhost left intact
So it's always a 404. And even if I do have such an /error resource, what am I supposed to return? All I have at that point is a GET request to /error. I also don't want those extra requests consuming resources and polluting my logs.
What am I missing? And if nothing, what should I do with my exception handling?
You can set the Jersey property ServerProperties.RESPONSE_SET_STATUS_OVER_SEND_ERROR to true.
Whenever response status is 4xx or 5xx it is possible to choose between sendError or setStatus on container specific Response implementation. E.g. on servlet container Jersey can call HttpServletResponse.setStatus(...) or HttpServletResponse.sendError(...).
Calling sendError(...) method usually resets entity, response headers and provide error page for specified status code (e.g. servlet error-page configuration). However if you want to post-process response (e.g. by servlet filter) the only way to do it is calling setStatus(...) on container Response object.
If property value is true the method Response.setStatus(...) is used over default Response.sendError(...).
Type of the property value is boolean. The default value is false.
You can set Jersey property simply by calling property(key, value) in your ResourceConfig subclass constructor.
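For example (a sketch of such a ResourceConfig subclass; the class name is made up and the mapper registration mirrors the question's code, so adapt both, and verify the property constant against your Jersey version):

```java
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.server.ServerProperties;

public class JerseyConfig extends ResourceConfig {

    public JerseyConfig() {
        // Register the ExceptionMapper from the question.
        register(GeneralExceptionMapper.class);
        // Make Jersey call HttpServletResponse.setStatus(...) instead of
        // sendError(...), so the 500 produced by the ExceptionMapper reaches
        // the client directly instead of triggering the container's
        // /error dispatch.
        property(ServerProperties.RESPONSE_SET_STATUS_OVER_SEND_ERROR, true);
    }
}
```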
