I am trying to debug my Jersey 2 app on Payara 162, but on every request, after the trace information is printed, I get this exception and the client gets no response:
org.glassfish.grizzly.http.util.MimeHeaders$MaxHeaderCountExceededException: Illegal attempt to exceed the configured maximum number of headers: 100
at org.glassfish.grizzly.http.util.MimeHeaders.createHeader(MimeHeaders.java:396)
at org.glassfish.grizzly.http.util.MimeHeaders.addValue(MimeHeaders.java:422)
at org.glassfish.grizzly.http.HttpHeader.addHeader(HttpHeader.java:707)
at org.glassfish.grizzly.http.server.Response.addHeader(Response.java:1177)
at org.apache.catalina.connector.Response.addHeader(Response.java:1221)
at org.apache.catalina.connector.ResponseFacade.addHeader(ResponseFacade.java:579)
at org.glassfish.jersey.servlet.internal.ResponseWriter.writeResponseStatusAndHeaders(ResponseWriter.java:165)
at org.glassfish.jersey.server.ServerRuntime$Responder$1.getOutputStream(ServerRuntime.java:701)
at org.glassfish.jersey.message.internal.CommittingOutputStream.commitStream(CommittingOutputStream.java:200)
at org.glassfish.jersey.message.internal.CommittingOutputStream.flushBuffer(CommittingOutputStream.java:305)
at org.glassfish.jersey.message.internal.CommittingOutputStream.commit(CommittingOutputStream.java:261)
at org.glassfish.jersey.message.internal.CommittingOutputStream.close(CommittingOutputStream.java:276)
at org.glassfish.jersey.message.internal.OutboundMessageContext.close(OutboundMessageContext.java:839)
at org.glassfish.jersey.server.ContainerResponse.close(ContainerResponse.java:412)
at org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse(ServerRuntime.java:784)
at org.glassfish.jersey.server.ServerRuntime$Responder.processResponse(ServerRuntime.java:444)
at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:434)
at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:329)
In my Jersey app I configured tracing like so:
public class RestApplication extends ResourceConfig {
    public RestApplication() {
        super();
        packages(true, "com.example");
        register(JacksonFeature.class);
        register(JsonProvider.class);
        register(RolesAllowedDynamicFeature.class);
        property("jersey.config.server.tracing.type", "ON_DEMAND");
        property("jersey.config.server.tracing.threshold", "VERBOSE");
    }
}
I enabled the logger in my logback.xml (I have configured Payara to use logback), and I can see the full trace info in the server log when I enable tracing on demand by adding the X-Jersey-Tracing-Accept header to my request, but then I get the exception. When I don't add the header to the request, everything works, but of course I don't get the trace.
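For reference, this is roughly the request I use to trigger tracing on demand (the resource path is just a placeholder for one of my endpoints; the header value can apparently be anything, it only has to be present):
curl -H "X-Jersey-Tracing-Accept: true" http://localhost:8080/myapp/api/some-resource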
I'm wondering if there is anything I can change to fix this or is it a bug?
The problem is that tracing adds a header to the REST response for each trace event.
Grizzly imposes a limit on the number of headers in the response.
Payara Server by default allows at most 100 headers in the response. You need to increase this number to allow all the tracing info in the response.
To raise the maximum number of headers, you need to use asadmin; there is no option for it in the GUI admin console (it is missing from the screen for configuring the HTTP protocol).
If your configuration is named server-config and the network listener is http-listener-1, then execute the following asadmin command to set it to 1000:
asadmin> set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.max-response-headers=1000
You can use a similar command to set any of the Grizzly network listener options; just replace max-response-headers with the name of the option you want to set, using - as a word separator instead of camel case.
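If you are unsure of the exact option name, you can list all options currently set on the http element with a wildcard get (same configuration and listener names as above):
asadmin> get configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.*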
I wanted to analyze the improvement I might see by using async controllers in Spring Boot over normal controllers.
Here is my test code. One API returns a Callable and the other is a normal controller method. Both block for 10 seconds, simulating a long-running task:
@RequestMapping(value = "/api/1", method = RequestMethod.GET)
public List<String> questions() throws InterruptedException {
    Thread.sleep(10000);
    return Arrays.asList("Question1", "Question2");
}

@RequestMapping(value = "/api/2", method = RequestMethod.GET)
public Callable<List<String>> questionsAsync() {
    return () -> {
        Thread.sleep(10000);
        return Arrays.asList("Question2", "Question2");
    };
}
I set up embedded Tomcat with this configuration, i.e. only one Tomcat processing thread:
server.tomcat.max-threads=1
logging.level.org.springframework=debug
Expectations for /api/1
Since there is only one Tomcat thread, another request will not be served until this one completes after 10 seconds.
Results:
Meets expectations.
Expectations for /api/2
Since we return a Callable immediately, the single Tomcat thread should be freed to process another request; the Callable runs on a separate thread internally. So if you hit the same API again, the second request should also be accepted.
Results:
This is not happening; until the Callable completes, no further request is served.
Question
Why is /api/2 not behaving as expected?
@DaveSyer is right: /api/2 is actually behaving as expected.
I assume you are testing the behavior with a web browser. At least Firefox and Chrome prevent multiple simultaneous requests to the same URL. If you open two tabs with /api/2, the second one only sends its request to the application after the first has received its response.
Try testing it with a simple bash script, like:
curl localhost/api/2 &
curl localhost/api/2 &
curl localhost/api/2 &
It will print 3 responses around the same time.
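If you also want to see the timings, curl can print the total time per response (a rough sketch using the same URL as above):
curl -s -o /dev/null -w "%{http_code} %{time_total}s\n" localhost/api/2 &
curl -s -o /dev/null -w "%{http_code} %{time_total}s\n" localhost/api/2 &
curl -s -o /dev/null -w "%{http_code} %{time_total}s\n" localhost/api/2 &
wait
All three should report roughly the same ~10 second total, confirming they were processed concurrently.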
Just want to mention that server.tomcat.max-threads is deprecated since Spring Boot 2.3. Use server.tomcat.threads.max in your application.properties instead. The default is 200.
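For the single-thread test above, the equivalent setting on a current Spring Boot would be:
# Spring Boot 2.3+; replaces the deprecated server.tomcat.max-threads
server.tomcat.threads.max=1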
I am using Spring 3.1.1. I have a file upload feature and have implemented HandlerExceptionResolver to handle the file size limit. I have configured the size in the Spring MVC XML with the "maxUploadSize" attribute. My application uses Spring Security with OpenID.
I am using the code below in resolveException() of that upload controller. RestResponse is a POJO with an error message and a status:
ModelAndView mav = new ModelAndView();
mav.setView(new MappingJacksonJsonView());
RestResponse fail_response = new RestResponse();
fail_response.setMessage("Max upload limit failure.");
fail_response.setStatus("FAIL");
mav.addObject("restResponse", fail_response);
return mav;
I can see from the logs that resolveException() is invoked; however, the JSON response is not shown immediately in the REST client/browser. The exception (Maximum upload size of 10485760 bytes exceeded; nested exception is org.apache.commons.fileupload.FileUploadBase$SizeLimitExceededException: the request was rejected because its size (30611515) exceeds the configured maximum (10485760)) appears in the log right after the upload starts, but the upload still continues, and only after a while is the JSON error message (shown above) displayed in the browser.
I searched for this problem on Stack Overflow and other portals but could not find why the response is not returned to the UI right away.
See this issue and this SO question. If it is still not clear, refer to this link and search for maxFileSize and maxRequestSize. Add the same settings in your application.yml.
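For reference, in a Spring Boot application those limits look roughly like this in application.yml (these are the Spring Boot 2.x property names; older versions use other prefixes such as multipart.maxFileSize, and the 10MB values are only examples):
spring:
  servlet:
    multipart:
      max-file-size: 10MB
      max-request-size: 10MB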
Is there any way to make Fiddler look up the gateway proxy (upstream proxy) from the Advanced configuration instead of the common configuration? I have an application that sets multiple proxies, one per protocol, so Fiddler assumes there is no gateway unless it finds something in the box above.
Also, is there any QuickExec command available for changing the gateway? I'm looking for a quick way to set the upstream proxy.
By default, the upstream gateway for each session is inherited from the IE/Windows default proxy setting that was in effect when Fiddler started up.
However, on each session it can be overridden using the X-OverrideGateway Session Flag.
So, to build your own QuickExec action, do this:
Inside Rules > Customize Rules > Handlers, add
public static var m_GatewayOverride = null;
Inside OnBeforeRequest, add
if (null != m_GatewayOverride) { oSession["X-OverrideGateway"] = m_GatewayOverride; }
Inside the OnExecAction method's switch statement, add
case "gw":
if (sParams.Length<2) {m_GatewayOverride = null; return;}
m_GatewayOverride = sParams[1]; FiddlerObject.StatusText="Set Gateway to " + m_GatewayOverride;
return true;
Then, you can type things like gw myProxy:1234 to force subsequent requests to myProxy:1234 or simply type gw to clear the override.
I'm struggling to run grizzly-websockets-chat. I've successfully compiled the sample. HttpServer.createSimpleServer is running and serving a test index.html on localhost:8080. WebSocketEngine.getEngine().register("/chat", chatApplication) executes without complaint. However, localhost:8080/chat returns "Resource identified by path '/chat', does not exist.". This is not under Glassfish - just standalone Grizzly/2.2.19.
Comments in some places suggest that WebSocket support is off by default, and I'm unable to determine how to turn it on outside of GlassFish. I have only the test index.html in the docroot; is anything else required?
I'm not running anything special on the client side: no JS, nothing. I've not seen any such thing in the sample. Surprisingly, I've not found a good doc or running example. Maybe it's a user problem? ;/
It looks like the WebSocket code is being invoked:
$ java -jar ./tyrus-client-cli-1.1.jar ws://localhost:8080/chat
# Connecting to ws://localhost:8080/chat...
# Failed to connect to ws://localhost:8080/chat due to Handshake error
Any help much appreciated!
Change your request URI to ws://localhost:8080/grizzly-websockets-chat/chat.
The ChatApplication has the following defined for isApplicationRequest():
@Override
public boolean isApplicationRequest(HttpRequestPacket request) {
    return "/grizzly-websockets-chat/chat".equals(request.getRequestURI());
}
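With that path, retesting with the Tyrus CLI client from the question should get past the handshake error (assuming the server is still listening on port 8080):
java -jar ./tyrus-client-cli-1.1.jar ws://localhost:8080/grizzly-websockets-chat/chat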
I can enable tracing by replacing the default no-op trace writer:
GlobalConfiguration.Configuration.Services.Replace(typeof(ITraceWriter), new SimpleTracer());
But how can I disable (and enable it again) while the server is running?
The Enum "TraceLevel" has the option TraceLevel.Off.
Is there a way to set the tracing-framwork form web api to TraceLevel.Off?
Take a look at this article here.
WebApi does not provide any way to configure trace writers regarding their current category or level.
And
Determining which categories and levels are currently enabled is the responsibility of the ITraceWriter implementation. The Web API framework has no concept of trace configuration or determining which categories and levels are enabled.
The intent of this approach is to rely on the ITraceWriter implementation's own mechanisms to configure what gets traced and where.
For example, if you use an NLog-specific ITraceWriter, you should configure tracing through NLog configuration mechanisms.
In short, if you want to enable and disable your tracing, or choose not to log a trace for a particular level, it is up to you to implement that inside your SimpleTracer.
You could create your own thread-safe singleton, say TraceConfiguration (akin to GlobalConfiguration), to hold the trace configuration and allow toggling tracing on or off in code:
public void Trace(HttpRequestMessage request, string category, TraceLevel level, Action<TraceRecord> traceAction)
{
    if (TraceConfiguration.Configuration.IsEnabled)
    {
        // trace
    }
}
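A minimal sketch of such a singleton, assuming a simple on/off flag is all you need (the class and property names are illustrative, not part of Web API):
public sealed class TraceConfiguration
{
    private static readonly TraceConfiguration _instance = new TraceConfiguration();

    // volatile so toggling from another thread is visible to the trace writer
    private volatile bool _isEnabled = true;

    private TraceConfiguration() { }

    // Global access point, mirroring GlobalConfiguration.Configuration
    public static TraceConfiguration Configuration
    {
        get { return _instance; }
    }

    // Flip at runtime to turn tracing on or off
    public bool IsEnabled
    {
        get { return _isEnabled; }
        set { _isEnabled = value; }
    }
}
Anywhere in the application you can then set TraceConfiguration.Configuration.IsEnabled = false; to silence the writer without replacing it.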
Or you could set and read properties on the request object to determine whether to trace it. These could be set by action filters on actions and controllers, or even within the controller itself (a filter sketch follows the next snippet):
public void Trace(HttpRequestMessage request, string category, TraceLevel level, Action<TraceRecord> traceAction)
{
    // Properties is a dictionary whose indexer throws for missing keys, so use TryGetValue
    object suppress;
    request.Properties.TryGetValue("SupressTracing", out suppress);
    if (suppress as string != "true")
    {
        // trace
    }
}
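A sketch of a filter that sets that property (the attribute name and the "SupressTracing" key simply mirror the snippet above; neither is built into Web API):
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class SupressTracingAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        // Mark the current request so the ITraceWriter above skips it
        actionContext.Request.Properties["SupressTracing"] = "true";
        base.OnActionExecuting(actionContext);
    }
}
Decorating a controller or action with [SupressTracing] then disables tracing for those requests.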
Or, as the article suggests, if you are using NLog or log4net, it is up to you to set the enabled levels via their own configuration.