Need sample of registering a gzip/deflate filter in grizzly http server

I would like to see an example of registering a filter in the Grizzly HTTP server for a specific URL mapping (e.g. "/foo") that can handle gzip/deflate compressed payloads.
I am currently registering several HttpHandler instances to handle my different mappings, via something like this:
server.getServerConfiguration().addHttpHandler(..., myContextPath)
Some of those mappings also need to support gzip/deflate compression. It is not clear to me how that could be done. Sample code or a pointer in the right direction would be appreciated.
Thanks in advance
Greg

Compression within Grizzly is based on response mime-types
You can enable compression on a per-HTTP-Network-Listener basis.
server.getListener("grizzly");
grizzly is the default listener name; you can access yours using whatever name you've specified, or get all listeners via getListeners().
You can then call getCompressionConfig() on the listener instance, which returns a CompressionConfig object.
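Putting those pieces together, a minimal sketch (Grizzly 2.3.x assumed; note that compression applies listener-wide and is narrowed by mime type rather than by URL mapping, and that myHandler and the mime types shown are placeholders):

import java.util.Arrays;
import java.util.HashSet;
import org.glassfish.grizzly.http.CompressionConfig;
import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;

HttpServer server = HttpServer.createSimpleServer();
server.getServerConfiguration().addHttpHandler(myHandler, "/foo");

NetworkListener listener = server.getListener("grizzly");
CompressionConfig compression = listener.getCompressionConfig();
compression.setCompressionMode(CompressionConfig.CompressionMode.ON);
compression.setCompressionMinSize(1024); // skip compressing small responses
// spelled setCompressableMimeTypes(Set) in older Grizzly releases
compression.setCompressibleMimeTypes(new HashSet<>(
        Arrays.asList("text/plain", "application/json")));

server.start();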

Related

How to use the httpclient processor in Spring Cloud Data Flow

I am trying out Spring Cloud Data Flow. My specific use case is to dump the response from a GET request to a log. I am trying to use the httpclient processor for this, but I don't understand why it is a processor and not a source. If it is a processor, what should the input source to it be? An example would be great.
It requires an incoming Message to trigger the HTTP request. The message may specify the URL, HTTP method, etc. using SpEL expressions, but these may also be statically configured. For example, you can use the time source to trigger a request every second.
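For instance, a stream definition along these lines (the URL is made up, and the exact property names may vary across app-starter releases) issues a GET every second and pipes the body to the log sink:

stream create --name timed-get --definition "time --fixed-delay=1 | httpclient --url='http://example.com/api/status' --http-method=GET | log" --deploy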

Development compromises in using Spring Cloud Stream

The case for event-driven microservices such as Spring Cloud Stream is their asynchronous nature, which I agree makes them more scalable.
But I have an issue regarding how to code it in a way that doesn't lose certain key features I have access to in synchronous services.
In a servlet-based MS, I make full use of servlet context variables and servlet-based Spring autowiring functions.
For example, I rely heavily on HTTP headers to carry metadata between microservices without having to impact the payload. But in Spring Cloud Stream using Kafka, Kafka doesn't support message headers of any kind! I lose that immediately if I use SCS. Putting them into the payload causes all sorts of changes in my model classes if I define the attributes explicitly. Yes, I can use a simple HashMap to simulate the HTTP header object, but that really seems like reinventing the wheel to me.
On the autowiring side: I maintain an audit log record per request, which I implement by declaring a request-scoped HashMap bean and autowiring it into any method in the servlet's call stack that needs to append data to the audit log. Basically it's just a global variable holding some data within a single request. But in SCS, again, I lose that because bean scopes that rely on servlets are not available.
So far, there seem to be a lot of trade-offs I have to make just to get Spring Cloud Stream to work for me.
I thought about an alternative approach where I use SCS just to create an entry point: the Source method would just get the event, use a Processor to construct an HTTP request, and send the request along to an HTTP endpoint. But why go through all that trouble then?
Hoping that some more experienced devs can shed some light on how they leverage SCS.
#feicipet Thanks for the detailed question. Let me try to address some of your concerns in the order you have listed them:
+1
+1
I am not sure why you are referring to it as servlet-based instead of Spring-based? Those are features provided by Spring, but read on...
Spring Cloud Stream doesn't use Kafka; the end user does, while Spring Cloud Stream provides a Kafka binder allowing Spring Cloud Stream to integrate with Kafka. Furthermore, while Kafka indeed did not support headers prior to version 0.11, Spring Cloud Stream always supported and will continue to support headers even with pre-0.11 Kafka, embedding them in the Message and then extracting them on the consumer side into the proper Message headers, completely transparently to the end user. In other words, simply by using Spring Cloud Stream one would assume that Kafka supported headers. With Kafka 0.11+, headers are supported natively and we have adjusted to that with the same level of transparency.
So, you don't need to put anything in the payload. Just create an appropriate Message carrying your payload and headers, and SCSt will take care of the rest regardless of the broker (Kafka, Rabbit, Foo etc.).
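For example, a minimal sketch with spring-messaging's MessageBuilder (the header name and payload are made up):

import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

// Metadata rides in headers, leaving the payload model untouched
Message<String> message = MessageBuilder
        .withPayload("{\"orderId\": 42}")
        .setHeader("x-audit-id", "abc-123") // hypothetical metadata header
        .build();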
Yes, you do, simply due to the fact that, as you alluded to earlier, SCSt promotes an asynchronous and stateless architecture. However, I do not agree that what you are trying to accomplish is unaccomplishable. Rather, it is accomplishable, though not the way you are describing; there are other ways to maintain context, and I would be more than glad to discuss it as a separate topic.
I would not call them trade-offs, rather differences in architecture, which has its benefits; but it is not a one-size-fits-all architecture, and therefore its viability should be discussed within the context of a concrete use case.
+1. You don't have to separate it into a Source and a Processor. You can simply create a custom Source app with an exposed REST endpoint and custom processing logic. However, we are currently working on enhancements in the framework to ensure that you can do the same with the existing starter apps.
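A rough sketch of such a custom Source using the annotation-based binding model of that era (the endpoint path and payload type are made up):

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// A Source app that turns an HTTP POST into a stream event
@EnableBinding(Source.class)
@RestController
public class IngestController {

    private final Source source;

    public IngestController(Source source) {
        this.source = source;
    }

    @PostMapping("/ingest") // hypothetical endpoint path
    public void ingest(@RequestBody String body) {
        source.output().send(MessageBuilder.withPayload(body).build());
    }
}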
Obviously we have touched on many points here and some of them would probably need to be debated further, but I hope this clears up some of your concerns.
Cheers

JAX-RS client pool

I am working on setting up a REST client using the JAX-RS 2 client API.
In the API doc it says "Clients are heavy-weight objects that manage the client-side communication infrastructure. Initialization as well as disposal of a Client instance may be a rather expensive operation. It is therefore advised to construct only a small number of Client instances in the application." (https://docs.oracle.com/javaee/7/api/javax/ws/rs/client/Client.html). Based on this statement, it sounds like Client is not thread-safe and I should not be using a single Client instance for all requests.
I am using the CXF implementation; so far I haven't found a way to set up a pool of Client objects.
If anyone has any information regarding this, could you please share it?
Thanks in advance.
By default, CXF uses a transport based on the JDK's HttpURLConnection to perform HTTP requests.
Connection pooling is performed, allowing persistent connections to reuse the underlying socket connection for multiple HTTP requests.
Set these system properties to configure the pool (default values shown):
http.keepalive=true
http.maxConnections=5
Increase the value of http.maxConnections to raise the maximum number of idle connections that will be kept alive simultaneously, per destination. See the complete list of properties at properties.html.
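The same settings can also be applied programmatically, as long as it happens before the first connection is opened (the value 20 is just an example):

// Must run before any HTTP connection is made or the pool won't pick it up
System.setProperty("http.keepalive", "true");
System.setProperty("http.maxConnections", "20"); // idle connections kept alive per destination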
This post explains in some detail how it works:
Java HttpURLConnection and pooling
Note also that the JAX-RS client is not thread-safe by default. Check the limitations for proper use here.
When you need many requests executed simultaneously, CXF can also use the asynchronous Apache HttpAsyncClient. See details here:
http://cxf.apache.org/docs/asynchronous-client-http-transport.html
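As a rough sketch based on that page, the async conduit is switched on via the use.async.http.conduit contextual property; setting it on the default Bus as shown here is an assumption to verify against your CXF version:

import org.apache.cxf.Bus;
import org.apache.cxf.BusFactory;

// Tell CXF to use the Apache HttpAsyncClient-based HTTP conduit
Bus bus = BusFactory.getDefaultBus();
bus.setProperty("use.async.http.conduit", Boolean.TRUE);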

Using gzip to compress REST responses

I have a REST service which is exposed to end users. This service sends a fairly large response (say 500 KB), and the response times are high.
How can we use gzip to compress the response before sending it?
I'm using Spring boot and maven.
Thanks in advance.
The GZIP configuration was updated in the Spring Boot 1.3 release. Now the correct approach is to use the server.compression.enabled property.
server.compression.enabled=true
See the "Enable HTTP response compression" chapter in the reference docs for configuration details such as minimum response size and content types. Please read the Release Notes for the motivation behind this change.
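A typical application.properties setup (the threshold and mime-type list are illustrative):

server.compression.enabled=true
# compress only responses of at least 2 KB
server.compression.min-response-size=2048
# content types eligible for compression
server.compression.mime-types=application/json,application/xml,text/html,text/plain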
Spring Boot allows you to simply configure Tomcat to use compression via your application.properties/yaml:
server.tomcat.compression: on
(for more options see http://docs.spring.io/spring-boot/docs/1.2.3.RELEASE/reference/htmlsingle/#how-to-enable-http-response-compression)
Look at your container. Many will do this for you transparently at the server level; you shouldn't have to do anything at all in your code.
Consider this Tomcat example.

How to create a dynamic dispatch client using Apache CXF

I want to use Apache CXF to build my client. Unfortunately, I do not see a way for it to dispatch a client dynamically based on the port and operation name. If there is a huge WSDL, JaxWsDynamicClientFactory would create classes for all the services contained in it, which is overhead I'd like to avoid.
I found a similar implementation in JAX-WS. Is there any API in CXF that would do the same?
CXF supports the JAX-WS Dispatch API, which is a low-level interface to SOAP.
That means you can create a Dispatch that represents a particular port-type on a service and then invoke operations by building the messages yourself:
import javax.xml.transform.dom.DOMSource;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;
import org.w3c.dom.Node;

// Set things up: build the Service from the WSDL location and service QName
Service s = Service.create(wsdlUrl, serviceQName);
Dispatch<DOMSource> dispatch = s.createDispatch(
        portName, // << a QName!
        DOMSource.class, Service.Mode.PAYLOAD);
// Construct the request message (a DOM tree) here
Node response = dispatch.invoke(new DOMSource(request)).getNode();
// Understand the response message here
Of course, that then means you've got to work with the DOM for the messages, which is highly annoying. I think that's the part where tooling is really worthwhile.
