LittleProxy clientToProxyRequest Called twice - little-proxy

I configured LittleProxy as a reverse proxy and am trying to implement a filter. When I send an HTTP GET, the clientToProxyRequest filter adapter is called twice: once with DefaultHttpRequest as the httpObject, and a second time with LastHttpContent. Could it be something to do with chunks? How do I do this right?
HttpProxyServerBootstrap bs = DefaultHttpProxyServer.bootstrap();
//reverse proxy
bs.withAllowRequestToOriginServer(true);
The filter is created as:
new HttpFiltersAdapter(originalRequest) {
    @Override
    public HttpResponse clientToProxyRequest(HttpObject httpObject) {
        // doing filtering here...
        return null;
    }
}
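For what it's worth, this is normal Netty behaviour rather than a misconfiguration: Netty decodes an HTTP message into an HttpRequest (here DefaultHttpRequest) followed by zero or more content chunks ending in LastHttpContent, and LittleProxy calls clientToProxyRequest once per decoded HttpObject. A minimal sketch of guarding on the object type (assuming you only need the request line and headers):

new HttpFiltersAdapter(originalRequest) {
    @Override
    public HttpResponse clientToProxyRequest(HttpObject httpObject) {
        // only the first invocation carries the request line and headers;
        // the LastHttpContent invocation marks the end of the body
        if (httpObject instanceof HttpRequest) {
            HttpRequest request = (HttpRequest) httpObject;
            // inspect request.uri() / request.headers() here
        }
        return null; // null lets the request continue to the origin server
    }
};

If the filter needs the whole body in one piece, overriding getMaximumRequestBufferSizeInBytes() in an HttpFiltersSourceAdapter should make LittleProxy aggregate the request into a single FullHttpRequest, at the cost of buffering the body in memory.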

Related

Spring Boot RSocket send a message within a Message Mapping

Starting with the tutorial code at benwilcock/spring-rsocket-demo, I am trying to write a server that replicates messages to a second server before responding to a client.
To try to debug my issues I am only attempting a trivial ping-pong exchange between servers. Only when the second server responds to the pong message should the first server reply to the client:
@MessageMapping("request-response")
Mono<Message> requestResponse(final Message request) {
    // register a mono that will be completed when replication to another server has happened
    String uuid = UUID.randomUUID().toString();
    Mono<Message> deferred = Mono.create(sink -> replicationNexus.registerRequest(uuid, sink));
    // FIXME attempt to send a nested request-response message that will complete the outer message later
    this.requesterMono.flatMap(requester -> requester.route("pong")
            .data(uuid)
            .retrieveMono(String.class))
        .subscribeOn(Schedulers.elastic())
        .subscribe(uuid2 -> replicationNexus.complete(uuid2, new Message(SERVER, RESPONSE)));
    // return the deferred work that will be completed by the pong response
    return deferred;
}
That logic is trying to use this answer to create a connection to the second server that will reconnect:
this.requesterMono = builder.rsocketConnector(connector -> connector
        .reconnect(Retry.fixedDelay(Integer.MAX_VALUE, Duration.ofSeconds(1))))
    .connectTcp("localhost", otherPort).cache();
To complete the picture here is the trivial ping-pong logic:
@MessageMapping("pong")
public Mono<String> pong(String m) {
    return Mono.just(m);
}
and here is the logic that holds the state of the outer client response that is completed when the other server responds:
public class ReplicationNexus<T> {
    final Map<String, MonoSink<T>> requests = new ConcurrentHashMap<>();

    public void registerRequest(String v, MonoSink<T> sink) {
        requests.put(v, sink);
    }

    public boolean complete(String uuid, T message) {
        // ofNullable, not of: get() returns null for an unknown uuid,
        // and Optional.of(null) would throw a NullPointerException
        Optional<MonoSink<T>> sink = Optional.ofNullable(requests.get(uuid));
        if (sink.isPresent()) {
            sink.get().success(message);
        }
        return sink.isPresent();
    }
}
Debugging the second server, I can see that it never runs the pong method. It seems the first server never actually sends the inner request message.
What is the correct way to run an inner request-response exchange that completes an outer message exchange with automated reconnection logic?
Not sure if I'm missing some of the complexity of your question, but if the middle server is just acting like a proxy I'd start with the simplest case of chaining through the calls. I feel like I'm missing some nuance of the question, so let's work through that next.
@MessageMapping("runCommand")
suspend fun runCommandX(
    request: CommandRequest,
): Mono<String> {
    val uuid = UUID.randomUUID().toString()
    return requesterMono
        .flatMap { requester: RSocketRequester ->
            requester.route("pong")
                .data("TEST")
                .retrieveMono(String::class.java)
        }
        .doOnSubscribe {
            // register request with uuid
        }
        .doOnSuccess {
            // register completion
        }
        .doOnError {
            // register failure
        }
}
Generally, avoid calling subscribe yourself in typical Spring/reactive/RSocket code; you want the framework to do that for you.
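Applied to the question's handler, a rough Java sketch of that principle (requesterMono, Message, SERVER and RESPONSE are taken from the question) would chain the inner exchange and return it rather than subscribing manually:

@MessageMapping("request-response")
Mono<Message> requestResponse(final Message request) {
    // the framework subscribes to the returned Mono, so the inner
    // pong exchange is triggered by the outer request itself
    return this.requesterMono
            .flatMap(requester -> requester.route("pong")
                    .data(UUID.randomUUID().toString())
                    .retrieveMono(String.class))
            .map(uuid -> new Message(SERVER, RESPONSE));
}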

Flux.create() not generating events

I'm trying to use Flux to generate asynchronous server-sent events with Flux.create. When my client connects, the request eventually times out with no event ever received. I hard-coded an event to be sent by Flux.create just to see data flow, but still nothing is received client side.
@GetMapping(path = "/stream", headers = "Accept=*/*", consumes = MediaType.ALL_VALUE, produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ServerSentEvent<PricingDTO>> getEventStream() {
    final Flux<ServerSentEvent<PricingDTO>> flux = Flux.<ServerSentEvent<PricingDTO>>create(emitter -> {
        final PricingDTO pricing = new PricingDTO();
        pricing.setId(99L);
        emitter.next(ServerSentEvent.builder(pricing).build());
    });
    return flux;
}
Client side (Angular) code:
const eventSource = new EventSource(url);
eventSource.onmessage = (event) => {
    console.debug('Received event: ' + event);
    const json = JSON.parse(event.data);
    // Should be PricingDTO record here
};
eventSource.onerror = (error) => {
    if (eventSource.readyState === EventSource.CLOSED) {
        console.log('The stream has been closed by the server.');
        eventSource.close();
    } else {
        console.log('Error here: ' + error);
    }
};
I never see an event come through the EventSource. Eventually the request times out and I see the error: net::ERR_EMPTY_RESPONSE
I'm new to using WebFlux and I suspect I'm missing some initialization on the FluxStream before I return the Flux result. I have debugged and do see the request being received by my web service and the Flux object being returned. Any idea why I'm not receiving my events?
Your WebFlux code seems fine. I tested this with the following simplified example (without your custom classes):
@SpringBootApplication
@RestController
public class App {

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }

    @GetMapping(path = "/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> getEventStream() {
        return Flux.create(emitter -> emitter.next("hi").next("hi2"));
    }
}
When connecting to the stream in Chrome, you see the events coming in just fine:
data:hi
data:hi2
The problem either lies in your Accept header filtering, or on the client side. You could of course validate this by connecting to your stream in a browser (or better, a test).
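For instance, a minimal sketch of such a test with WebTestClient and StepVerifier (assumes the reactor-test and spring-boot-starter-test dependencies; class and method names here are made up for the example):

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@AutoConfigureWebTestClient
class StreamTest {

    @Autowired
    WebTestClient client;

    @Test
    void streamEmitsEvents() {
        FluxExchangeResult<String> result = client.get().uri("/stream")
                .accept(MediaType.TEXT_EVENT_STREAM)
                .exchange()
                .expectStatus().isOk()
                .returnResult(String.class);

        // take(2) turns the open-ended SSE stream into a finite one for verification
        StepVerifier.create(result.getResponseBody().take(2))
                .expectNext("hi", "hi2")
                .verifyComplete();
    }
}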

Simultaneous http post request spring boot

Hi,
I have a list of 500k items and I have to make requests to a server with hash parameters.
The server accepts a JSON array of 200 objects, so I can send 200 items per request.
But I still need to split the list each time and send each part to the server.
I have a method for this which makes the HTTP POST request, and I want to use Spring Boot facilities (if available) to call the method from different threads, get the responses back, and merge them into one.
I did it using the plain Java CompletableFuture class without any Spring Boot annotations, but you could use @Async for your method too. Sample code:
var futures = new ArrayList<CompletableFuture<List<Composite>>>();
var executor = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
// Lists.partition (Guava) splits the big list into batches of 200
for (List<CompositeRecord> records : Lists.partition(recordsList, 200)) {
    var future = CompletableFuture.supplyAsync(() -> /* call your method here */, executor);
    futures.add(future);
    Thread.sleep(2000); // simple throttle between submissions
}
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).exceptionally(ex -> null).join(); // avoid throwing an exception in the join() call
var futureMap = futures.stream().collect(Collectors.partitioningBy(CompletableFuture::isCompletedExceptionally));
var compositeWithErrorList = new ArrayList<Composite>();
futureMap.get(false).forEach(l -> {
    try {
        compositeWithErrorList.addAll(l.get());
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
});
After the code has executed, you will have a map of completed and exceptionally completed futures.
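If you prefer the @Async route mentioned above, a hedged sketch (BatchPostService is a hypothetical name; @Async also requires @EnableAsync on a configuration class, and the method must be invoked through the Spring proxy, i.e. from another bean):

// Hypothetical service wrapping the HTTP call; Composite and
// CompositeRecord are the types from the snippet above.
@Service
public class BatchPostService {

    @Async
    public CompletableFuture<List<Composite>> postBatch(List<CompositeRecord> batch) {
        List<Composite> parsed = new ArrayList<>(); // replace with the real HTTP POST + parsing
        return CompletableFuture.completedFuture(parsed);
    }
}

Callers then collect the returned futures exactly as in the CompletableFuture example above.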

How to get from a BulkItemResponse to corresponding Request

I'm using Elasticsearch's bulk requests in Java and I'm trying to handle situations where an error happens:
BulkResponse bulkResponse = bulkRequest.get();
if (bulkResponse.hasFailures()) {
    for (BulkItemResponse response : bulkResponse) {
        if (response.isFailed()
                || response.getResponse().getShardInfo().getFailed() > 0) {
            // Find the corresponding request and resend it
        }
    }
}
After finding the response with an error, I want to re-send its request, since in my case errors may be transient and in most cases a retry resolves the problem.
So my question is: how do I get from a BulkItemResponse to the original request that led to it? Is there any way better than relying on the order of requests and responses?
No, you don't have that AFAIK. You need to keep track somehow of all the elements you added to a bulk and, in case of error, use the id coming from response.getId() to associate it with the original data.
So something similar to:
HashMap<String, Object> myData = new HashMap<>();
BulkRequestBuilder brb = BulkAction.INSTANCE.newRequestBuilder(client);
myData.put(myObject.getId(), myObject);
brb.add(new IndexRequest("index", "type", myObject.getId()).source(myObject.toJson()));
// Other actions
BulkResponse response = client.bulk(brb.request()).get();
response.forEach(bulkItemResponse -> {
    if (bulkItemResponse.isFailed()) {
        Object objectToSendAgain = myData.get(bulkItemResponse.getId());
        // Do what is needed with this object
    }
});
myData = null;
I hope this helps.

Multiple connections on the controller service (Spring)

I have written a controller which takes a domain name as input, crawls the whole site, and returns the result in JSON format.
http://crawlmysite-tgugnani.rhcloud.com/getUrlCrawlData/www.google.com
This gives the data for Google.
http://crawlmysite-tgugnani.rhcloud.com/getUrlCrawlData/www.yahoo.com
This gives the data for Yahoo.
If I run these two URLs simultaneously, I see that I am getting mixed data, and the results of one request affect the other, even though I hit them from different machines.
Here is my controller:
@RequestMapping("/getUrlCrawlData/{domain:.+}")
@ResponseBody
public String registerContact(@PathVariable("domain") String domain) throws HttpStatusException, SQLException, IOException {
    List<URLdata> urldata = null;
    Gson gson = new Gson();
    String json;
    urldata = crawlService.crawlURL("http://" + domain);
    json = gson.toJson(urldata);
    return json;
}
What do I need to modify to allow multiple independent connections?
Update
Following is my crawl service:
public List<URLdata> crawlURL(String domain) throws HttpStatusException, SQLException, IOException {
    testDomain = domain;
    urlList.clear();
    urlMap.clear();
    urldata.clear();
    urlList.add(testDomain);
    processPage(testDomain);
    // Get all pages
    for (int i = 1; i < urlList.size(); i++) {
        if (urlList.size() >= 500) {
            break;
        }
        processPage(urlList.get(i));
        //System.out.println(urlList.get(i));
    }
    // Calculate time
    for (int i = 0; i < urlList.size(); i++) {
        getTitleAndMeta(urlList.get(i));
    }
    return urldata;
}
public static void processPage(String URL) throws SQLException, IOException, HttpStatusException {
    // get useful information
    try {
        Connection.Response response = Jsoup.connect(URL)
                .userAgent("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.21 (KHTML, like Gecko) Chrome/19.0.1042.0 Safari/535.21")
                .timeout(10000)
                .execute();
        Document doc = response.parse();
        // get all links and recursively call the processPage method
        Elements questions = doc.select("a[href]");
        for (Element link : questions) {
            String linkName = link.attr("abs:href");
            if (linkName.contains(testDomain.replaceAll("http://www.", ""))) {
                if (linkName.contains("#")) {
                    linkName = linkName.substring(0, linkName.indexOf("#"));
                }
                if (linkName.contains("?")) {
                    linkName = linkName.substring(0, linkName.indexOf("?"));
                }
                if (!urlList.contains(linkName) && urlList.size() <= 500) {
                    urlList.add(linkName);
                }
            }
        }
    }
    catch (HttpStatusException e) {
        System.out.println(e);
    }
    catch (SocketTimeoutException e) {
        System.out.println(e);
    }
    catch (UnsupportedMimeTypeException e) {
        System.out.println(e);
    }
    catch (UnknownHostException e) {
        System.out.println(e);
    }
    catch (MalformedURLException e) {
        System.out.println(e);
    }
}
Each of your requests (http://crawlmysite-tgugnani.rhcloud.com/getUrlCrawlData/www.google.com and http://crawlmysite-tgugnani.rhcloud.com/getUrlCrawlData/www.yahoo.com) is processed in a separate thread. You have two instances of the crawlURL() method working simultaneously, but both methods use the same variables (testDomain, urlList, urlMap and urldata). So they mess up each other's data in these variables.
One way to fix the problem is to declare these variables locally (inside the method). This way, new instances of these variables will be created for each invocation of crawlURL(). Alternatively, you can create a new instance of your CrawlService class for each invocation of the crawlURL() method.
Synchronizing threads would be a bad idea here, because one request would have to wait for another to complete before it could be processed by crawlURL().
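A minimal sketch of the first option, with the crawl state held in local variables (this assumes processPage() and getTitleAndMeta() are rewritten to take the state as parameters, which the question does not show):

// Sketch: each request gets its own urlList and urldata, so concurrent
// crawls can no longer overwrite each other's state.
public List<URLdata> crawlURL(String domain) throws HttpStatusException, SQLException, IOException {
    List<String> urlList = new ArrayList<>();
    List<URLdata> urldata = new ArrayList<>();
    urlList.add(domain);
    processPage(domain, domain, urlList);
    for (int i = 1; i < urlList.size() && urlList.size() < 500; i++) {
        processPage(urlList.get(i), domain, urlList);
    }
    for (String url : urlList) {
        getTitleAndMeta(url, urldata);
    }
    return urldata;
}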
As far as Spring MVC is concerned, every request runs in a separate thread. So I think the problem is in crawlService which, I suppose, is not stateless (it is singleton-like). Try to create a new crawl service for every request and check whether your data is still mixed. If creating the crawl service is an expensive operation, you should rewrite it to work in a stateless way.
@RequestMapping("/getUrlCrawlData/{domain:.+}")
@ResponseBody
public String registerContact(@PathVariable("domain") String domain) throws HttpStatusException, SQLException, IOException {
    Gson gson = new Gson();
    List<URLdata> urldata = new CrawlService().crawlURL("http://" + domain);
    return gson.toJson(urldata);
}
I think
urldata = crawlService.crawlURL("http://"+domain);
this call to the crawl service is the one affected by multiple requests coming in simultaneously.
Check whether crawlService is safe for multithreading, i.e. check whether the crawlURL() method is synchronized; if not, make it synchronized, or else synchronize the block that calls crawlService inside the controller, as sketched below.
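A minimal sketch of that last option (note that the first answer above argues this effectively serializes all requests, so it fixes the data mixing at the cost of throughput):

// Sketch: serialize access to the shared, stateful service so that
// concurrent requests cannot interleave inside crawlURL().
synchronized (crawlService) {
    urldata = crawlService.crawlURL("http://" + domain);
}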
