We are working with Project Reactor and have a huge problem right now. This is how we produce (publish) our data:
public Flux<String> getAllFlux() {
return Flux.<String>create(sink -> {
new Thread(){
public void run(){
Iterator<Cache.Entry<String, MyObject>> iterator = getAllIterator();
ObjectMapper mapper = new ObjectMapper();
while(iterator.hasNext()) {
try {
sink.next(mapper.writeValueAsString(iterator.next().getValue()));
} catch (IOException e) {
e.printStackTrace();
}
}
sink.complete();
}
} .start();
});
}
As you can see, we take the data from an iterator and publish each item in that iterator as a JSON string. Our subscriber does the following:
flux.subscribe(new Subscriber<String>() {
private Subscription s;
int amount = 1; // the amount of received flux payload at a time
int onNextAmount;
String completeItem="";
ObjectMapper mapper = new ObjectMapper();
@Override
public void onSubscribe(Subscription s) {
System.out.println("subscribe");
this.s = s;
this.s.request(amount);
}
@Override
public void onNext(String item) {
MyObject myObject = null;
try {
System.out.println(item);
myObject = mapper.readValue(item, MyObject.class);
System.out.println(myObject.toString());
} catch (IOException e) {
System.out.println(item);
System.out.println("failed: " + e.getLocalizedMessage());
}
onNextAmount++;
if (onNextAmount % amount == 0) {
this.s.request(amount);
}
}
@Override
public void onError(Throwable t) {
System.out.println(t.getLocalizedMessage());
}
@Override
public void onComplete() {
System.out.println("completed");
}
});
}
As you can see, we simply print the String item we receive and parse it into an object with Jackson. The problem we have now is that most of our items work fine:
{"itemId": "someId", "itemDesc": "some description"}
But for some items the String is cut off like this for example:
{"itemId": "some"
And the next item after that would be
"Id", "itemDesc", "some description"}
There is no pattern to those cuts. It is completely random and different every time we run the code. Of course, Jackson then fails with an "Unexpected end of input" error.
So what is causing this behaviour, and how can we solve it?
Solution:
Send the object inside the Flux instead of the String:
public Flux<MyObject> getAllFlux() {
return Flux.create(sink -> {
new Thread(){
public void run(){
Iterator<Cache.Entry<String, MyObject>> iterator = getAllIterator();
while(iterator.hasNext()) {
sink.next(iterator.next().getValue());
}
sink.complete();
}
} .start();
});
}
and use the following produces type:
@RequestMapping(value="/allFlux", method=RequestMethod.GET, produces="application/stream+json")
The key here is to use application/stream+json rather than plain application/json.
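The truncation most likely comes from the response being re-chunked on the wire: with plain JSON the framework is free to flush the character stream at arbitrary boundaries, so a consumer that treats each chunk as one item sees fragments. With application/stream+json, each element is written as its own complete JSON document. For reference, a minimal controller sketch of how the streaming endpoint could look (assuming Spring WebFlux; the class name and mapping are illustrative, and getAllFlux() is the publisher from the solution above):
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;
@RestController
public class ItemController {
    // Each MyObject is serialized and flushed as its own JSON document,
    // so a consumer never sees one item split across two chunks.
    @RequestMapping(value = "/allFlux", method = RequestMethod.GET, produces = "application/stream+json")
    public Flux<MyObject> allFlux() {
        return getAllFlux(); // the publisher shown in the solution above
    }
}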
I want to run two processors in parallel (fetching different info from different sources) and then, when both have completed, have access to both outputs for further processing (e.g. comparisons).
Something of this sort:
from("direct:start")
.processor("process1")
.processor("process2")
.to("direct:compare");
Except I need the output from both process1 and process2 to be available in the "compare" endpoint.
This is one way to achieve it, using multicast with an aggregation strategy:
public class App {
public static void main(String[] args) throws Exception {
CamelContext context = new DefaultCamelContext();
context.addRoutes(myRoute());
context.startRoute("start");
context.start();
ProducerTemplate producerTemplate = context.createProducerTemplate();
producerTemplate.sendBody("direct:start", null);
Thread.sleep(10_000);
context.stop();
}
private static RouteBuilder myRoute() {
return new RouteBuilder() {
@Override
public void configure() throws Exception {
from("direct:start").routeId("start")
.multicast(new MyAggregationStrategy())
.parallelProcessing()
.to("direct:process1", "direct:process2", "direct:process3")
.end()
.to("direct:endgame");
from("direct:process1")
.process(e -> {
ArrayList<String> body = Lists.newArrayList("a", "b", "c");
e.getIn().setBody(body);
});
from("direct:process2")
.process(e -> {
ArrayList<String> body = Lists.newArrayList("1", "2", "3");
e.getIn().setBody(body);
});
from("direct:process3")
.process(e -> {
ArrayList<String> body = Lists.newArrayList("#", "#", "$");
e.getIn().setBody(body);
});
from("direct:endgame")
.process(e -> {
System.out.println(" This final result : " + e.getIn().getBody());
});
}
};
}
}
// This is where we aggregate the results of the processes running in parallel
class MyAggregationStrategy implements AggregationStrategy {
@Override
public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
ArrayList<Object> objects = Lists.newArrayList();
if (oldExchange == null) {
return newExchange;
}
Object o = oldExchange.getIn().getBody();
Object n = newExchange.getIn().getBody();
objects.add(o);
objects.add(n);
newExchange.getIn().setBody(objects);
return newExchange;
}
}
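One caveat with the strategy above: aggregate() is called pairwise, so with three parallel branches the final body ends up as a nested list rather than a flat one. If a flat list is preferred, a variation along these lines could be used instead (an untested sketch; Camel 2 package names are assumed, matching the code above):
import java.util.ArrayList;
import java.util.List;
import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;
// Sketch: collect every branch's body into one flat list.
class FlatListAggregationStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            // First reply: start the accumulator with its body.
            List<Object> acc = new ArrayList<>();
            acc.add(newExchange.getIn().getBody());
            newExchange.getIn().setBody(acc);
            return newExchange;
        }
        // Later replies: append to the accumulator carried by the old exchange.
        @SuppressWarnings("unchecked")
        List<Object> acc = oldExchange.getIn().getBody(List.class);
        acc.add(newExchange.getIn().getBody());
        return oldExchange;
    }
}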
I have wasted several hours trying to solve an issue with the use of Netty's channel pool map and a JAX-RS client.
I used Jersey's own Netty connector as an inspiration, but exchanged Netty's channel for Netty's channel pool map.
https://jersey.github.io/apidocs/2.27/jersey/org/glassfish/jersey/netty/connector/NettyConnectorProvider.html
My problem is that I have references that I need inside my custom SimpleChannelInboundHandler. However, because of the way Netty creates a channel pool map, I cannot pass those references through my custom ChannelPoolHandler: once the pool map has created a pool, the constructor of the channel pool handler never runs again.
This is the method that acquires a pool and checks out a channel to make an HTTP request.
@Override
public Future<?> apply(ClientRequest request, AsyncConnectorCallback callback) {
final CompletableFuture<Object> completableFuture = new CompletableFuture<>();
try{
HttpRequest httpRequest = buildHttpRequest(request);
// guard against prematurely closed channel
final GenericFutureListener<io.netty.util.concurrent.Future<? super Void>> closeListener =
future -> {
if (!completableFuture.isDone()) {
completableFuture.completeExceptionally(new IOException("Channel closed."));
}
};
try {
ClientRequestDTO clientRequestDTO = new ClientRequestDTO(NettyChannelPoolConnector.this, request, completableFuture, callback);
dtoMap.putIfAbsent(request.getUri(), clientRequestDTO);
// Retrieves a channel pool for the given host
FixedChannelPool pool = this.poolMap.get(clientRequestDTO);
// Acquire a new channel from the pool
io.netty.util.concurrent.Future<Channel> f = pool.acquire();
f.addListener((FutureListener<Channel>) futureWrite -> {
//Succeeded with acquiring a channel
if (futureWrite.isSuccess()) {
Channel channel = futureWrite.getNow();
channel.closeFuture().addListener(closeListener);
try {
if(request.hasEntity()) {
channel.writeAndFlush(httpRequest);
final JerseyChunkedInput jerseyChunkedInput = new JerseyChunkedInput(channel);
request.setStreamProvider(contentLength -> jerseyChunkedInput);
if(HttpUtil.isTransferEncodingChunked(httpRequest)) {
channel.write(jerseyChunkedInput);
} else {
channel.write(jerseyChunkedInput);
}
executorService.execute(() -> {
channel.closeFuture().removeListener(closeListener);
try {
request.writeEntity();
} catch (IOException ex) {
callback.failure(ex);
completableFuture.completeExceptionally(ex);
}
});
channel.flush();
} else {
channel.closeFuture().removeListener(closeListener);
channel.writeAndFlush(httpRequest);
}
} catch (Exception ex) {
System.err.println("Failed to sync and flush http request" + ex.getLocalizedMessage());
}
pool.release(channel);
}
});
} catch (NullPointerException ex) {
System.err.println("Failed to acquire socket from pool " + ex.getLocalizedMessage());
}
} catch (Exception ex) {
completableFuture.completeExceptionally(ex);
return completableFuture;
}
return completableFuture;
}
This is my ChannelPoolHandler
public class SimpleChannelPoolHandler implements ChannelPoolHandler {
private ClientRequestDTO clientRequestDTO;
private boolean ssl;
private URI uri;
private int port;
SimpleChannelPoolHandler(URI uri) {
this.uri = uri;
if(uri != null) {
this.port = uri.getPort() != -1 ? uri.getPort() : "https".equals(uri.getScheme()) ? 443 : 80;
ssl = "https".equalsIgnoreCase(uri.getScheme());
}
}
@Override
public void channelReleased(Channel ch) throws Exception {
System.out.println("Channel released: " + ch.toString());
}
@Override
public void channelAcquired(Channel ch) throws Exception {
System.out.println("Channel acquired: " + ch.toString());
}
@Override
public void channelCreated(Channel ch) throws Exception {
System.out.println("Channel created: " + ch.toString());
int readTimeout = Integer.parseInt(ApplicationEnvironment.getInstance().get("READ_TIMEOUT"));
SocketChannelConfig channelConfig = (SocketChannelConfig) ch.config();
channelConfig.setConnectTimeoutMillis(2000);
ChannelPipeline channelPipeline = ch.pipeline();
if(ssl) {
SslContext sslContext = SslContextBuilder.forClient().trustManager(InsecureTrustManagerFactory.INSTANCE).build();
channelPipeline.addLast("ssl", sslContext.newHandler(ch.alloc(), uri.getHost(), this.port));
}
channelPipeline.addLast("client codec", new HttpClientCodec());
channelPipeline.addLast("chunked content writer",new ChunkedWriteHandler());
channelPipeline.addLast("content decompressor", new HttpContentDecompressor());
channelPipeline.addLast("read timeout", new ReadTimeoutHandler(readTimeout, TimeUnit.MILLISECONDS));
channelPipeline.addLast("business logic", new JerseyNettyClientHandler(this.uri));
}
}
And this is my SimpleInboundHandler
public class JerseyNettyClientHandler extends SimpleChannelInboundHandler<HttpObject> {
private final NettyChannelPoolConnector nettyChannelPoolConnector;
private final LinkedBlockingDeque<InputStream> isList = new LinkedBlockingDeque<>();
private final AsyncConnectorCallback asyncConnectorCallback;
private final ClientRequest jerseyRequest;
private final CompletableFuture future;
public JerseyNettyClientHandler(ClientRequestDTO clientRequestDTO) {
this.nettyChannelPoolConnector = clientRequestDTO.getNettyChannelPoolConnector();
ClientRequestDTO cdto = clientRequestDTO.getNettyChannelPoolConnector().getDtoMap().get(clientRequestDTO.getClientRequest());
this.asyncConnectorCallback = cdto.getCallback();
this.jerseyRequest = cdto.getClientRequest();
this.future = cdto.getFuture();
}
@Override
protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) throws Exception {
if(msg instanceof HttpResponse) {
final HttpResponse httpResponse = (HttpResponse) msg;
final ClientResponse response = new ClientResponse(new Response.StatusType() {
@Override
public int getStatusCode() {
return httpResponse.status().code();
}
@Override
public Response.Status.Family getFamily() {
return Response.Status.Family.familyOf(httpResponse.status().code());
}
@Override
public String getReasonPhrase() {
return httpResponse.status().reasonPhrase();
}
}, jerseyRequest);
for (Map.Entry<String, String> entry : httpResponse.headers().entries()) {
response.getHeaders().add(entry.getKey(), entry.getValue());
}
if((httpResponse.headers().contains(HttpHeaderNames.CONTENT_LENGTH) && HttpUtil.getContentLength(httpResponse) > 0) || HttpUtil.isTransferEncodingChunked(httpResponse)) {
ctx.channel().closeFuture().addListener(future -> isList.add(NettyInputStream.END_OF_INPUT_ERROR));
response.setEntityStream(new NettyInputStream(isList));
} else {
response.setEntityStream(new InputStream() {
@Override
public int read() {
return -1;
}
});
}
if(asyncConnectorCallback != null) {
nettyChannelPoolConnector.executorService.execute(() -> {
asyncConnectorCallback.response(response);
future.complete(response);
});
}
}
if(msg instanceof HttpContent) {
HttpContent content = (HttpContent) msg;
ByteBuf byteContent = content.content();
if(byteContent.isReadable()) {
byte[] bytes = new byte[byteContent.readableBytes()];
byteContent.getBytes(byteContent.readerIndex(), bytes);
isList.add(new ByteArrayInputStream(bytes));
}
}
if(msg instanceof LastHttpContent) {
isList.add(NettyInputStream.END_OF_INPUT);
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
if(asyncConnectorCallback != null) {
nettyChannelPoolConnector.executorService.execute(() -> asyncConnectorCallback.failure(cause));
}
future.completeExceptionally(cause);
isList.add(NettyInputStream.END_OF_INPUT_ERROR);
}
}
The references that need to be passed to the SimpleChannelInboundHandler are what is packed into the ClientRequestDTO, as seen in the first code block.
I am not sure, as this is untested code, but it could be achieved with the following:
SimpleChannelPool sPool = poolMap.get(Req.getAddress());
Future<Channel> f = sPool.acquire();
f.get().pipeline().addLast("inbound", new NettyClientInBoundHandler(Req, jbContext, ReportData));
f.addListener(new NettyClientFutureListener(this.Req, sPool));
where Req, jbContext, and ReportData are the inputs you want available inside the InboundHandler.
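Applied to the connector code above, that idea would look roughly like the following untested sketch: the per-request handler is added to the pipeline after the channel has been acquired, where clientRequestDTO is still in scope, instead of inside channelCreated() where it is not (the handler name and the commented-out cleanup are illustrative):
FixedChannelPool pool = this.poolMap.get(clientRequestDTO);
io.netty.util.concurrent.Future<Channel> f = pool.acquire();
f.addListener((FutureListener<Channel>) acquired -> {
    if (acquired.isSuccess()) {
        Channel channel = acquired.getNow();
        // The DTO (callback, request, future) is available here, so the handler
        // can be constructed per request instead of once per pool.
        channel.pipeline().addLast("business logic", new JerseyNettyClientHandler(clientRequestDTO));
        channel.writeAndFlush(httpRequest);
        // Once the response has been handled, remove the per-request handler
        // before returning the channel to the pool:
        // channel.pipeline().remove("business logic");
        // pool.release(channel);
    }
});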
I'm trying to add a new field to the request body in a Zuul pre-filter.
I'm using one of Netflix's Zuul sample projects from here, and my filter's implementation is very similar to UppercaseRequestEntityFilter from this sample.
I was able to apply a transformation such as uppercasing, or even to completely modify the request; the only problem is that I'm not able to make the request body longer than its original length.
This is my filter's implementation:
@Component
public class MyRequestEntityFilter extends ZuulFilter {
public String filterType() {
return "pre";
}
public int filterOrder() {
return 10;
}
public boolean shouldFilter() {
RequestContext context = getCurrentContext();
return true;
}
public Object run() {
try {
RequestContext context = getCurrentContext();
InputStream in = (InputStream) context.get("requestEntity");
if (in == null) {
in = context.getRequest().getInputStream();
}
String body = StreamUtils.copyToString(in, Charset.forName("UTF-8"));
body = body.replaceFirst("qqq", "qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq");
// body = body.toUpperCase();
context.set("requestEntity", new ServletInputStreamWrapper(body.getBytes("UTF-8")));
}
catch (IOException e) {
rethrowRuntimeException(e);
}
return null;
}
}
This is the request that I'm doing:
This is the response that I'm receiving:
I was able to obtain what I wanted using the implementation of PrefixRequestEntityFilter from sample-zuul-examples. Wrapping the request and overriding getContentLength()/getContentLengthLong() is presumably what lets the longer body through, since otherwise the original Content-Length still applies:
@Component
public class MyRequestEntityFilter extends ZuulFilter {
public String filterType() {
return "pre";
}
public int filterOrder() {
return 10;
}
public boolean shouldFilter() {
RequestContext context = getCurrentContext();
return true;
}
public Object run() {
try {
RequestContext context = getCurrentContext();
InputStream in = (InputStream) context.get("requestEntity");
if (in == null) {
in = context.getRequest().getInputStream();
}
String body = StreamUtils.copyToString(in, Charset.forName("UTF-8"));
body = body.replaceFirst("qqq", "qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq");
byte[] bytes = body.getBytes("UTF-8");
context.setRequest(new HttpServletRequestWrapper(getCurrentContext().getRequest()) {
@Override
public ServletInputStream getInputStream() throws IOException {
return new ServletInputStreamWrapper(bytes);
}
@Override
public int getContentLength() {
return bytes.length;
}
@Override
public long getContentLengthLong() {
return bytes.length;
}
});
}
catch (IOException e) {
rethrowRuntimeException(e);
}
return null;
}
}
Here is my code:
// Observable from RxView
RxView.clicks(mBtnLogin)
.throttleFirst(500, TimeUnit.MILLISECONDS)
.subscribe(new Action1<Void>() {
@Override
public void call(Void aVoid) {
String userName = mEditUserName.getText().toString();
String passWord = mEditPassWord.getText().toString();
if (TextUtils.isEmpty(userName)) {
Toast.makeText(LoginActivity.this, R.string.input_user_name, Toast.LENGTH_SHORT).show();
return;
}
if (TextUtils.isEmpty(passWord)) {
Toast.makeText(LoginActivity.this, R.string.input_pass_word, Toast.LENGTH_SHORT).show();
return;
}
LoginAction action = Constants.retrofit().create(LoginAction.class);
// Observable from Retrofit
Observable<String> call = action.login(userName, MD5.encode(passWord));
call.subscribeOn(Schedulers.io())
.subscribe(new Observer<String>() {
@Override
public void onCompleted() {
System.out.println("completed");
}
@Override
public void onError(Throwable e) {
e.printStackTrace();
}
@Override
public void onNext(String s) {
System.out.println("next" + s);
}
});
}
});
Is there any way to combine the Observable from RxView and the Observable from Retrofit?
I think the code is ugly and does not follow ReactiveX conventions.
Yes, you would use the .flatMap() operator:
RxView.clicks(mButton)
.throttleFirst(500, TimeUnit.MILLISECONDS)
.subscribeOn(AndroidSchedulers.mainThread())
.flatMap(new Func1<Void, Observable<Response>>() {
@Override
public Observable<Response> call(Void aVoid) {
return apiService.getResponse().subscribeOn(Schedulers.io());
}
})
.observeOn(AndroidSchedulers.mainThread())
.subscribe();
It would look a bit better with lambdas:
RxView.clicks(mButton)
.throttleFirst(500, TimeUnit.MILLISECONDS)
.subscribeOn(AndroidSchedulers.mainThread())
.flatMap(aVoid -> apiService.getResponse().subscribeOn(Schedulers.io()))
.observeOn(AndroidSchedulers.mainThread())
.subscribe();
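A possible variation, assuming RxJava 1 as in the question: if a new click should cancel a request that is still in flight, .switchMap() can be used instead of .flatMap(), since it unsubscribes from the previous inner Observable when a new click arrives:
RxView.clicks(mButton)
        .throttleFirst(500, TimeUnit.MILLISECONDS)
        .subscribeOn(AndroidSchedulers.mainThread())
        // switchMap drops the previous login request when the button is clicked again
        .switchMap(aVoid -> apiService.getResponse().subscribeOn(Schedulers.io()))
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe();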
I have implemented a simple RxEventBus which starts emitting events even if there are no subscribers. I want to cache the last emitted event, so that when the first/next subscriber subscribes, it receives only that one (last) item.
I created test class which describes my problem:
public class RxBus {
ApplicationsRxEventBus applicationsRxEventBus;
public RxBus() {
applicationsRxEventBus = new ApplicationsRxEventBus();
}
public static void main(String[] args) {
RxBus rxBus = new RxBus();
rxBus.start();
}
private void start() {
ExecutorService executorService = Executors.newScheduledThreadPool(2);
Runnable runnable0 = () -> {
while (true) {
long currentTime = System.currentTimeMillis();
System.out.println("emiting: " + currentTime);
applicationsRxEventBus.emit(new ApplicationsEvent(currentTime));
try {
Thread.sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
};
Runnable runnable1 = () -> applicationsRxEventBus
.getBus()
.subscribe(new Subscriber<ApplicationsEvent>() {
@Override
public void onCompleted() {
}
@Override
public void onError(Throwable throwable) {
}
@Override
public void onNext(ApplicationsEvent applicationsEvent) {
System.out.println("runnable 1: " + applicationsEvent.number);
}
});
Runnable runnable2 = () -> applicationsRxEventBus
.getBus()
.subscribe(new Subscriber<ApplicationsEvent>() {
@Override
public void onCompleted() {
}
@Override
public void onError(Throwable throwable) {
}
@Override
public void onNext(ApplicationsEvent applicationsEvent) {
System.out.println("runnable 2: " + applicationsEvent.number);
}
});
executorService.execute(runnable0);
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
e.printStackTrace();
}
executorService.execute(runnable1);
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
e.printStackTrace();
}
executorService.execute(runnable2);
}
private class ApplicationsRxEventBus {
private final Subject<ApplicationsEvent, ApplicationsEvent> mRxBus;
private final Observable<ApplicationsEvent> mBusObservable;
public ApplicationsRxEventBus() {
mRxBus = new SerializedSubject<>(BehaviorSubject.<ApplicationsEvent>create());
mBusObservable = mRxBus.cache();
}
public void emit(ApplicationsEvent event) {
mRxBus.onNext(event);
}
public Observable<ApplicationsEvent> getBus() {
return mBusObservable;
}
}
private class ApplicationsEvent {
long number;
public ApplicationsEvent(long number) {
this.number = number;
}
}
}
runnable0 emits events even when there are no subscribers. runnable1 subscribes after 3 seconds and receives the last item (and this is OK). But runnable2 subscribes 3 seconds after runnable1 and receives all the items that runnable1 received. I only need the last item to be delivered to runnable2. I have tried caching the event in the RxBus:
private class ApplicationsRxEventBus {
private final Subject<ApplicationsEvent, ApplicationsEvent> mRxBus;
private final Observable<ApplicationsEvent> mBusObservable;
private ApplicationsEvent event;
public ApplicationsRxEventBus() {
mRxBus = new SerializedSubject<>(BehaviorSubject.<ApplicationsEvent>create());
mBusObservable = mRxBus;
}
public void emit(ApplicationsEvent event) {
this.event = event;
mRxBus.onNext(event);
}
public Observable<ApplicationsEvent> getBus() {
return mBusObservable.doOnSubscribe(() -> emit(event));
}
}
But the problem is that when runnable2 subscribes, runnable1 receives the event twice:
emiting: 1447183225122
runnable 1: 1447183225122
runnable 1: 1447183225122
runnable 2: 1447183225122
emiting: 1447183225627
runnable 1: 1447183225627
runnable 2: 1447183225627
I am sure that there is an RxJava operator for this. How can I achieve it?
Your ApplicationsRxEventBus does extra work by re-emitting a stored event whenever someone subscribes, on top of all the events the cache() already replays.
You only need a single BehaviorSubject + toSerialized(): it holds onto the very last event and re-emits it to new Subscribers by itself.
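A minimal sketch of that suggestion, keeping the question's types (RxJava 1, untested):
private class ApplicationsRxEventBus {
    // BehaviorSubject replays only the most recent event to each new Subscriber,
    // so neither cache() nor the doOnSubscribe() workaround is needed.
    private final Subject<ApplicationsEvent, ApplicationsEvent> mRxBus =
            BehaviorSubject.<ApplicationsEvent>create().toSerialized();
    public void emit(ApplicationsEvent event) {
        mRxBus.onNext(event);
    }
    public Observable<ApplicationsEvent> getBus() {
        return mRxBus;
    }
}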
You are using the wrong interface. When you subscribe to a cold Observable you get all of its events. You need to turn it into a hot Observable first. This is done by creating a ConnectableObservable from your Observable using its publish() method. Your Observers then call connect() to start receiving events.
You can also read more about this in the Hot and Cold Observables section of the tutorial.
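For reference, a bare-bones publish()/connect() example of turning a cold Observable hot (RxJava 1; the interval source is purely illustrative):
ConnectableObservable<Long> hot =
        Observable.interval(500, TimeUnit.MILLISECONDS).publish();
hot.connect(); // the source starts emitting now, with or without subscribers
// Subscribers that arrive later only see events emitted after they subscribe.
// (Note that publish() does not replay the last event; replay(1) would.)
hot.subscribe(tick -> System.out.println("late subscriber: " + tick));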