I am using OkHttp for networking and currently get a character stream from response.charStream(), which I then pass to Gson for parsing. Once parsed and inflated, I deflate the model again to save it to disk using a stream. It seems like extra work to have to go from networkReader to Model to DiskWriter. Is it possible with Okio to instead go from networkReader to JSONParser(reader) as well as networkReader to DiskWriter(reader)? Basically, I want to be able to read from the network stream twice.
You can use a MirroredSource (taken from this gist).
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;
import okio.Buffer;
import okio.Source;
import okio.Timeout;
public class MirroredSource {
private final Buffer buffer = new Buffer();
private final Source source;
private final AtomicBoolean sourceExhausted = new AtomicBoolean();
public MirroredSource(final Source source) {
this.source = source;
}
public Source original() {
return new okio.Source() {
@Override
public long read(final Buffer sink, final long byteCount) throws IOException {
final long bytesRead = source.read(sink, byteCount);
if (bytesRead > 0) {
synchronized (buffer) {
sink.copyTo(buffer, sink.size() - bytesRead, bytesRead);
// Notify the mirror to continue
buffer.notify();
}
} else {
sourceExhausted.set(true);
}
return bytesRead;
}
@Override
public Timeout timeout() {
return source.timeout();
}
@Override
public void close() throws IOException {
source.close();
sourceExhausted.set(true);
synchronized (buffer) {
buffer.notify();
}
}
};
}
public Source mirror() {
return new okio.Source() {
@Override
public long read(final Buffer sink, final long byteCount) throws IOException {
synchronized (buffer) {
while (!sourceExhausted.get()) {
// only need to synchronise on reads when the source is not exhausted.
if (buffer.request(byteCount)) {
return buffer.read(sink, byteCount);
} else {
try {
buffer.wait();
} catch (final InterruptedException e) {
//No op
}
}
}
}
return buffer.read(sink, byteCount);
}
@Override
public Timeout timeout() {
return new Timeout();
}
@Override
public void close() throws IOException { /* not used */ }
};
}
}
Usage would look like:
MirroredSource mirroredSource = new MirroredSource(response.body().source()); //Or however you're getting your original source
Source originalSource = mirroredSource.original();
Source secondSource = mirroredSource.mirror();
doParsing(originalSource);
writeToDisk(secondSource);
originalSource.close();
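Note that the mirror only receives bytes as the original is read, and its read() blocks waiting for the original side, so the two sources have to be consumed on different threads. A rough sketch of the wiring (untested; MyModel and the output file are placeholders, not from the original question):
ExecutorService executor = Executors.newSingleThreadExecutor();
MirroredSource mirroredSource = new MirroredSource(response.body().source());
BufferedSource parserSource = Okio.buffer(mirroredSource.original());
BufferedSource diskSource = Okio.buffer(mirroredSource.mirror());
// Drain the mirror on a background thread; it only sees bytes after the
// original side has pulled them from the network.
Future<?> diskWrite = executor.submit(() -> {
    try (BufferedSink sink = Okio.buffer(Okio.sink(new File("response.json")))) {
        sink.writeAll(diskSource);
        return null;
    }
});
// Parse on the current thread; every byte read here is copied into the mirror.
MyModel model = new Gson().fromJson(new InputStreamReader(parserSource.inputStream(), StandardCharsets.UTF_8), MyModel.class);
parserSource.close();
diskWrite.get(); // wait for the disk copy to finish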
If you want something more robust, you can repurpose Relay from OkHttp.
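Relay lives in OkHttp's internal package (okhttp3.internal.cache2), so the sketch below is an assumption about a 3.x snapshot of that class rather than stable API; it tees the upstream source into a file and can then serve the same bytes to several readers:
// Internal API: Relay.edit/newSource may change between OkHttp releases.
File cacheFile = File.createTempFile("response", ".tmp");
Relay relay = Relay.edit(cacheFile, response.body().source(), ByteString.EMPTY, 1024 * 1024);
BufferedSource parserSource = Okio.buffer(relay.newSource());
BufferedSource diskSource = Okio.buffer(relay.newSource());
// Each newSource() can replay the full stream, backed by cacheFile.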
I have started working on a FreeMarker debugger using breakpoints etc. The supplied framework is based on Java RMI. So far I get it to suspend at one breakpoint, but then ... nothing.
Is there a very basic example setup for the server part and the client part, other than the debug/impl classes supplied with the sources? That would be of great help.
This is my server class:
class DebuggerServer {
private final int port;
private final String templateName1;
private final Environment templateEnv;
private boolean stop = false;
public DebuggerServer(String templateName) throws IOException {
System.setProperty("freemarker.debug.password", "hello");
port = SecurityUtilities.getSystemProperty("freemarker.debug.port", Debugger.DEFAULT_PORT).intValue();
System.setProperty("freemarker.debug.password", "hello");
Configuration cfg = new Configuration();
// Some other recommended settings:
cfg.setIncompatibleImprovements(new Version(2, 3, 20));
cfg.setDefaultEncoding("UTF-8");
cfg.setLocale(Locale.US);
cfg.setTemplateExceptionHandler(TemplateExceptionHandler.RETHROW_HANDLER);
Template template = cfg.getTemplate(templateName);
templateName1 = template.getName();
System.out.println("Debugging " + templateName1);
Map<String, Object> root = new HashMap<>();
Writer consoleWriter = new OutputStreamWriter(System.out);
templateEnv = new Environment(template, null, consoleWriter);
DebuggerService.registerTemplate(template);
}
public void start() {
new Thread(new Runnable() {
@Override
public void run() {
startInternal();
}
}, "FreeMarker Debugger Server Acceptor").start();
}
private void startInternal() {
boolean handled = false;
while (!stop) {
List breakPoints = DebuggerService.getBreakpoints(templateName1);
for (int i = 0; i < breakPoints.size(); i++) {
try {
Breakpoint bp = (Breakpoint) breakPoints.get(i);
handled = DebuggerService.suspendEnvironment(templateEnv, templateName1, bp.getLine());
} catch (RemoteException e) {
System.err.println(e.getMessage());
}
}
}
}
public void stop() {
this.stop = true;
}
}
This is the client class:
class DebuggerClientHandler {
private final Debugger client;
private boolean stop = false;
public DebuggerClientHandler(String templateName) throws IOException {
// System.setProperty("freemarker.debug.password", "hello");
// System.setProperty("java.rmi.server.hostname", "192.168.2.160");
client = DebuggerClient.getDebugger(InetAddress.getByName("localhost"), Debugger.DEFAULT_PORT, "hello");
client.addDebuggerListener(environmentSuspendedEvent -> {
System.out.println("Break " + environmentSuspendedEvent.getName() + " at line " + environmentSuspendedEvent.getLine());
// environmentSuspendedEvent.getEnvironment().resume();
});
}
public void start() {
new Thread(new Runnable() {
@Override
public void run() {
startInternal();
}
}, "FreeMarker Debugger Server").start();
}
private void startInternal() {
while (!stop) {
}
}
public void stop() {
this.stop = true;
}
public void addBreakPoint(String s, int i) throws RemoteException {
Breakpoint bp = new Breakpoint(s, i);
List breakpoints = client.getBreakpoints();
client.addBreakpoint(bp);
}
}
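For completeness, this is roughly how I wire the two classes together (the template name and line number are just examples):
public class DebuggerDemo {
    public static void main(String[] args) throws Exception {
        DebuggerServer server = new DebuggerServer("test.ftl");
        server.start();
        DebuggerClientHandler client = new DebuggerClientHandler("test.ftl");
        client.addBreakPoint("test.ftl", 5); // suspend at line 5 of the template
        client.start();
    }
}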
Liferay IDE (https://github.com/liferay/liferay-ide) has FreeMarker template debug support (https://issues.liferay.com/browse/IDE-976), so somehow they managed to use it. I have never seen it in action though. Other than that, I'm not aware of anything that uses the debug API.
I have wasted several hours trying to solve an issue with the use of Netty's channel pool map and a JAX-RS client.
I used Jersey's own Netty connector as an inspiration, but exchanged Netty's channel with Netty's channel pool map.
https://jersey.github.io/apidocs/2.27/jersey/org/glassfish/jersey/netty/connector/NettyConnectorProvider.html
My problem is that I have references that I need inside my custom SimpleChannelInboundHandler. However, by the design of Netty's channel pool map, I cannot pass the references through my custom ChannelPoolHandler, because once the pool map has created a pool, the constructor of the channel pool handler never runs again.
This is the method that acquires a pool and checks out a channel to make an HTTP request.
@Override
public Future<?> apply(ClientRequest request, AsyncConnectorCallback callback) {
final CompletableFuture<Object> completableFuture = new CompletableFuture<>();
try{
HttpRequest httpRequest = buildHttpRequest(request);
// guard against prematurely closed channel
final GenericFutureListener<io.netty.util.concurrent.Future<? super Void>> closeListener =
future -> {
if (!completableFuture.isDone()) {
completableFuture.completeExceptionally(new IOException("Channel closed."));
}
};
try {
ClientRequestDTO clientRequestDTO = new ClientRequestDTO(NettyChannelPoolConnector.this, request, completableFuture, callback);
dtoMap.putIfAbsent(request.getUri(), clientRequestDTO);
// Retrieves a channel pool for the given host
FixedChannelPool pool = this.poolMap.get(clientRequestDTO);
// Acquire a new channel from the pool
io.netty.util.concurrent.Future<Channel> f = pool.acquire();
f.addListener((FutureListener<Channel>) futureWrite -> {
//Succeeded with acquiring a channel
if (futureWrite.isSuccess()) {
Channel channel = futureWrite.getNow();
channel.closeFuture().addListener(closeListener);
try {
if(request.hasEntity()) {
channel.writeAndFlush(httpRequest);
final JerseyChunkedInput jerseyChunkedInput = new JerseyChunkedInput(channel);
request.setStreamProvider(contentLength -> jerseyChunkedInput);
if(HttpUtil.isTransferEncodingChunked(httpRequest)) {
channel.write(jerseyChunkedInput);
} else {
channel.write(jerseyChunkedInput);
}
executorService.execute(() -> {
channel.closeFuture().removeListener(closeListener);
try {
request.writeEntity();
} catch (IOException ex) {
callback.failure(ex);
completableFuture.completeExceptionally(ex);
}
});
channel.flush();
} else {
channel.closeFuture().removeListener(closeListener);
channel.writeAndFlush(httpRequest);
}
} catch (Exception ex) {
System.err.println("Failed to sync and flush http request" + ex.getLocalizedMessage());
}
pool.release(channel);
}
});
} catch (NullPointerException ex) {
System.err.println("Failed to acquire socket from pool " + ex.getLocalizedMessage());
}
} catch (Exception ex) {
completableFuture.completeExceptionally(ex);
return completableFuture;
}
return completableFuture;
}
This is my ChannelPoolHandler
public class SimpleChannelPoolHandler implements ChannelPoolHandler {
private ClientRequestDTO clientRequestDTO;
private boolean ssl;
private URI uri;
private int port;
SimpleChannelPoolHandler(URI uri) {
this.uri = uri;
if(uri != null) {
this.port = uri.getPort() != -1 ? uri.getPort() : "https".equals(uri.getScheme()) ? 443 : 80;
ssl = "https".equalsIgnoreCase(uri.getScheme());
}
}
@Override
public void channelReleased(Channel ch) throws Exception {
System.out.println("Channel released: " + ch.toString());
}
@Override
public void channelAcquired(Channel ch) throws Exception {
System.out.println("Channel acquired: " + ch.toString());
}
@Override
public void channelCreated(Channel ch) throws Exception {
System.out.println("Channel created: " + ch.toString());
int readTimeout = Integer.parseInt(ApplicationEnvironment.getInstance().get("READ_TIMEOUT"));
SocketChannelConfig channelConfig = (SocketChannelConfig) ch.config();
channelConfig.setConnectTimeoutMillis(2000);
ChannelPipeline channelPipeline = ch.pipeline();
if(ssl) {
SslContext sslContext = SslContextBuilder.forClient().trustManager(InsecureTrustManagerFactory.INSTANCE).build();
channelPipeline.addLast("ssl", sslContext.newHandler(ch.alloc(), uri.getHost(), this.port));
}
channelPipeline.addLast("client codec", new HttpClientCodec());
channelPipeline.addLast("chunked content writer",new ChunkedWriteHandler());
channelPipeline.addLast("content decompressor", new HttpContentDecompressor());
channelPipeline.addLast("read timeout", new ReadTimeoutHandler(readTimeout, TimeUnit.MILLISECONDS));
channelPipeline.addLast("business logic", new JerseyNettyClientHandler(this.uri));
}
}
And this is my SimpleChannelInboundHandler:
public class JerseyNettyClientHandler extends SimpleChannelInboundHandler<HttpObject> {
private final NettyChannelPoolConnector nettyChannelPoolConnector;
private final LinkedBlockingDeque<InputStream> isList = new LinkedBlockingDeque<>();
private final AsyncConnectorCallback asyncConnectorCallback;
private final ClientRequest jerseyRequest;
private final CompletableFuture future;
public JerseyNettyClientHandler(ClientRequestDTO clientRequestDTO) {
this.nettyChannelPoolConnector = clientRequestDTO.getNettyChannelPoolConnector();
ClientRequestDTO cdto = clientRequestDTO.getNettyChannelPoolConnector().getDtoMap().get(clientRequestDTO.getClientRequest());
this.asyncConnectorCallback = cdto.getCallback();
this.jerseyRequest = cdto.getClientRequest();
this.future = cdto.getFuture();
}
@Override
protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) throws Exception {
if(msg instanceof HttpResponse) {
final HttpResponse httpResponse = (HttpResponse) msg;
final ClientResponse response = new ClientResponse(new Response.StatusType() {
@Override
public int getStatusCode() {
return httpResponse.status().code();
}
@Override
public Response.Status.Family getFamily() {
return Response.Status.Family.familyOf(httpResponse.status().code());
}
@Override
public String getReasonPhrase() {
return httpResponse.status().reasonPhrase();
}
}, jerseyRequest);
for (Map.Entry<String, String> entry : httpResponse.headers().entries()) {
response.getHeaders().add(entry.getKey(), entry.getValue());
}
if((httpResponse.headers().contains(HttpHeaderNames.CONTENT_LENGTH) && HttpUtil.getContentLength(httpResponse) > 0) || HttpUtil.isTransferEncodingChunked(httpResponse)) {
ctx.channel().closeFuture().addListener(future -> isList.add(NettyInputStream.END_OF_INPUT_ERROR));
response.setEntityStream(new NettyInputStream(isList));
} else {
response.setEntityStream(new InputStream() {
@Override
public int read() {
return -1;
}
});
}
if(asyncConnectorCallback != null) {
nettyChannelPoolConnector.executorService.execute(() -> {
asyncConnectorCallback.response(response);
future.complete(response);
});
}
}
if(msg instanceof HttpContent) {
HttpContent content = (HttpContent) msg;
ByteBuf byteContent = content.content();
if(byteContent.isReadable()) {
byte[] bytes = new byte[byteContent.readableBytes()];
byteContent.getBytes(byteContent.readerIndex(), bytes);
isList.add(new ByteArrayInputStream(bytes));
}
}
if(msg instanceof LastHttpContent) {
isList.add(NettyInputStream.END_OF_INPUT);
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
if(asyncConnectorCallback != null) {
nettyChannelPoolConnector.executorService.execute(() -> asyncConnectorCallback.failure(cause));
}
future.completeExceptionally(cause);
isList.add(NettyInputStream.END_OF_INPUT_ERROR);
}
}
The references that need to be passed to the SimpleChannelInboundHandler are what is packed into the ClientRequestDTO, as seen in the first code block.
I am not sure, as this is untested code, but it could be achieved with something like the following.
SimpleChannelPool sPool = poolMap.get(Req.getAddress());
Future<Channel> f = sPool.acquire();
f.get().pipeline().addLast("inbound", new NettyClientInBoundHandler(Req, jbContext, ReportData));
f.addListener(new NettyClientFutureListener(this.Req, sPool));
where Req, jbContext and ReportData could be the input data for the InboundHandler.
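Another untested option along the same lines is to attach the per-request state to the channel itself using Netty's channel attributes, so a handler that the pool installed once can still look up request-specific data. The names below (ClientRequestDTO, clientRequestDTO, httpRequest) are taken from the question:
public class PooledClientHandler extends SimpleChannelInboundHandler<HttpObject> {
    // Type-safe key under which the per-request context is stored on the channel.
    public static final AttributeKey<ClientRequestDTO> REQUEST_DTO =
            AttributeKey.valueOf("requestDto");
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) {
        // Look the context up instead of receiving it through the constructor.
        ClientRequestDTO dto = ctx.channel().attr(REQUEST_DTO).get();
        // dto.getCallback(), dto.getFuture(), ... are now available per request.
    }
}
The acquiring side then sets the attribute before writing:
channel.attr(PooledClientHandler.REQUEST_DTO).set(clientRequestDTO);
channel.writeAndFlush(httpRequest);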
I want to build an API based on Futures (from java.util.concurrent) that is powered by a custom protocol on top of Netty (version 4). The basic idea is to write a simple library that abstracts the underlying Netty implementation and makes it easier to make requests.
Using this library, one should be able to write something like this:
Request req = new Request(...);
Future<Response> responseFuture = new ServerIFace(host, port).call(req);
// For example, let's block until this future is resolved
Response res = responseFuture.get().getResult();
Underneath this code, a Netty client is connected:
public class ServerIFace {
private Bootstrap bootstrap;
private EventLoopGroup workerGroup;
private String host;
private int port;
public ServerIFace(String host, int port) {
this.host = host;
this.port = port;
this.workerGroup = new NioEventLoopGroup();
bootstrap();
}
private void bootstrap() {
bootstrap = new Bootstrap();
bootstrap.group(workerGroup);
bootstrap.channel(NioSocketChannel.class);
bootstrap.handler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new ObjectEncoder());
ch.pipeline().addLast(new ObjectDecoder(ClassResolvers.cacheDisabled(Response.class.getClassLoader())));
ch.pipeline().addLast("response", new ResponseReceiverChannelHandler());
}
});
}
public Future<Response> call(final Request request) throws InterruptedException {
CompletableFuture<Response> responseFuture = new CompletableFuture<>();
Channel ch = bootstrap.connect(host, port).sync().channel();
ch.writeAndFlush(request).addListener((f) -> {
if (f.isSuccess()) {
System.out.println("Wrote successfully");
} else {
f.cause().printStackTrace();
}
});
ChannelFuture closeFuture = ch.closeFuture();
// Have to 'convert' ChannelFuture to java.util.concurrent.Future
closeFuture.addListener((f) -> {
if (f.isSuccess()) {
// How to get this response?
Response response = ((ResponseReceiverChannelHandler) ch.pipeline().get("response")).getResponse();
responseFuture.complete(response);
} else {
f.cause().printStackTrace();
responseFuture.cancel(true);
}
ch.close();
}).sync();
return responseFuture;
}
}
Now, as you can see, in order to abstract Netty's inner ChannelFuture, I have to 'convert' it to Java's Future (I'm aware that ChannelFuture is derived from Future, but that information doesn't seem useful at this point).
Right now, I'm capturing this Response object in the last handler of my inbound part of the client pipeline, the ResponseReceiverChannelHandler.
public class ResponseReceiverChannelHandler extends ChannelInboundHandlerAdapter {
private Response response = null;
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
this.response = (Response)msg;
ctx.close();
}
public Response getResponse() {
return response;
}
}
Since I'm new to Netty and these things in general, I'm looking for a cleaner, thread-safe way of delivering this object to the API user.
Correct me if I'm wrong, but none of the Netty examples show how to achieve this, and most of the Client examples just print out whatever they get from Server.
Please note that my main goal here is to learn more about Netty, and that this code has no production purposes.
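One direction I'm considering (an untested sketch, using the Response type from my pipeline): give each connection its own handler instance that completes a per-request CompletableFuture, so no mutable response field has to be read from another thread.
public class ResponsePromiseHandler extends ChannelInboundHandlerAdapter {
    private final CompletableFuture<Response> promise;
    public ResponsePromiseHandler(CompletableFuture<Response> promise) {
        this.promise = promise;
    }
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        promise.complete((Response) msg); // safely publishes the response to the waiting thread
        ctx.close();
    }
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        promise.completeExceptionally(cause);
        ctx.close();
    }
}
call() would then create the CompletableFuture, add this handler to the freshly connected channel's pipeline, write the request, and return the future.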
For reference (although I don't think it's that relevant), here's the Server code.
public class Server {
public static class RequestProcessorHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
ChannelFuture future;
if (msg instanceof Request) {
Request req = (Request)msg;
Response res = someFunctionOf(req); // someFunctionOf is a placeholder for whatever derives the response
future = ctx.writeAndFlush(res);
} else {
future = ctx.writeAndFlush("Error, not a request!");
}
future.addListener((f) -> {
if (f.isSuccess()) {
System.out.println("Response sent!");
} else {
System.out.println("Response not sent!");
f.cause().printStackTrace();
}
});
}
}
public int port;
public Server(int port) {
this.port = port;
}
public void run() throws Exception {
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new ObjectDecoder(ClassResolvers.cacheDisabled(Request.class.getClassLoader())));
ch.pipeline().addLast(new ObjectEncoder());
// Not really shutting down this threadpool but it's ok for now
ch.pipeline().addLast(new DefaultEventExecutorGroup(2), new RequestProcessorHandler());
}
})
.option(ChannelOption.SO_BACKLOG, 128)
.childOption(ChannelOption.SO_KEEPALIVE, true);
ChannelFuture f = b.bind(port).sync();
f.channel().closeFuture().sync();
} finally {
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
}
}
public static void main(String[] args) throws Exception {
int port;
if (args.length > 0) {
port = Integer.parseInt(args[0]);
} else {
port = 8080;
}
new Server(port).run();
}
}
We are working with Project Reactor and have a huge problem right now. This is how we produce (publish) our data:
public Flux<String> getAllFlux() {
return Flux.<String>create(sink -> {
new Thread(){
public void run(){
Iterator<Cache.Entry<String, MyObject>> iterator = getAllIterator();
ObjectMapper mapper = new ObjectMapper();
while(iterator.hasNext()) {
try {
sink.next(mapper.writeValueAsString(iterator.next().getValue()));
} catch (IOException e) {
e.printStackTrace();
}
}
sink.complete();
}
}.start();
});
}
As you can see, we take the data from an iterator and publish each item in it as a JSON string. Our subscriber does the following:
flux.subscribe(new Subscriber<String>() {
private Subscription s;
int amount = 1; // the amount of received flux payload at a time
int onNextAmount;
ObjectMapper mapper = new ObjectMapper();
@Override
public void onSubscribe(Subscription s) {
System.out.println("subscribe");
this.s = s;
this.s.request(amount);
}
@Override
public void onNext(String item) {
MyObject myObject = null;
try {
System.out.println(item);
myObject = mapper.readValue(item, MyObject.class);
System.out.println(myObject.toString());
} catch (IOException e) {
System.out.println(item);
System.out.println("failed: " + e.getLocalizedMessage());
}
onNextAmount++;
if (onNextAmount % amount == 0) {
this.s.request(amount);
}
}
@Override
public void onError(Throwable t) {
System.out.println(t.getLocalizedMessage());
}
@Override
public void onComplete() {
System.out.println("completed");
}
});
As you can see, we simply print the String item we receive and parse it into an object using Jackson. The problem is that for most of our items everything works fine:
{"itemId": "someId", "itemDesc", "some description"}
But for some items the String is cut off like this for example:
{"itemId": "some"
And the next item after that would be
"Id", "itemDesc", "some description"}
There is no pattern to those cuts; it is completely random and different every time we run the code. Of course, Jackson then fails with an Unexpected end of input error.
So what is causing this behaviour, and how can we solve it?
Solution:
Send the object inside the Flux instead of the String:
public Flux<ItemIgnite> getAllFlux() {
return Flux.create(sink -> {
new Thread(){
public void run(){
Iterator<Cache.Entry<String, ItemIgnite>> iterator = getAllIterator();
while(iterator.hasNext()) {
sink.next(iterator.next().getValue());
}
sink.complete(); // complete the Flux once the iterator is drained
}
}.start();
});
}
and use the following produces type:
@RequestMapping(value="/allFlux", method=RequestMethod.GET, produces="application/stream+json")
The key here is to use stream+json and not plain json: with application/stream+json each element of the Flux is serialized and flushed as its own JSON document, so whole objects go out intact instead of a raw character stream that may be split into chunks at arbitrary points.
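Assuming a Spring WebFlux controller, the endpoint could then be declared like this (a sketch; ItemService and its getAllFlux() stand in for however the Flux above is exposed):
@RestController
public class ItemController {
    private final ItemService itemService;
    public ItemController(ItemService itemService) {
        this.itemService = itemService;
    }
    // Each ItemIgnite element is serialized and flushed as its own JSON document.
    @RequestMapping(value = "/allFlux", method = RequestMethod.GET, produces = "application/stream+json")
    public Flux<ItemIgnite> allFlux() {
        return itemService.getAllFlux();
    }
}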
In this case only the odd lines have meaningful data, and there is no character that uniquely identifies those lines. My intention is to get something equivalent to the following example:
Stream<DomainObject> res = Files.lines(src)
.filter(line -> isOddLine())
.map(line -> toDomainObject(line));
Is there any “clean” way to do it, without sharing global state?
No, there's no way to do this conveniently with the API. (Basically the same reason as to why there is no easy way of having a zipWithIndex, see Is there a concise way to iterate over a stream with indices in Java 8?).
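To make the problem concrete, the tempting workaround is to smuggle a counter into the predicate; it appears to work on a sequential stream but relies on shared mutable state and silently breaks if the stream is ever made parallel:
AtomicLong lineNumber = new AtomicLong();
Stream<DomainObject> res = Files.lines(src)
        .filter(line -> lineNumber.incrementAndGet() % 2 == 1) // stateful predicate: order-dependent
        .map(line -> toDomainObject(line));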
You can still use Stream, but go for an iterator:
Iterator<String> iter = Files.lines(src).iterator();
while (iter.hasNext()) {
iter.next(); // discard
if (iter.hasNext()) {
toDomainObject(iter.next()); // use
}
}
(You might want to use try-with-resource on that stream though.)
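For example, the same loop with the stream held in try-with-resources so the underlying file handle is always released (toDomainObject is the method from the question):
try (Stream<String> lines = Files.lines(src)) {
    Iterator<String> iter = lines.iterator();
    while (iter.hasNext()) {
        iter.next(); // discard
        if (iter.hasNext()) {
            toDomainObject(iter.next()); // use
        }
    }
}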
A clean way is to go one level deeper and implement a Spliterator. On this level you can control the iteration over the stream elements and simply iterate over two items whenever the downstream requests one item:
public class OddLines<T> extends Spliterators.AbstractSpliterator<T>
implements Consumer<T> {
public static <T> Stream<T> oddLines(Stream<T> source) {
return StreamSupport.stream(new OddLines<>(source.spliterator()), false);
}
private static long odd(long l) { return l==Long.MAX_VALUE? l: (l+1)/2; }
Spliterator<T> originalLines;
OddLines(Spliterator<T> source) {
super(odd(source.estimateSize()), source.characteristics());
originalLines=source;
}
@Override
public boolean tryAdvance(Consumer<? super T> action) {
if(originalLines==null || !originalLines.tryAdvance(action))
return false;
if(!originalLines.tryAdvance(this)) originalLines=null;
return true;
}
@Override
public void accept(T t) {} // deliberately a no-op: tryAdvance uses it to consume and discard the skipped element
}
Then you can use it like
Stream<DomainObject> res = OddLines.oddLines(Files.lines(src))
.map(line -> toDomainObject(line));
This solution has no side effects and retains most advantages of the Stream API, like lazy evaluation. However, it should be clear that it doesn't have useful semantics for unordered stream processing (beware of subtle aspects like using forEachOrdered rather than forEach when performing a terminal action on all elements), and while it supports parallel processing in principle, it's unlikely to be very efficient…
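For example, if the result must come out in file order even when the stream is parallel, the terminal operation has to be forEachOrdered (illustrative only):
OddLines.oddLines(Files.lines(src))
        .parallel()
        .forEachOrdered(System.out::println); // forEach would not preserve encounter order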
As aioobe said, there isn't a convenient way to do this, but there are several inconvenient ways. :-)
Here's another spliterator-based approach. Unlike Holger's, which wraps another spliterator, this one does the I/O itself. This gives greater control over things like ordering, but it also means that it has to deal with IOException and close handling. I also threw in a LongPredicate parameter that lets you choose which lines get passed through.
static class LineSpliterator extends Spliterators.AbstractSpliterator<String>
implements AutoCloseable {
final BufferedReader br;
final LongPredicate pred;
long count = 0L;
public LineSpliterator(Path path, LongPredicate pred) throws IOException {
super(Long.MAX_VALUE, Spliterator.ORDERED);
br = Files.newBufferedReader(path);
this.pred = pred;
}
@Override
public boolean tryAdvance(Consumer<? super String> action) {
try {
String s;
while ((s = br.readLine()) != null) {
if (pred.test(++count)) {
action.accept(s);
return true;
}
}
return false;
} catch (IOException ioe) {
throw new UncheckedIOException(ioe);
}
}
@Override
public void close() {
try {
br.close();
} catch (IOException ioe) {
throw new UncheckedIOException(ioe);
}
}
public static Stream<String> lines(Path path, LongPredicate pred) throws IOException {
LineSpliterator ls = new LineSpliterator(path, pred);
return StreamSupport.stream(ls, false)
.onClose(() -> ls.close());
}
}
You'd use it within a try-with-resources to ensure that the file is closed, even if an exception occurs:
static void printOddLines() throws IOException {
try (Stream<String> lines = LineSpliterator.lines(PATH, x -> (x & 1L) == 1L)) {
lines.forEach(System.out::println);
}
}
You can do this with a custom spliterator:
public class EvenOdd {
public static final class EvenSpliterator<T> implements Spliterator<T> {
private final Spliterator<T> underlying;
boolean even;
public EvenSpliterator(Spliterator<T> underlying, boolean even) {
this.underlying = underlying;
this.even = even;
}
@Override
public boolean tryAdvance(Consumer<? super T> action) {
if (even) {
even = false;
return underlying.tryAdvance(action);
}
if (!underlying.tryAdvance(t -> {})) {
return false;
}
return underlying.tryAdvance(action);
}
@Override
public Spliterator<T> trySplit() {
if (!hasCharacteristics(SUBSIZED)) {
return null;
}
final Spliterator<T> newUnderlying = underlying.trySplit();
if (newUnderlying == null) {
return null;
}
final boolean oldEven = even;
if ((newUnderlying.estimateSize() & 1) == 1) {
even = !even;
}
return new EvenSpliterator<>(newUnderlying, oldEven);
}
@Override
public long estimateSize() {
return underlying.estimateSize()>>1;
}
@Override
public int characteristics() {
return underlying.characteristics();
}
}
public static void main(String[] args) {
final EvenSpliterator<Integer> spliterator = new EvenSpliterator<>(IntStream.range(1, 100000).parallel().mapToObj(Integer::valueOf).spliterator(), false);
final List<Integer> result = StreamSupport.stream(spliterator, true).parallel().collect(Collectors.toList());
final List<Integer> expected = IntStream.range(1, 100000 / 2).mapToObj(i -> i * 2).collect(Collectors.toList());
if (result.equals(expected)) {
System.out.println("Yay! Expected result.");
}
}
}
Following @aioobe's algorithm, here's another spliterator-based approach, as proposed by @Holger but more concise, even if less effective.
public static <T> Stream<T> filterOdd(Stream<T> src) {
Spliterator<T> iter = src.spliterator();
AbstractSpliterator<T> res = new AbstractSpliterator<T>(Long.MAX_VALUE, Spliterator.ORDERED)
{
@Override
public boolean tryAdvance(Consumer<? super T> action) {
iter.tryAdvance(item -> {}); // discard
return iter.tryAdvance(action); // use
}
};
return StreamSupport.stream(res, false);
}
Then you can use it like
Stream<DomainObject> res = filterOdd(Files.lines(src))
.map(line -> toDomainObject(line));