Try monad in Java 8

Is there built-in support for a monad that deals with exception handling? Something similar to Scala's Try. I am asking because I don't like unchecked exceptions.

There are at least two generally available (e.g. on Maven Central): Vavr and Cyclops both have Try implementations, and each takes a slightly different approach.
Vavr's Try follows Scala's Try very closely. It will catch all 'non-fatal' exceptions thrown during the execution of its combinators.
Cyclops Try will only catch explicitly configured exceptions (although you can configure it to catch everything), and by default it only catches during the initial population method. The reasoning behind this is so that Try behaves in a somewhat similar way to Optional: Optional doesn't encapsulate unexpected null values (i.e. bugs), just places where we reasonably expect to have no value.
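For a flavour of the Vavr style, here is a minimal sketch (assuming io.vavr.control.Try from a current Vavr release):
import io.vavr.control.Try;

public class VavrTryDemo {
    public static void main(String[] args) {
        // Any non-fatal exception thrown inside of() or a combinator becomes a Failure.
        int result = Try.of(() -> Integer.parseInt("not a number"))
                        .map(n -> n * 2)                          // skipped on failure
                        .recover(NumberFormatException.class, 0)  // supply a fallback value
                        .get();
        System.out.println(result); // prints 0
    }
}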
Here is an example of Try with resources from Cyclops:
Try t2 = Try.catchExceptions(FileNotFoundException.class, IOException.class)
            .init(() -> PowerTuples.tuple(new BufferedReader(new FileReader("file.txt")), new FileReader("hello")))
            .tryWithResources(this::read2);
And another example 'lifting' an existing method (that might divide by zero) to support error handling.
import static org.hamcrest.Matchers.equalTo;
import static org.junit.Assert.*;
import static com.aol.cyclops.lambda.api.AsAnyM.anyM;
import lombok.val;

val divide = Monads.liftM2(this::divide);
AnyM<Integer> result = divide.apply(anyM(Try.of(2, ArithmeticException.class)), anyM(Try.of(0)));
assertThat(result.<Try<Integer, ArithmeticException>>unwrapMonad().isFailure(), equalTo(true));

private Integer divide(Integer a, Integer b) {
    return a / b;
}

The "better-java-monads" project on GitHub has a Try monad for Java 8 here.

You can do what you want by (ab)using CompletableFuture. Please don't do this in any sort of production code.
CompletableFuture<Scanner> sc = CompletableFuture.completedFuture(new Scanner(System.in));
CompletableFuture<Integer> dividend = sc.thenApply(Scanner::nextInt);
CompletableFuture<Integer> divisor = sc.thenApply(Scanner::nextInt);
CompletableFuture<Integer> result = dividend.thenCombine(divisor, (a, b) -> a / b);
result.whenComplete((val, ex) -> {
    if (ex == null) {
        System.out.printf("%s/%s = %s%n", dividend.join(), divisor.join(), val);
    } else {
        System.out.println("Something went wrong");
    }
});

Firstly, let me apologise for answering instead of commenting - apparently I need 50 reputation to comment ...
@ncaralicea your implementation is similar to my own, but the problem I had was how to reconcile the try ... catch in bind() with the identity laws. Specifically, return x >>= f should be equivalent to f x, but when bind() catches the exception the two differ: f x throws, while the bound computation yields a failure value.
Moreover, the ITransformer appears to be a -> b rather than a -> M b. My current version of bind(), unsatisfactory though I find it, is
public <R> MException<R> bind(final Function<T, MException<R>> f) {
    Validate.notNull(f);
    if (value.isRight()) {
        try {
            return f.apply(value.right().get());
        } catch (final Exception ex) {
            return new MException<>(Either.<Exception, R>left(ex));
        }
    } else {
        return new MException<>(Either.<Exception, R>left(value.left().get()));
    }
}
where value is an
Either<? extends Exception,T>
The problem with the identity law is that it requires the function f to catch exceptions which defeats the whole purpose of the exercise.
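To make the broken law concrete, here is a sketch using the MException type above (the Either construction mirrors the one used in bind(); the exact factory methods depend on your Either library):
// A function that throws instead of returning a failed MException:
Function<Integer, MException<Integer>> f = x -> {
    throw new IllegalStateException("boom");
};

// Left identity would require these two expressions to behave the same:
MException<Integer> viaBind = new MException<>(Either.<Exception, Integer>right(42)).bind(f);
// viaBind is a Failure wrapping the IllegalStateException, because bind() caught it,
// whereas f.apply(42) throws, so 'return x >>= f' and 'f x' are observably different.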
What I think you might actually want is the Functor and not the Monad. That is the fmap : (a->b) -> f a -> f b function.
If you write
@Override
public <R> MException<R> fmap(final Function<T, R> fn) {
    Validate.notNull(fn);
    if (value.isRight()) {
        try {
            return new MException<>(Either.<Exception, R>right(fn.apply(value.right().get())));
        } catch (final Exception ex) {
            return new MException<>(Either.<Exception, R>left(ex));
        }
    } else {
        return new MException<>(Either.<Exception, R>left(value.left().get()));
    }
}
then you don't need to write explicit exception handling code, implement new interfaces or mess with the Monad laws.
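A usage sketch of fmap with the same type (again, the Either factory call is assumed):
MException<Integer> ten = new MException<>(Either.<Exception, Integer>right(10));
MException<Integer> failed = ten.fmap(x -> x / 0); // the ArithmeticException is captured as a failed MException, not thrown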

Here is an implementation that could be used as a model.
Further information can be found here:
Java with Try, Failure, and Success based computations
You can basically do something like this:
public class Test {
    public static void main(String[] args) {
        ITransformer<String> t0 = new ITransformer<String>() {
            @Override
            public String transform(String t) {
                //return t + t;
                throw new RuntimeException("some exception 1");
            }
        };
        ITransformer<String> t1 = new ITransformer<String>() {
            @Override
            public String transform(String t) {
                return "<" + t + ">";
                //throw new RuntimeException("some exception 2");
            }
        };
        ComputationalResult<String> res = ComputationalTry.initComputation("1").bind(t0).bind(t1).getResult();
        System.out.println(res);
        if (res.isSuccess()) {
            System.out.println(res.getResult());
        } else {
            System.out.println(res.getError());
        }
    }
}
And here is the code:
public class ComputationalTry<T> {
    private final ComputationalResult<T> result;

    public static <P> ComputationalTry<P> initComputation(P argument) {
        return new ComputationalTry<P>(argument);
    }

    private ComputationalTry(T param) {
        this.result = new ComputationalSuccess<T>(param);
    }

    private ComputationalTry(ComputationalResult<T> result) {
        this.result = result;
    }

    private ComputationalResult<T> applyTransformer(T t, ITransformer<T> transformer) {
        try {
            return new ComputationalSuccess<T>(transformer.transform(t));
        } catch (Exception throwable) {
            return new ComputationalFailure<T, Exception>(throwable);
        }
    }

    public ComputationalTry<T> bind(ITransformer<T> transformer) {
        if (result.isSuccess()) {
            ComputationalResult<T> resultAfterTransf = this.applyTransformer(result.getResult(), transformer);
            return new ComputationalTry<T>(resultAfterTransf);
        } else {
            return new ComputationalTry<T>(result);
        }
    }

    public ComputationalResult<T> getResult() {
        return this.result;
    }
}

public class ComputationalFailure<T, E extends Throwable> implements ComputationalResult<T> {
    private final E exception;

    public ComputationalFailure(E exception) {
        this.exception = exception;
    }

    @Override
    public T getResult() {
        return null;
    }

    @Override
    public E getError() {
        return exception;
    }

    @Override
    public boolean isSuccess() {
        return false;
    }
}

public class ComputationalSuccess<T> implements ComputationalResult<T> {
    private final T result;

    public ComputationalSuccess(T result) {
        this.result = result;
    }

    @Override
    public T getResult() {
        return result;
    }

    @Override
    public Throwable getError() {
        return null;
    }

    @Override
    public boolean isSuccess() {
        return true;
    }
}

public interface ComputationalResult<T> {
    T getResult();
    Throwable getError();
    boolean isSuccess();
}

public interface ITransformer<T> {
    T transform(T t);
}
I hope this sheds some light.

@Misha is onto something. Obviously you wouldn't do this exact thing in real code, but CompletableFuture provides Haskell-style monads like this:
return maps to CompletableFuture.completedFuture
>>= maps to thenCompose
So you could rewrite @Misha's example like this:
CompletableFuture.completedFuture(new Scanner(System.in)).thenCompose(scanner ->
    CompletableFuture.completedFuture(scanner.nextInt()).thenCompose(dividend ->
        CompletableFuture.completedFuture(scanner.nextInt()).thenCompose(divisor ->
            CompletableFuture.completedFuture(dividend / divisor).thenCompose(val -> {
                System.out.printf("%s/%s = %s%n", dividend, divisor, val);
                return null;
            }))));
which maps to the Haskell-ish:
(return (newScanner SystemIn)) >>= \scanner ->
  (return (nextInt scanner)) >>= \dividend ->
  (return (nextInt scanner)) >>= \divisor ->
  (return (dividend / divisor)) >>= \val -> do
    SystemOutPrintf "%s/%s = %s%n" dividend divisor val
    return Null
or with do syntax
do
  scanner <- return (newScanner SystemIn)
  dividend <- return (nextInt scanner)
  divisor <- return (nextInt scanner)
  val <- return (dividend / divisor)
  do
    SystemOutPrintf "%s/%s = %s%n" dividend divisor val
    return Null
Implementations of fmap and join
I got a little carried away. These are the standard fmap and join implemented in terms of CompletableFuture:
<T, U> CompletableFuture<U> fmap(Function<T, U> f, CompletableFuture<T> m) {
    return m.thenCompose(x -> CompletableFuture.completedFuture(f.apply(x)));
}

<T> CompletableFuture<T> join(CompletableFuture<CompletableFuture<T>> n) {
    return n.thenCompose(x -> x);
}
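A quick usage sketch of those helpers (assuming both are in scope):
CompletableFuture<Integer> two = CompletableFuture.completedFuture(2);
CompletableFuture<Integer> three = fmap(x -> x + 1, two); // apply a plain function inside the monad
CompletableFuture<CompletableFuture<Integer>> nested =
        fmap(x -> CompletableFuture.completedFuture(x * 10), three);
CompletableFuture<Integer> flat = join(nested); // completes with 30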

Related

Spring Error while using filter and wrapper

I'm using a filter to check user rights. To compare a session value with a request parameter I have to read the request body, so I applied a re-readable request wrapper. However, the following error message came out.
List<Map<String,Object>> loginInfo = (List<Map<String,Object>>) session.getAttribute("loginSession");
if (loginInfo.get(0).get("user_type").equals("1") || loginInfo.get(0).get("user_type").equals("2"))
{
chain.doFilter(req, res);
}
else
{
RereadableRequestWrapper wrapperRequest = new RereadableRequestWrapper(request);
String requestBody = IOUtils.toString(wrapperRequest.getInputStream(), "UTF-8");
Enumeration<String> requestNames = request.getParameterNames();
if (requestBody == null) {
}
Map<String,Object> param_map = new ObjectMapper().readValue(requestBody, HashMap.class);
String userId_param = String.valueOf(param_map.get("customer_id"));
System.out.println(userId_param);
if (userId_param == null || userId_param.isEmpty()) {
logger.debug("error, customer_id error");
}
if (!loginInfo.get(0).get("customer_id").equals(userId_param))
{
logger.debug("error, customer_id error");
}
chain.doFilter(wrapperRequest, res);
}
Here is my wrapper code:
private boolean parametersParsed = false;
private final Charset encoding;
private final byte[] rawData;
private final Map<String, ArrayList<String>> parameters = new LinkedHashMap<String, ArrayList<String>>();
ByteChunk tmpName = new ByteChunk();
ByteChunk tmpValue = new ByteChunk();
private class ByteChunk {
private byte[] buff;
private int start = 0;
private int end;
public void setByteChunk(byte[] b, int off, int len) {
buff = b;
start = off;
end = start + len;
}
public byte[] getBytes() {
return buff;
}
public int getStart() {
return start;
}
public int getEnd() {
return end;
}
public void recycle() {
buff = null;
start = 0;
end = 0;
}
}
public RereadableRequestWrapper(HttpServletRequest request) throws IOException {
super(request);
String characterEncoding = request.getCharacterEncoding();
if (StringUtils.isBlank(characterEncoding)) {
characterEncoding = StandardCharsets.UTF_8.name();
}
this.encoding = Charset.forName(characterEncoding);
// Convert InputStream data to byte array and store it to this wrapper instance.
try {
InputStream inputStream = request.getInputStream();
this.rawData = IOUtils.toByteArray(inputStream);
} catch (IOException e) {
throw e;
}
}
@Override
public ServletInputStream getInputStream() throws IOException {
final ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(this.rawData);
ServletInputStream servletInputStream = new ServletInputStream() {
public int read() throws IOException {
return byteArrayInputStream.read();
}
@Override
public boolean isFinished() {
// TODO Auto-generated method stub
return false;
}
@Override
public boolean isReady() {
// TODO Auto-generated method stub
return false;
}
@Override
public void setReadListener(ReadListener listener) {
// TODO Auto-generated method stub
}
};
return servletInputStream;
}
@Override
public BufferedReader getReader() throws IOException {
return new BufferedReader(new InputStreamReader(this.getInputStream(), this.encoding));
}
@Override
public ServletRequest getRequest() {
return super.getRequest();
}
@Override
public String getParameter(String name) {
if (!parametersParsed) {
parseParameters();
}
ArrayList<String> values = this.parameters.get(name);
if (values == null || values.size() == 0)
return null;
return values.get(0);
}
public HashMap<String, String[]> getParameters() {
if (!parametersParsed) {
parseParameters();
}
HashMap<String, String[]> map = new HashMap<String, String[]>(this.parameters.size() * 2);
for (String name : this.parameters.keySet()) {
ArrayList<String> values = this.parameters.get(name);
map.put(name, values.toArray(new String[values.size()]));
}
return map;
}
#SuppressWarnings("rawtypes")
#Override
public Map getParameterMap() {
return getParameters();
}
#SuppressWarnings("rawtypes")
#Override
public Enumeration getParameterNames() {
return new Enumeration<String>() {
#SuppressWarnings("unchecked")
private String[] arr = (String[])(getParameterMap().keySet().toArray(new String[0]));
private int index = 0;
@Override
public boolean hasMoreElements() {
return index < arr.length;
}
@Override
public String nextElement() {
return arr[index++];
}
};
}
@Override
public String[] getParameterValues(String name) {
if (!parametersParsed) {
parseParameters();
}
ArrayList<String> values = this.parameters.get(name);
if (values == null) {
return null;
}
return values.toArray(new String[values.size()]);
}
private void parseParameters() {
parametersParsed = true;
if (!("application/x-www-form-urlencoded".equalsIgnoreCase(super.getContentType()))) {
return;
}
int pos = 0;
int end = this.rawData.length;
while (pos < end) {
int nameStart = pos;
int nameEnd = -1;
int valueStart = -1;
int valueEnd = -1;
boolean parsingName = true;
boolean decodeName = false;
boolean decodeValue = false;
boolean parameterComplete = false;
do {
switch (this.rawData[pos]) {
case '=':
if (parsingName) {
// Name finished. Value starts from next character
nameEnd = pos;
parsingName = false;
valueStart = ++pos;
} else {
// Equals character in value
pos++;
}
break;
case '&':
if (parsingName) {
// Name finished. No value.
nameEnd = pos;
} else {
// Value finished
valueEnd = pos;
}
parameterComplete = true;
pos++;
break;
case '%':
case '+':
// Decoding required
if (parsingName) {
decodeName = true;
} else {
decodeValue = true;
}
pos++;
break;
default:
pos++;
break;
}
} while (!parameterComplete && pos < end);
if (pos == end) {
if (nameEnd == -1) {
nameEnd = pos;
} else if (valueStart > -1 && valueEnd == -1) {
valueEnd = pos;
}
}
if (nameEnd <= nameStart) {
continue;
// ignore invalid chunk
}
tmpName.setByteChunk(this.rawData, nameStart, nameEnd - nameStart);
if (valueStart >= 0) {
tmpValue.setByteChunk(this.rawData, valueStart, valueEnd - valueStart);
} else {
tmpValue.setByteChunk(this.rawData, 0, 0);
}
try {
String name;
String value;
if (decodeName) {
name = new String(URLCodec.decodeUrl(Arrays.copyOfRange(tmpName.getBytes(), tmpName.getStart(), tmpName.getEnd())), this.encoding);
} else {
name = new String(tmpName.getBytes(), tmpName.getStart(), tmpName.getEnd() - tmpName.getStart(), this.encoding);
}
if (valueStart >= 0) {
if (decodeValue) {
value = new String(URLCodec.decodeUrl(Arrays.copyOfRange(tmpValue.getBytes(), tmpValue.getStart(), tmpValue.getEnd())), this.encoding);
} else {
value = new String(tmpValue.getBytes(), tmpValue.getStart(), tmpValue.getEnd() - tmpValue.getStart(), this.encoding);
}
} else {
value = "";
}
if (StringUtils.isNotBlank(name)) {
ArrayList<String> values = this.parameters.get(name);
if (values == null) {
values = new ArrayList<String>(1);
this.parameters.put(name, values);
}
if (StringUtils.isNotBlank(value)) {
values.add(value);
}
}
} catch (DecoderException e) {
// ignore invalid chunk
}
tmpName.recycle();
tmpValue.recycle();
}
}
And the error message is: com.fasterxml.jackson.databind.JsonMappingException: No content to map due to end-of-input.
I don't know why this problem happened...

How to skip even lines of a Stream<String> obtained from the Files.lines

In this case just odd lines have meaningful data and there is no character that uniquely identifies those lines. My intention is to get something equivalent to the following example:
Stream<DomainObject> res = Files.lines(src)
    .filter(line -> isOddLine())
    .map(line -> toDomainObject(line));
Is there any “clean” way to do it, without sharing global state?
No, there's no way to do this conveniently with the API. (Basically the same reason as to why there is no easy way of having a zipWithIndex, see Is there a concise way to iterate over a stream with indices in Java 8?).
You can still use Stream, but go for an iterator:
Iterator<String> iter = Files.lines(src).iterator();
while (iter.hasNext()) {
iter.next(); // discard
toDomainObject(iter.next()); // use
}
(You might want to use try-with-resource on that stream though.)
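For example, a sketch of the same loop with the stream closed via try-with-resources (plus a guard for a trailing unpaired line):
try (Stream<String> stream = Files.lines(src)) {
    Iterator<String> iter = stream.iterator();
    while (iter.hasNext()) {
        iter.next(); // discard
        if (iter.hasNext()) {
            toDomainObject(iter.next()); // use
        }
    }
}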
A clean way is to go one level deeper and implement a Spliterator. On this level you can control the iteration over the stream elements and simply iterate over two items whenever the downstream requests one item:
public class OddLines<T> extends Spliterators.AbstractSpliterator<T>
        implements Consumer<T> {

    public static <T> Stream<T> oddLines(Stream<T> source) {
        return StreamSupport.stream(new OddLines<>(source.spliterator()), false);
    }

    private static long odd(long l) { return l == Long.MAX_VALUE ? l : (l + 1) / 2; }

    Spliterator<T> originalLines;

    OddLines(Spliterator<T> source) {
        super(odd(source.estimateSize()), source.characteristics());
        originalLines = source;
    }

    @Override
    public boolean tryAdvance(Consumer<? super T> action) {
        if (originalLines == null || !originalLines.tryAdvance(action))
            return false;
        if (!originalLines.tryAdvance(this)) originalLines = null;
        return true;
    }

    @Override
    public void accept(T t) {}
}
Then you can use it like
Stream<DomainObject> res = OddLines.oddLines(Files.lines(src))
    .map(line -> toDomainObject(line));
This solution has no side effects and retains most advantages of the Stream API, like lazy evaluation. However, it should be clear that it doesn't have useful semantics for unordered stream processing (beware of subtle aspects like using forEachOrdered rather than forEach when performing a terminal action on all elements), and while it supports parallel processing in principle, it's unlikely to be very efficient.
As aioobe said, there isn't a convenient way to do this, but there are several inconvenient ways. :-)
Here's another spliterator-based approach. Unlike Holger's, which wraps another spliterator, this one does the I/O itself. This gives greater control over things like ordering, but it also means that it has to deal with IOException and close handling. I also threw in a Predicate parameter that lets you get a crack at which lines get passed through.
static class LineSpliterator extends Spliterators.AbstractSpliterator<String>
        implements AutoCloseable {
    final BufferedReader br;
    final LongPredicate pred;
    long count = 0L;

    public LineSpliterator(Path path, LongPredicate pred) throws IOException {
        super(Long.MAX_VALUE, Spliterator.ORDERED);
        br = Files.newBufferedReader(path);
        this.pred = pred;
    }

    @Override
    public boolean tryAdvance(Consumer<? super String> action) {
        try {
            String s;
            while ((s = br.readLine()) != null) {
                if (pred.test(++count)) {
                    action.accept(s);
                    return true;
                }
            }
            return false;
        } catch (IOException ioe) {
            throw new UncheckedIOException(ioe);
        }
    }

    @Override
    public void close() {
        try {
            br.close();
        } catch (IOException ioe) {
            throw new UncheckedIOException(ioe);
        }
    }

    public static Stream<String> lines(Path path, LongPredicate pred) throws IOException {
        LineSpliterator ls = new LineSpliterator(path, pred);
        return StreamSupport.stream(ls, false).onClose(() -> ls.close());
    }
}
You'd use it within a try-with-resources to ensure that the file is closed, even if an exception occurs:
static void printOddLines() throws IOException {
    try (Stream<String> lines = LineSpliterator.lines(PATH, x -> (x & 1L) == 1L)) {
        lines.forEach(System.out::println);
    }
}
You can do this with a custom spliterator:
public class EvenOdd {
    public static final class EvenSpliterator<T> implements Spliterator<T> {
        private final Spliterator<T> underlying;
        boolean even;

        public EvenSpliterator(Spliterator<T> underlying, boolean even) {
            this.underlying = underlying;
            this.even = even;
        }

        @Override
        public boolean tryAdvance(Consumer<? super T> action) {
            if (even) {
                even = false;
                return underlying.tryAdvance(action);
            }
            if (!underlying.tryAdvance(t -> {})) {
                return false;
            }
            return underlying.tryAdvance(action);
        }

        @Override
        public Spliterator<T> trySplit() {
            if (!hasCharacteristics(SUBSIZED)) {
                return null;
            }
            final Spliterator<T> newUnderlying = underlying.trySplit();
            if (newUnderlying == null) {
                return null;
            }
            final boolean oldEven = even;
            if ((newUnderlying.estimateSize() & 1) == 1) {
                even = !even;
            }
            return new EvenSpliterator<>(newUnderlying, oldEven);
        }

        @Override
        public long estimateSize() {
            return underlying.estimateSize() >> 1;
        }

        @Override
        public int characteristics() {
            return underlying.characteristics();
        }
    }

    public static void main(String[] args) {
        final EvenSpliterator<Integer> spliterator = new EvenSpliterator<>(
                IntStream.range(1, 100000).parallel().mapToObj(Integer::valueOf).spliterator(), false);
        final List<Integer> result = StreamSupport.stream(spliterator, true).parallel().collect(Collectors.toList());
        final List<Integer> expected = IntStream.range(1, 100000 / 2).mapToObj(i -> i * 2).collect(Collectors.toList());
        if (result.equals(expected)) {
            System.out.println("Yay! Expected result.");
        }
    }
}
Following @aioobe's algorithm, here's another spliterator-based approach, as proposed by @Holger, but more concise, even if less efficient.
public static <T> Stream<T> filterOdd(Stream<T> src) {
    Spliterator<T> iter = src.spliterator();
    AbstractSpliterator<T> res = new AbstractSpliterator<T>(Long.MAX_VALUE, Spliterator.ORDERED) {
        @Override
        public boolean tryAdvance(Consumer<? super T> action) {
            iter.tryAdvance(item -> {}); // discard
            return iter.tryAdvance(action); // use
        }
    };
    return StreamSupport.stream(res, false);
}
Then you can use it like
Stream<DomainObject> res = filterOdd(Files.lines(src))
    .map(line -> toDomainObject(line));

Queue data structure requiring K accesses before removal

I need a specialized queue-like data structure. It can be used by multiple consumers, but each item in the queue must be removed after k consumers have read it.
Is there a production-ready implementation? Or should I implement a queue with a read counter in each item and handle item removal myself?
Thanks in advance.
I think this is what you are looking for. Derived from the source code for BlockingQueue. Caveat emptor, not tested.
I tried to find a way to wrap Queue, but Queue doesn't expose its concurrency members, so you can't get the right semantics.
public class CountingQueue<E> {
private class Entry {
Entry(int count, E element) {
this.count = count;
this.element = element;
}
int count;
E element;
}
public CountingQueue(int capacity) {
if (capacity <= 0) {
throw new IllegalArgumentException();
}
this.items = new Object[capacity];
this.lock = new ReentrantLock(false);
this.condition = this.lock.newCondition();
}
private final ReentrantLock lock;
private final Condition condition;
private final Object[] items;
private int takeIndex;
private int putIndex;
private int count;
final int inc(int i) {
return (++i == items.length) ? 0 : i;
}
final int dec(int i) {
return ((i == 0) ? items.length : i) - 1;
}
private static void checkNotNull(Object v) {
if (v == null)
throw new NullPointerException();
}
/**
* Inserts element at current put position, advances, and signals.
* Call only when holding lock.
*/
private void insert(int readCount, E x) {
items[putIndex] = new Entry(readCount, x);
putIndex = inc(putIndex);
if (this.count++ == 0) {
// empty to non-empty
condition.signal();
}
}
private E extract() {
Entry entry = (Entry)items[takeIndex];
if (--entry.count <= 0) {
items[takeIndex] = null;
takeIndex = inc(takeIndex);
if (count-- == items.length) {
// full to not-full
condition.signal();
}
}
return entry.element;
}
private boolean waitNotEmpty(long timeout, TimeUnit unit) throws InterruptedException {
long nanos = unit.toNanos(timeout);
while (count == 0) {
if (nanos <= 0) {
return false;
}
nanos = this.condition.awaitNanos(nanos);
}
return true;
}
private boolean waitNotFull(long timeout, TimeUnit unit) throws InterruptedException {
long nanos = unit.toNanos(timeout);
while (count == items.length) {
if (nanos <= 0)
return false;
nanos = condition.awaitNanos(nanos);
}
return true;
}
public boolean put(int readCount, E e) {
checkNotNull(e);
final ReentrantLock localLock = this.lock;
localLock.lock();
try {
if (this.count == items.length) {
// buffer is full
return false;
} else {
insert(readCount, e);
return true;
}
} finally {
localLock.unlock();
}
}
public boolean put(int readCount, E e, long timeout, TimeUnit unit)
throws InterruptedException {
checkNotNull(e);
final ReentrantLock localLock = this.lock;
localLock.lockInterruptibly();
try {
if (!waitNotFull(timeout, unit)) {
return false;
}
insert(readCount, e);
return true;
} finally {
localLock.unlock();
}
}
public E get() {
final ReentrantLock localLock = this.lock;
localLock.lock();
try {
return (count == 0) ? null : extract();
} finally {
localLock.unlock();
}
}
public E get(long timeout, TimeUnit unit) throws InterruptedException {
final ReentrantLock localLock = this.lock;
localLock.lockInterruptibly();
try {
if (waitNotEmpty(timeout, unit)) {
return extract();
} else {
return null;
}
} finally {
localLock.unlock();
}
}
public int size() {
final ReentrantLock localLock = this.lock;
localLock.lock();
try {
return count;
} finally {
localLock.unlock();
}
}
public boolean isEmpty() {
final ReentrantLock localLock = this.lock;
localLock.lock();
try {
return count == 0;
} finally {
localLock.unlock();
}
}
public int remainingCapacity() {
final ReentrantLock lock = this.lock;
lock.lock();
try {
return items.length - count;
} finally {
lock.unlock();
}
}
public boolean isFull() {
final ReentrantLock localLock = this.lock;
localLock.lock();
try {
return items.length - count == 0;
} finally {
localLock.unlock();
}
}
public void clear() {
final ReentrantLock localLock = this.lock;
localLock.lock();
try {
for (int i = takeIndex, k = count; k > 0; i = inc(i), k--)
items[i] = null;
count = 0;
putIndex = 0;
takeIndex = 0;
condition.signalAll();
} finally {
localLock.unlock();
}
}
}
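A usage sketch of the class above: each item is put with its required read count and stays in the buffer until that many get() calls have returned it.
CountingQueue<String> queue = new CountingQueue<>(16);
queue.put(3, "task-1");  // must be read by 3 consumers before removal
String a = queue.get();  // "task-1", still buffered
String b = queue.get();  // "task-1", still buffered
String c = queue.get();  // "task-1", slot is now freed
String d = queue.get();  // null: the queue is empty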
A memory-efficient way that retains the info you need: each queue entry becomes a Set<ConsumerID>, so you ensure the k reads come from k distinct consumers; your app logic checks whether set.size() == k and removes the entry from the queue in that case.
In terms of storage you will have trade-offs between Set implementations based on:
the size and type of the ConsumerID
the required retrieval speed
E.g. if k is very small and your queue retrieval logic has access to a Map<ID, ConsumerID>, then you could simply use an int, or even a short or byte depending on the number of distinct ConsumerIDs, and possibly store them in an array. That is slower than accessing a set, since the array is traversed linearly, but for small k it may be reasonable. A sketch of this idea follows.
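A minimal sketch of that per-entry read-set idea (KReadEntry and its members are illustrative, not from any library):
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical entry wrapper: the surrounding queue removes an item once
// k distinct consumers have read it.
class KReadEntry<E> {
    final E element;
    private final Set<String> readers = ConcurrentHashMap.newKeySet(); // consumer IDs seen so far

    KReadEntry(E element) {
        this.element = element;
    }

    // Records a read; returns true once k distinct consumers have read the
    // item, i.e. when the caller should remove it from the queue.
    boolean readBy(String consumerId, int k) {
        readers.add(consumerId);
        return readers.size() >= k;
    }
}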

Storm Trident 'Average' aggregator

I am a newbie to Trident and I'm looking to create an 'Average' aggregator similar to Sum(), but for averages. The following does not work:
public class Average implements CombinerAggregator<Long> {
    public Long init(TridentTuple tuple) {
        return (Long) tuple.getValue(0);
    }

    public Long combine(Long val1, Long val2) {
        return val1 + val2 / 2;
    }

    public Long zero() {
        return 0L;
    }
}
It may not be exactly syntactically correct, but that's the idea. Please help if you can. Given 2 tuples with values [2,4,1] and [2,2,5] and fields 'a','b' and 'c' and doing an average on field 'b' should return '3'. I'm not entirely sure how init() and zero() work.
Thank you so much for your help in advance.
Eli
public class Average implements CombinerAggregator<Number> {
    int count = 0;
    double sum = 0;

    @Override
    public Double init(final TridentTuple tuple) {
        this.count++;
        if (!(tuple.getValue(0) instanceof Double)) {
            double d = ((Number) tuple.getValue(0)).doubleValue();
            this.sum += d;
            return d;
        }
        this.sum += (Double) tuple.getValue(0);
        return (Double) tuple.getValue(0);
    }

    @Override
    public Double combine(final Number val1, final Number val2) {
        return this.sum / this.count;
    }

    @Override
    public Double zero() {
        this.sum = 0;
        this.count = 0;
        return 0D;
    }
}
I am a complete newbie when it comes to Trident as well, so I'm not entirely sure if the following will work. But it might:
public class AvgAgg extends BaseAggregator<AvgAgg.AvgState> {
    static class AvgState {
        long count = 0;
        long total = 0;

        double getAverage() {
            return (double) total / count;
        }
    }

    public AvgState init(Object batchId, TridentCollector collector) {
        return new AvgState();
    }

    public void aggregate(AvgState state, TridentTuple tuple, TridentCollector collector) {
        state.count++;
        state.total += ((Number) tuple.getValue(0)).longValue(); // accumulate the field value, not just the count
    }

    public void complete(AvgState state, TridentCollector collector) {
        collector.emit(new Values(state.getAverage()));
    }
}
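For reference, the usual way to express an average as a CombinerAggregator is to combine partial (sum, count) pairs and divide only at the end. A sketch (the State class is illustrative, and the import paths assume a pre-Apache Storm release; newer releases use org.apache.storm.trident.*):
import java.io.Serializable;
import storm.trident.operation.CombinerAggregator;
import storm.trident.tuple.TridentTuple;

public class Avg implements CombinerAggregator<Avg.State> {
    public static class State implements Serializable {
        final double sum;
        final long count;

        State(double sum, long count) {
            this.sum = sum;
            this.count = count;
        }

        public double average() {
            return count == 0 ? 0.0 : sum / count;
        }
    }

    @Override
    public State init(TridentTuple tuple) {
        // Each tuple contributes its value and a count of 1.
        return new State(((Number) tuple.getValue(0)).doubleValue(), 1);
    }

    @Override
    public State combine(State a, State b) {
        // Partial aggregates merge by adding sums and counts.
        return new State(a.sum + b.sum, a.count + b.count);
    }

    @Override
    public State zero() {
        return new State(0.0, 0);
    }
}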

Mybatis Custom Type handler: call FileInputStream.close() after query being executed

I am trying to implement a MyBatis custom type handler for File using FileInputStream.
Here is my code for setting the parameter:
@MappedJdbcTypes(JdbcType.LONGVARBINARY)
public class FileByteaHandler extends BaseTypeHandler<File> {
    @Override
    public void setNonNullParameter(PreparedStatement ps, int i, File file, JdbcType jdbcType) throws SQLException {
        try {
            FileInputStream fis = new FileInputStream(file);
            ps.setBinaryStream(i, fis, (int) file.length()); // use the parameter index supplied by MyBatis
        } catch (FileNotFoundException ex) {
            Logger.getLogger(FileByteaHandler.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}
My question is:
I cannot close this FileInputStream at the end of the method, otherwise MyBatis will not be able to read the data from it. In fact, I do not know where I can close the FileInputStream. Is there a way to call close() after the query has been executed in MyBatis?
Thanks in advance,
UPDATE
Thanks to Jarandinor for the help. Here is my code for this type handler; hopefully it can help someone:
@MappedJdbcTypes(JdbcType.LONGVARBINARY)
public class FileByteaHandler extends BaseTypeHandler<File> {

    @Override
    public void setNonNullParameter(PreparedStatement ps, int i, File file, JdbcType jdbcType) throws SQLException {
        try {
            AutoCloseFileInputStream fis = new AutoCloseFileInputStream(file);
            ps.setBinaryStream(i, fis, (int) file.length());
        } catch (FileNotFoundException ex) {
            Logger.getLogger(FileByteaHandler.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    @Override
    public File getNullableResult(ResultSet rs, String columnName) throws SQLException {
        File file = null;
        try (InputStream input = rs.getBinaryStream(columnName)) {
            file = getResult(rs, input);
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
        return file;
    }

    public File createFile() {
        File file = new File("e:/target-file"); // your temp file path
        return file;
    }

    private File getResult(ResultSet rs, InputStream input) throws SQLException {
        File file = createFile();
        try (OutputStream output = new FileOutputStream(file)) {
            int bufSize = 0x8000; // 32 KB copy buffer
            byte[] buf = new byte[bufSize];
            int s;
            while ((s = input.read(buf, 0, bufSize)) > 0) {
                output.write(buf, 0, s);
            }
            output.flush();
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
        return file;
    }

    @Override
    public File getNullableResult(ResultSet rs, int columnIndex) throws SQLException {
        File file = null;
        try (InputStream input = rs.getBinaryStream(columnIndex)) {
            file = getResult(rs, input);
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
        return file;
    }

    @Override
    public File getNullableResult(CallableStatement cs, int columnIndex) throws SQLException {
        throw new SQLException("getNullableResult(CallableStatement cs, int columnIndex) is called");
    }

    private class AutoCloseFileInputStream extends FileInputStream {
        public AutoCloseFileInputStream(File file) throws FileNotFoundException {
            super(file);
        }

        @Override
        public int read() throws IOException {
            int c = super.read();
            if (available() <= 0) {
                close();
            }
            return c;
        }

        @Override
        public int read(byte[] b) throws IOException {
            int c = super.read(b);
            if (available() <= 0) {
                close();
            }
            return c;
        }

        @Override
        public int read(byte[] b, int off, int len) throws IOException {
            int c = super.read(b, off, len);
            if (available() <= 0) {
                close();
            }
            return c;
        }
    }
}
A variant of AutoCloseFileInputStream that closes as soon as a read reports end-of-stream:
private class AutoCloseFileInputStream extends FileInputStream {
    public AutoCloseFileInputStream(File file) throws FileNotFoundException {
        super(file);
    }

    @Override
    public int read() throws IOException {
        int c = super.read();
        if (c == -1) {
            close();
        }
        return c;
    }

    @Override
    public int read(byte[] b) throws IOException {
        int c = super.read(b);
        if (c == -1) {
            close();
        }
        return c;
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        int c = super.read(b, off, len);
        if (c == -1) {
            close();
        }
        return c;
    }
}
I don't know a good way to close the stream after query execution.
Method 1: read the file into a byte[] (note: since JDK 7 you can use Files.readAllBytes(Paths.get(file.getPath()))) and use:
ps.setBytes(i, bytes);
Method 2: create your own class inherited from FileInputStream and override the public int read() throws IOException method so that the stream is closed when the end of the file is reached:
@Override
public int read() throws IOException {
    int c = super.read();
    if (c == -1) {
        super.close();
    }
    return c;
}
Maybe you should override public int read(byte[] b) throws IOException too; it depends on the JDBC implementation.
Method 3: change your FileByteaHandler:
1) add a list of FileInputStream fields;
2) put each opened InputStream into that list in setNonNullParameter;
3) add a closeStreams() method that closes and removes all the InputStreams from the list.
Invoke this method after you have invoked your mapper method:
session.getConfiguration().getTypeHandlerRegistry().getMappingTypeHandler(FileByteaHandler.class).closeStreams();
Or use the MyBatis plugin system to run the above command. A sketch of this approach is below.
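For illustration, a minimal sketch of method 3 (openedStreams and closeStreams are illustrative names; the result-reading methods are stubbed out):
import java.io.*;
import java.sql.*;
import java.util.*;
import org.apache.ibatis.type.*;

@MappedJdbcTypes(JdbcType.LONGVARBINARY)
public class FileByteaHandler extends BaseTypeHandler<File> {
    // Streams opened while setting parameters; closed after the statement has run.
    private final List<FileInputStream> openedStreams = new ArrayList<>();

    @Override
    public void setNonNullParameter(PreparedStatement ps, int i, File file, JdbcType jdbcType) throws SQLException {
        try {
            FileInputStream fis = new FileInputStream(file);
            openedStreams.add(fis); // remembered so closeStreams() can release it
            ps.setBinaryStream(i, fis, (int) file.length());
        } catch (FileNotFoundException ex) {
            throw new SQLException(ex);
        }
    }

    // Call after the mapper method has executed.
    public void closeStreams() {
        for (FileInputStream fis : openedStreams) {
            try {
                fis.close();
            } catch (IOException ignored) {
            }
        }
        openedStreams.clear();
    }

    @Override
    public File getNullableResult(ResultSet rs, String columnName) throws SQLException { return null; }

    @Override
    public File getNullableResult(ResultSet rs, int columnIndex) throws SQLException { return null; }

    @Override
    public File getNullableResult(CallableStatement cs, int columnIndex) throws SQLException { return null; }
}
Note that one type handler instance can be shared across threads, so a real implementation would likely need a thread-local or per-session list rather than a plain field.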
