Dynamic JavaFX buttons action [duplicate] - events

I want to be able to do something like this:
for (int i = 0; i < 10; i++) {
    // if any button in the array is pressed, disable it
    button[i].setOnAction(ae -> button[i].setDisable(true));
}
However, I get an error saying "local variables referenced from a lambda expression must be final or effectively final". How might I still do something like the code above (if it is even possible)? If it can't be done, what should be done instead to get a similar result?

As the error message says, local variables referenced from a lambda expression must be final or effectively final ("effectively final" meaning the variable is never reassigned after initialization, so the compiler can treat it as final for you).
Simple workaround:
for (int i = 0; i < 10; i++) {
    final int ii = i;
    button[i].setOnAction(ae -> button[ii].setDisable(true));
}

Since you are using lambdas, you can also benefit from other Java 8 features, like streams.
For instance, IntStream:
A sequence of primitive int-valued elements supporting sequential and parallel aggregate operations. This is the int primitive specialization of Stream.
can be used to replace the for loop:
IntStream.range(0,10).forEach(i->{...});
so now you have an index you can use for your purpose:
IntStream.range(0, 10)
        .forEach(i -> button[i].setOnAction(ea -> button[i].setDisable(true)));
Also you can generate a stream from an array:
Stream.of(button).forEach(btn->{...});
In this case you won't have an index, so as @shmosel suggests, you can use the source of the event:
Stream.of(button)
        .forEach(btn -> btn.setOnAction(ea -> ((Button) ea.getSource()).setDisable(true)));
EDIT
As @James_D suggests, there's no need for a downcast here:
Stream.of(button)
        .forEach(btn -> btn.setOnAction(ea -> btn.setDisable(true)));
In both cases, you can also benefit from parallel operations:
IntStream.range(0, 10).parallel()
        .forEach(i -> button[i].setOnAction(ea -> button[i].setDisable(true)));
Stream.of(button).parallel()
        .forEach(btn -> btn.setOnAction(ea -> btn.setDisable(true)));

Use the Event to get the source Node.
for (int i = 0; i < button.length; i++) {
    button[i].setOnAction(event -> {
        ((Button) event.getSource()).setDisable(true);
    });
}

A lambda expression is effectively like an anonymous method. To avoid unsafe operations, Java does not allow a lambda expression to access any external local variable that can be modified.
In order to work around it,
final int index = i;
And use index instead of i inside your lambda expression.

You say "if the button is pressed", but in your example all the buttons in the list will be disabled. Try to associate a listener with each button rather than just disabling it.
For the logic, do you mean something like that :
Arrays.asList(buttons).forEach(
    button -> button.addActionListener(new ActionListener() {
        @Override
        public void actionPerformed(ActionEvent e) {
            button.setEnabled(false);
        }
    }));
I also like Sedrick's answer, but you have to add an action listener inside the loop.
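Note that the snippet above uses Swing's ActionListener and setEnabled(); for JavaFX buttons the equivalent would be something like this (a sketch, assuming buttons is an array of javafx.scene.control.Button):
Arrays.asList(buttons).forEach(
    button -> button.setOnAction(e -> button.setDisable(true)));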

Removing a std::function<()> from a vector c++

I'm building a publish-subscribe class (called SystemInterface), which is responsible for receiving updates from its instances and publishing them to subscribers.
Adding a subscriber callback function is trivial and has no issues, but removing one yields an error, because std::function is not equality-comparable in C++.
std::vector<std::function<void()>> subs;

void subscribe(std::function<void()> f)
{
    subs.push_back(f);
}

void unsubscribe(std::function<void()> f)
{
    std::remove(subs.begin(), subs.end(), f); // Error
}
I've narrowed it down to five solutions to this error:
1. Registering the function using a weak_ptr, where the subscriber must keep the returned shared_ptr alive. Solution example at this link.
2. Instead of registering in a vector, map the callback function by a custom key, unique per callback function. Solution example at this link.
3. Using a vector of function pointers. Example
4. Making the callback function comparable by utilizing its address.
5. Using an interface class (parent class) to call a virtual function. In my design, all intended classes inherit a parent class called ServiceCore, so instead of registering a callback function, just register a ServiceCore reference in the vector.
Note that the SystemInterface class has a per-instance ID field, which is managed by ServiceCore and supplied to SystemInterface by constructing a ServiceCore child instance.
From my perspective, the first solution is neat and would work, but it requires handling on the subscribers' side, which is something I'd rather avoid.
The second solution would make my implementation more complex; my implementation looks like this:
using namespace std;
enum INFO_SUB_IMPORTANCE : uint8_t
{
    INFO_SUB_PRIMARY,       // Only gets the important updates.
    INFO_SUB_COMPLEMENTARY, // Gets more.
    INFO_SUB_ALL            // Gets all updates.
};
using CBF = function<void(string,string)>;
using INFO_SUBTREE = map<INFO_SUB_IMPORTANCE, vector<CBF>>;
using REQINF_SUBS = map<string, INFO_SUBTREE>; // Actually keyed by an iterator; explaining that is out of the question's scope.
using INFSRC_SUBS = map<string, INFO_SUBTREE>;
using WILD_SUBS = INFO_SUBTREE;
REQINF_SUBS infoSubrs;
INFSRC_SUBS sourceSubrs;
WILD_SUBS wildSubrs;
void subscribeInfo(string info, INFO_SUB_IMPORTANCE imp, CBF f) {
    infoSubrs[info][imp].push_back(f);
}
void subscribeSource(string source, INFO_SUB_IMPORTANCE imp, CBF f) {
    sourceSubrs[source][imp].push_back(f);
}
void subscribeWild(INFO_SUB_IMPORTANCE imp, CBF f) {
    wildSubrs[imp].push_back(f);
}
The second solution would require INFO_SUBTREE to be an extended map, but can be keyed by an ID:
using KEY_T = uint32_t; // or string...
using INFO_SUBTREE = map<INFO_SUB_IMPORTANCE, map<KEY_T,CBF>>;
For the third solution, I'm not aware of the limitations of using function pointers, nor of the consequences of the fourth solution.
The fifth solution would eliminate the purpose of dealing with CBFs, but it would be more complex on the subscriber side: a subscriber is required to override the virtual function and so receives all updates in one place, which in turn requires filtering on the message ID and directing the payload to the intended routines using multiple if/else blocks, which grow as subscriptions increase.
What I'm looking for is an advice for the best available option.
Regarding your proposed solutions:
1. That would work. It can be made easy for the caller: have subscribe() create the shared_ptr and corresponding weak_ptr objects, and let it return the shared_ptr.
2. Then the caller must not lose the key. In a way this is similar to the above.
3. This of course is less generic, and then you can no longer have (the equivalent of) captures.
4. You can't: there is no way to get the address of the function stored inside a std::function. You can do &f inside subscribe() but that will only give you the address of the local variable f, which will go out of scope as soon as you return.
5. That works, and is in a way similar to 1 and 2, although now the "key" is provided by the caller.
Options 1, 2 and 5 are similar in that there is some other data stored in subs that refers to the actual std::function: either a std::shared_ptr, a key or a pointer to a base class. I'll present option 6 here, which is kind of similar in spirit but avoids storing any extra data:
Store a std::function<void()> directly, and return the index in the vector where it was stored. When removing an item, don't std::remove() it, but just set it to nullptr. The next time subscribe() is called, it checks whether there is an empty element in the vector and reuses it:
std::vector<std::function<void()>> subs;

std::size_t subscribe(std::function<void()> f) {
    if (auto it = std::find(subs.begin(), subs.end(), nullptr); it != subs.end()) {
        *it = f;
        return std::distance(subs.begin(), it);
    } else {
        subs.push_back(f);
        return subs.size() - 1;
    }
}

void unsubscribe(std::size_t index) {
    subs[index] = nullptr;
}
The code that actually calls the functions stored in subs must now of course first check against nullptr. The above works because nullptr is treated as the "empty" function, and there is an operator==() overload that can compare a std::function against nullptr, thus making std::find() work.
One drawback of option 6 as shown above is that a std::size_t is a rather generic type. To make it safer, you might wrap it in a class SubscriptionHandle or something like that.
As for the best solution: option 1 is quite heavy-weight. Options 2 and 5 are very reasonable, but 6 is, I think, the most efficient.

How to return the count while using nested foreach loops in the stream

I am using Java 8 streams to iterate over two lists; one list contains some custom objects and the other contains strings.
For each pair I have to call a method, passing the custom object and the string as input, and accumulate the returned count.
This is what I tried:
public int returnCode() {
    /*
    int count = 0;
    list.forEach(x -> {
        list2.forEach(p -> {
            count += myDao.begin(conn, x.getCode(), p);
        });
        return count;
    });
    */
}
The compiler gives an error that count should be final.
Can anyone show me a better way to do this?
What you're attempting to do is not possible, as local variables accessed from a lambda must be final or effectively final, i.e. any variable whose value does not change.
You're attempting to change the value of count in the lambda passed to forEach, hence the compilation error.
To replicate your exact code using the stream API, it would be:
int count = list.stream()
        .limit(1)
        .flatMapToInt(x -> list2.stream().mapToInt(p -> myDao.begin(conn, x.getCode(), p)))
        .sum();
However, if you want to iterate over the entire sequence in list and not just the first element, then you can proceed with the following:
int count = list.stream()
        .flatMapToInt(x -> list2.stream().mapToInt(p -> myDao.begin(conn, x.getCode(), p)))
        .sum();
Lambdas mainly substitute for anonymous inner classes. Inside an anonymous inner class you can access only final local variables, and the same holds for lambda expressions: the local variable is copied when the JVM creates the lambda instance, so it would be counterintuitive to allow updates to it. Declaring the variable as final would solve that compile error, but then you could no longer do this at all:
count += myDao.begin(conn, x.getCode(), p);
So your solution does not fit the lambda model. This is one way of doing it:
final int count = customObjects.stream()
        .mapToInt(co -> strings.stream().mapToInt(s -> myDao.begin(conn, co.getCode(), s)).sum())
        .sum();
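If you would rather keep the nested forEach shape from the question, a java.util.concurrent.atomic.AtomicInteger accumulator also compiles, because the variable holding it is effectively final even though its value changes (a sketch, reusing the list, list2, myDao and conn names from the question):
public int returnCode() {
    AtomicInteger count = new AtomicInteger();
    list.forEach(x ->
            list2.forEach(p -> count.addAndGet(myDao.begin(conn, x.getCode(), p))));
    return count.get();
}
For sequential code the stream versions above are still the cleaner choice; this variant just shows that the rule restricts the variable, not the object it refers to.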

Why does a filter with side effects perform better than a Spliterator-based implementation?

Regarding the question "How to skip even lines of a Stream obtained from Files.lines", I followed the accepted answer's approach and implemented my own filterEven() method based on the Spliterator<T> interface, e.g.:
public static <T> Stream<T> filterEven(Stream<T> src) {
    Spliterator<T> iter = src.spliterator();
    AbstractSpliterator<T> res = new AbstractSpliterator<T>(Long.MAX_VALUE, Spliterator.ORDERED)
    {
        @Override
        public boolean tryAdvance(Consumer<? super T> action) {
            iter.tryAdvance(item -> {}); // discard
            return iter.tryAdvance(action); // use
        }
    };
    return StreamSupport.stream(res, false);
}
which I can use in the following way:
Stream<DomainObject> res = filterEven(Files.lines(src))
        .map(line -> toDomainObject(line));
However, measuring the performance of this approach against the next one, which uses a filter() with side effects, I noticed that the next one performs better:
final int[] counter = {0};
final Predicate<String> isEvenLine = item -> ++counter[0] % 2 == 0;
Stream<DomainObject> res = Files.lines(src)
        .filter(isEvenLine)
        .map(line -> toDomainObject(line));
I tested the performance with JMH, excluding the file load from the benchmark: I load the file into an array beforehand. Each benchmark then starts by creating a Stream<String> from that array, then filters even lines, then applies mapToInt() to extract the value of an int field, and finally a max() operation. Here is one of the benchmarks (you can check the whole program here, and here you have the data file, with about 186 lines):
@Benchmark
public int maxTempFilterEven(DataSource src) {
    Stream<String> content = Arrays.stream(src.data)
            .filter(s -> s.charAt(0) != '#') // Filter comments
            .skip(1);                        // Skip line: Not available
    return filterEven(content) // Filter daily info and skip hourly
            .mapToInt(line -> parseInt(line.substring(14, 16)))
            .max()
            .getAsInt();
}
Why does the filter() approach have better performance (~80 ops/ms) than the filterEven() approach (~50 ops/ms)?
Intro
I think I know the reason, but unfortunately I have no idea how to improve the performance of the Spliterator-based solution (at least without rewriting the whole Streams API feature).
Sidenote 1: performance was not the most important design goal when the Stream API was designed. If performance is critical, most probably rewriting the code without the Stream API will make the code faster. (For example, the Stream API unavoidably increases memory allocation and thus GC pressure.) On the other hand, in most scenarios the Stream API provides a nicer higher-level API at the cost of a relatively small performance degradation.
Part 1 or Short theoretical answer
Stream is designed with internal iteration as the main means of consumption; external iteration (i.e. Spliterator-based) is an additional means that is kind of "emulated". Thus external iteration involves some overhead. Laziness adds some limits to the efficiency of external iteration, and the need to support flatMap makes it necessary to use some kind of dynamic buffer in this process.
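To make that "emulated" external iteration concrete: asking a non-empty pipeline for its Spliterator returns a wrapper, and each tryAdvance() call pushes source elements through the pipeline until one survives. A toy sketch (not from the original post):
Spliterator<String> sp = Arrays.stream(new String[]{"#comment", "a", "b"})
        .filter(s -> s.charAt(0) != '#')
        .spliterator();
sp.tryAdvance(System.out::println); // internally pushes "#comment" (rejected), then "a", which is printed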
Sidenote 2: in some cases Spliterator-based iteration might be as fast as the internal iteration (i.e. filter in this case). In particular, this is so when you create a Spliterator directly from the data-containing Stream. To see it, you can modify your tests to materialize your first filter into a String array:
String[] filteredData = Arrays.stream(src.data)
        .filter(s -> s.charAt(0) != '#') // Filter comments
        .skip(1)
        .toArray(String[]::new);
and then compare the performance of maxTempFilter and maxTempFilterEven modified to accept that pre-filtered String[] filteredData. If you want to know why this is so, you probably should read the rest of this long answer, or at least Part 2.
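For illustration, the modified Spliterator-based benchmark might look like this (a sketch; the filteredData field and the benchmark name are hypothetical, reusing filterEven() from the question):
@Benchmark
public int maxTempFilterEvenPrefiltered(DataSource src) {
    // src.filteredData is assumed to hold the pre-filtered array built above
    return filterEven(Arrays.stream(src.filteredData))
            .mapToInt(line -> parseInt(line.substring(14, 16)))
            .max()
            .getAsInt();
}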
Part 2 or Longer theoretical answer:
Streams were designed to be mainly consumed as a whole by some terminal operation. Iterating over elements one by one, although supported, is not designed as the main way to consume streams.
Note that using the "functional" Stream API (map, flatMap, filter, reduce, and collect) you can't say at some step "I have had enough data, stop iterating over the source and pushing values". You can discard some incoming data (as filter does) but can't stop the iteration. (take and skip transformations are actually implemented using Spliterator inside; and anyMatch, allMatch, noneMatch, findFirst, findAny, etc. use the non-public API j.u.s.Sink.cancellationRequested; they are also easier because there can't be several terminal operations.) If all transformations in the pipeline are synchronous, you can combine them into a single aggregated function (Consumer) and call it in a simple loop (optionally splitting the loop execution over several threads). This is what my simplified version of the state-based filter represents (see the code in the Show me some code section). It gets a bit more complicated if there is a flatMap in the pipeline, but the idea is still the same.
Spliterator-based transformation is fundamentally different, because it adds an asynchronous, consumer-driven step to the pipeline. Now the Spliterator rather than the source Stream drives the iteration process. If you ask for a Spliterator directly on the source Stream, it might be able to return an implementation that just iterates over its internal data structure, and this is why materializing pre-filtered data should remove the performance difference. However, if you create a Spliterator for some non-empty pipeline, there is no (simple) choice other than asking the source to push elements one by one through the pipeline until some element passes all the filters (see also the second example in the Show me some code section). The fact that source elements are pushed one by one, rather than in batches, is a consequence of the fundamental decision to make Streams lazy. The need for a buffer instead of just one element is a consequence of the support for flatMap: pushing one element from the source can produce many elements for the Spliterator.
Part 3 or Show me some code
This part tries to back up what was described in the "theoretical" parts with code (both links to the real code and simulated code).
First of all, you should know that the current Streams API implementation accumulates non-terminal (intermediate) operations into a single lazy pipeline (see j.u.s.AbstractPipeline and its children, such as j.u.s.ReferencePipeline). Then, when the terminal operation is applied, all the elements from the original Stream are "pushed" through the pipeline.
What you see is the result of two things:
1. stream pipelines are different for cases when you have a Spliterator-based step inside;
2. your OddLines is not the first step in the pipeline.
The code with a stateful filter is more or less similar to the following straightforward code:
static int similarToFilter(String[] data)
{
    final int[] counter = {0};
    final Predicate<String> isEvenLine = item -> ++counter[0] % 2 == 0;
    int skip = 1;
    boolean reduceEmpty = true;
    int reduceState = 0;
    for (String outerEl : data)
    {
        if (outerEl.charAt(0) != '#')
        {
            if (skip > 0)
                skip--;
            else
            {
                if (isEvenLine.test(outerEl))
                {
                    int intEl = parseInt(outerEl.substring(14, 16));
                    if (reduceEmpty)
                    {
                        reduceState = intEl;
                        reduceEmpty = false;
                    }
                    else
                    {
                        reduceState = Math.max(reduceState, intEl);
                    }
                }
            }
        }
    }
    return reduceState;
}
Note that this is effectively a single loop with some calculations (filtering/transformations) inside.
When you add a Spliterator into the pipeline, on the other hand, things change significantly, and even with simplifications, code that is reasonably similar to what actually happens becomes much larger, such as:
interface Sp<T>
{
    public boolean tryAdvance(Consumer<? super T> action);
}

static class ArraySp<T> implements Sp<T>
{
    private final T[] array;
    private int pos;

    public ArraySp(T[] array)
    {
        this.array = array;
    }

    @Override
    public boolean tryAdvance(Consumer<? super T> action)
    {
        if (pos < array.length)
        {
            action.accept(array[pos]);
            pos++;
            return true;
        }
        else
        {
            return false;
        }
    }
}

static class WrappingSp<T> implements Sp<T>, Consumer<T>
{
    private final Sp<T> sourceSp;
    private final Predicate<T> filter;
    private final ArrayList<T> buffer = new ArrayList<T>();
    private int pos;

    public WrappingSp(Sp<T> sourceSp, Predicate<T> filter)
    {
        this.sourceSp = sourceSp;
        this.filter = filter;
    }

    @Override
    public void accept(T t)
    {
        buffer.add(t);
    }

    @Override
    public boolean tryAdvance(Consumer<? super T> action)
    {
        while (true)
        {
            if (pos >= buffer.size())
            {
                pos = 0;
                buffer.clear();
                sourceSp.tryAdvance(this); // refill the buffer via accept()
            }
            // failed to fill buffer
            if (buffer.size() == 0)
                return false;
            T nextElem = buffer.get(pos);
            pos++;
            if (filter.test(nextElem))
            {
                action.accept(nextElem);
                return true;
            }
        }
    }
}
static class OddLineSp<T> implements Sp<T>, Consumer<T>
{
    private Sp<T> sourceSp;

    public OddLineSp(Sp<T> sourceSp)
    {
        this.sourceSp = sourceSp;
    }

    @Override
    public boolean tryAdvance(Consumer<? super T> action)
    {
        if (sourceSp == null)
            return false;
        sourceSp.tryAdvance(this); // discard one element (accept() below ignores it)
        if (!sourceSp.tryAdvance(action)) // pass the next element through, if any
        {
            sourceSp = null; // source exhausted; report false on the next call
        }
        return true;
    }

    @Override
    public void accept(T t)
    {
    }
}
static class ReduceIntMax
{
    boolean reduceEmpty = true;
    int reduceState = 0;

    public int getReduceState()
    {
        return reduceState;
    }

    public void accept(int t)
    {
        if (reduceEmpty)
        {
            reduceEmpty = false;
            reduceState = t;
        }
        else
        {
            reduceState = Math.max(reduceState, t);
        }
    }
}
static int similarToSpliterator(String[] data)
{
    ArraySp<String> src = new ArraySp<>(data);
    int[] skip = new int[1];
    skip[0] = 1;
    WrappingSp<String> firstFilter = new WrappingSp<String>(src, (s) ->
    {
        if (s.charAt(0) == '#')
            return false;
        if (skip[0] != 0)
        {
            skip[0]--;
            return false;
        }
        return true;
    });
    OddLineSp<String> oddLines = new OddLineSp<>(firstFilter);
    final ReduceIntMax reduceIntMax = new ReduceIntMax();
    while (oddLines.tryAdvance(s ->
    {
        int intValue = parseInt(s.substring(14, 16));
        reduceIntMax.accept(intValue);
    })) ; // do nothing in the loop body
    return reduceIntMax.getReduceState();
}
This code is larger because the logic is impossible (or at least very hard) to represent without some non-trivial stateful callbacks inside the loop. Here the interface Sp is a mix of the j.u.s.Stream and j.u.Spliterator interfaces.
Class ArraySp represents the result of Arrays.stream.
Class WrappingSp is similar to j.u.s.StreamSpliterators.WrappingSpliterator, which in the real code represents an implementation of the Spliterator interface for any non-empty pipeline, i.e. a Stream with at least one intermediate operation applied to it (see the j.u.s.AbstractPipeline.spliterator method). In my code I merged it with a StatelessOp subclass and put there the logic responsible for the filter method implementation. Also, for simplicity, I implemented skip using filter.
OddLineSp corresponds to your OddLines and its resulting Stream.
ReduceIntMax represents the ReduceOps terminal operation for Math.max over ints.
So what's important in this example? The important thing here is that since you first filter your original stream, your OddLineSp is created from a non-empty pipeline, i.e. from a WrappingSp. And if you take a closer look at WrappingSp, you'll notice that every time tryAdvance is called, it delegates the call to the sourceSp and accumulates the result(s) into a buffer. Moreover, since you have no flatMap in the pipeline, elements are copied into the buffer one by one. I.e. every time WrappingSp.tryAdvance is called, it will call ArraySp.tryAdvance, get back exactly one element (via the callback), and pass it further to the consumer provided by the caller (unless the element doesn't match the filter, in which case ArraySp.tryAdvance will be called again and again; still, the buffer is never filled with more than one element at a time).
Sidenote 3: if you want to look at the real code, the most interesting places are j.u.s.StreamSpliterators.WrappingSpliterator.tryAdvance, which calls j.u.s.StreamSpliterators.AbstractWrappingSpliterator.doAdvance, which in turn calls j.u.s.StreamSpliterators.AbstractWrappingSpliterator.fillBuffer, which in turn calls the pusher that is initialized in j.u.s.StreamSpliterators.WrappingSpliterator.initPartialTraversalState.
So the main thing that's hurting performance is this copying into the buffer.
Unfortunately for us usual Java developers, the current implementation of the Stream API is pretty much closed, and you can't modify only some aspects of the internal behavior using inheritance or composition.
You might use some reflection-based hacking to make the copying-to-buffer more efficient for your specific case and gain some performance (sacrificing the laziness of the Stream), but you can't avoid the copying altogether, so Spliterator-based code will be slower anyway.
Going back to the example from Sidenote 2: the Spliterator-based test with materialized filteredData works faster because there is no WrappingSp in the pipeline before OddLineSp, and thus there is no copying into an intermediate buffer.

C#/MVC can I manually append multiple Enum Flags in a foreach loop?

I've seen ways of using HTML helpers and such to deal with enums in MVC. I've taken a different approach, in that I pass a string[] of the checked boxes back to the controller. I am doing this:
foreach (string svf in property.SiteVisibilityFlags)
{
    Enums.SiteVisibilityFlags flagTester;
    if (Enum.TryParse<Enums.SiteVisibilityFlags>(svf, out flagTester))
    {
        // add to domainProperty
        domainProperty.SiteVisibilityFlags = flagTester; // <-- here is where I mean
    }
}
Now, I know that normally, with a flagged enum you do something like:
domainProperty.SiteVisibilityFlags = Enums.SiteVisibilityFlags.Corporate | Enums.SiteVisibilityFlags.Properties;
So, how (if at all) can I accomplish the '|' in this approach?
You could use the [Flags] attribute (FlagsAttribute), explained here.
From there you can simply use the bitwise OR assignment (|=) operator as follows:
domainProperty.SiteVisibilityFlags |= flagTester;
Also, there is a really good explanation with examples on Stack Overflow about attributes.
Figured it out. Any enum that has [Flags] as an attribute can be handled by summing up the values of all checked items, like this (note that summing only matches OR-ing when each flag appears at most once; |= is the safer choice):
// Site Visibility Flags
int SiteVisibilityTotalValue = 0;
foreach (string svf in property.SiteVisibilityFlags)
{
    Enums.SiteVisibilityFlags flagTester;
    if (Enum.TryParse<Enums.SiteVisibilityFlags>(svf, out flagTester))
    {
        // sum up values to get a total, then convert it to the enum below
        SiteVisibilityTotalValue += (int)flagTester;
    }
}
// convert total to Enum
domainProperty.SiteVisibilityFlags = (Enums.SiteVisibilityFlags)SiteVisibilityTotalValue;

Thread safety question about one container

Let's talk about theory a bit. We have one container, let's call it TMyObj, that looks like this:
struct TMyObj{
    bool bUpdated;
    bool bUnderUpdate;
};
Let a class named TMyClass have an array of the container above plus three helper functions: one for getting an object to be updated, one for adding update info to a certain object, and one for getting an updated object. They are also called in this order. Here's the class:
class TMyClass{
    TMyObj entries[];

    TMyObj GetObjToUpdate()
    {
        //Enter critical section
        for(int i=0; i<Length(entries); i++)
            if(!entries[i].bUnderUpdate)
            {
                entries[i].bUnderUpdate=true;
                return entries[i];
            }
        //Leave critical section
    }

    //the parameter here is always contained in the Entries array above
    void AddUpdateInfo(TMyObj obj)
    {
        //Do something...
        //Enter critical section
        if(updateInfoOver) obj.bUpdated=true; //leave bUnderUpdate as true so it doesn't bother us
        //Leave critical section
    }

    TMyObj GetUpdatedObj()
    {
        //<-------- here
        for(int i=0; i<Length(entries); i++)
            if(entries[i].bUpdated) return entries[i];
        //<-------- and here?
    }
}
Now imagine 5+ threads using the first two functions and another one using the last function (GetUpdatedObj) on one instance of the class above.
Question: will it be thread-safe if there's no critical section in the last function?
Given your sample code, it appears that it would be thread-safe for a read, assuming entries[] is a fixed size. If you are simply iterating over a fixed collection, there is no reason the size of the collection should be modified, thereby making a thread-safe read OK.
The only thing I could see is that the result might be out of date. The problem comes from a call to GetUpdatedObj: Thread A might not see an update to entries[0] during the life-cycle of
for(int i=0; i<Length(entries); i++)
    if(entries[i].bUpdated) return entries[i];
if Thread B comes along and updates entries[0] while i > 0. It all depends on whether that is considered acceptable or not.
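For completeness: if a stale read is not acceptable, the scan has to take the same lock as the writers, at the cost of blocking updaters during the scan. A minimal Java-flavored sketch of the guarded variant (the lock field, the method casing, and the null return are hypothetical, mirroring the pseudocode above):
TMyObj getUpdatedObj() {
    synchronized (lock) { // same lock used by GetObjToUpdate and AddUpdateInfo
        for (TMyObj e : entries) {
            if (e.bUpdated) return e;
        }
        return null; // nothing updated yet
    }
}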
