I have some code like this:
void ContactModel::archive(ArchiveStream &stream) {
stream & kn;
stream & ks;
stream & fric;
    // ...
}
As I understand it, stream is an object of the ArchiveStream class. But if that is the case, how are they using stream with a bunch of parameters like kn, ks, and fric?
I think I've misunderstood something, so I hope you can explain this code. I would really appreciate it.
Background
I'm trying to observe one Int stream (the actual type doesn't matter, but it keeps the example simple) and do something with it, while also combining that stream with multiple other streams, say a String stream and a Double stream, like the following:
// RxSwift
let intStream = BehaviorSubject<Int>(value: 0) // subscribe to this later on
let sharedStream = intStream.share()
let mappedStream = sharedStream.map { ... }.share()
let combinedStream1 = Observable.combineLatest(sharedStream, stringStream).map { ... }
let combinedStream2 = Observable.combineLatest(sharedStream, doubleStream).map { ... }
The code above is just a demonstration of what I'm trying to do. It is part of the view model (the VM of MVVM), and only the first map (for mappedStream) runs; the map closures for combinedStream1 and combinedStream2 are never called.
Question
What is wrong with the above approach, and how do I achieve what I'm trying to do?
Also, is there a better way to achieve the same effect?
Updates
I confirmed that setting the replay count to 1 (share(replay: 1)) makes things work. But why?
All of the code above runs during the view model's initialization; the subscriptions happen afterwards.
Okay, I have an answer, but it's a bit complex... One problem is that you are using a Subject in the view model, but I'll ignore that for now. The real problem comes from the fact that you are using hot observables inappropriately (share() makes a stream hot), and so events are getting dropped.
It might help if you put a bunch of .debug()s on this code so you can follow along. But here's the essence...
When you subscribe to mappedStream, it subscribes to the share which in turn subscribes to the sharedStream, which subscribes to the intStream. The intStream then emits the 0, and that 0 goes down the chain and shows up in the observer.
Then you subscribe to combinedStream1, which subscribes to sharedStream. Since that share() has already been subscribed to, the subscription chain stops there, and since the share has already emitted its next event, combinedStream1 never receives the .next(0) event.
The same goes for combinedStream2.
Get rid of all the share()s and everything will work:
let intStream = BehaviorSubject<Int>(value: 0) // subscribe to this later on
let mappedStream = intStream.map { $0 }
let combinedStream1 = Observable.combineLatest(intStream, stringStream).map { $0 }
let combinedStream2 = Observable.combineLatest(intStream, doubleStream).map { $0 }
This way, each subscriber of intStream gets the 0 value.
The only time you want to share is if you need to share side effects. There aren’t any side effects in this code, so there’s no need to share.
Reading an article about Java 8 streams, I found:
Java Streams are consumable, so there is no way to create a reference
to stream for future usage. Since the data is on-demand, it’s not
possible to reuse the same stream multiple times.
yet at the same time, the same article shows:
//sequential stream
Stream<Integer> sequentialStream = myList.stream();
//parallel stream
Stream<Integer> parallelStream = myList.parallelStream();
What does "there is no way to create a reference to a stream for future usage" mean? Aren't sequentialStream and parallelStream references to streams?
Also, what does "it's not possible to reuse the same stream multiple times" mean?
What it means is that every time you need to operate on a stream, you must make a new one.
So you cannot, for example, have something like:
class Person {
    private Stream<String> phoneNumbers;

    Stream<String> getPhoneNumbers() {
        return phoneNumbers;
    }
}
and just reuse that one stream whenever you like. Instead, you must have something like
class Person {
    private List<String> phoneNumbers;

    Stream<String> getPhoneNumbers() {
        return phoneNumbers.stream(); // make a NEW stream over the same data
    }
}
The code snippet you included does just that: it makes two different streams over the same data.
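To make the difference concrete, here is a small runnable sketch (the class name and phone numbers are made up for illustration): a stored stream fails on its second terminal operation, while creating a fresh stream per call works every time.

```java
import java.util.List;
import java.util.stream.Stream;

public class StreamReuse {
    public static void main(String[] args) {
        List<String> phoneNumbers = List.of("555-0100", "555-0101");

        // A stored stream can be consumed only once:
        Stream<String> stored = phoneNumbers.stream();
        System.out.println(stored.count()); // first terminal operation: fine, prints 2
        try {
            stored.count(); // second terminal operation: fails
        } catch (IllegalStateException e) {
            System.out.println("reuse failed");
        }

        // A fresh stream over the same data works every time:
        System.out.println(phoneNumbers.stream().count()); // 2
        System.out.println(phoneNumbers.stream().count()); // 2
    }
}
```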
I have 3 interfaces
public interface IGhOrg {
    int getId();
    String getLogin();
    String getName();
    String getLocation();
    Stream<IGhRepo> getRepos();
}

public interface IGhRepo {
    int getId();
    int getSize();
    int getWatchersCount();
    String getLanguage();
    Stream<IGhUser> getContributors();
}

public interface IGhUser {
    int getId();
    String getLogin();
    String getName();
    String getCompany();
    Stream<IGhOrg> getOrgs();
}
and I need to implement Optional<IGhRepo> highestContributors(Stream<IGhOrg> organizations).
This method should return the IGhRepo with the most contributors (as given by getContributors()).
I tried this
Optional<IGhRepo> highestContributors(Stream<IGhOrg> organizations) {
    return organizations
        .flatMap(IGhOrg::getRepos)
        .max((repo1, repo2) -> (int) repo1.getContributors().count() - (int) repo2.getContributors().count());
}
but it gives me the
java.lang.IllegalStateException: stream has already been operated upon or closed
I understand that count() is a terminal operation on a Stream, but I can't solve this problem. Please help!
Thanks
Is it possible to know the size of a stream without using a terminal operation?
No, it's not, because streams can be infinite or can generate elements on demand; they aren't necessarily backed by collections.
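As a minimal illustration of why: a stream built with Stream.iterate is infinite, so asking it for its size directly could never terminate; only after a short-circuiting limit() does a terminal operation like count() become safe.

```java
import java.util.stream.Stream;

public class InfiniteStream {
    public static void main(String[] args) {
        // An infinite stream of 0, 1, 2, ... -- calling count() on it directly would never return.
        Stream<Integer> naturals = Stream.iterate(0, n -> n + 1);

        // A short-circuiting limit() makes a terminal operation safe:
        long n = naturals.limit(5).count();
        System.out.println(n); // prints 5
    }
}
```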
but it gives me the
java.lang.IllegalStateException: stream has already been operated upon or closed
That's because you are returning the same stream instance on each method invocation. You should return a new Stream instead.
I understand that count() is a terminal operation in Stream but I can't solve this problem, please help!
IMHO you are misusing streams here. For both performance and simplicity, it's much better to return some Collection<XXX> instead of Stream<XXX>.
No.
It is not possible to know the size of a stream in Java.
As mentioned in the Java 8 stream docs:
No storage. A stream is not a data structure that stores elements;
instead, it conveys elements from a source such as a data structure,
an array, a generator function, or an I/O channel, through a pipeline
of computational operations.
You don't specify this, but it looks like some or possibly all of the interface methods that return Stream<...> values don't return a fresh stream each time they are called.
This seems problematic to me from an API point of view, as it means each of these streams, and a fair chunk of the object's functionality can be used at most once.
You may be able to solve the particular problem you are having by ensuring that the stream from each object is used only once in the method, something like this:
Optional<IGhRepo> highestContributors(Stream<IGhOrg> organizations) {
    return organizations
        .flatMap(IGhOrg::getRepos)
        .distinct()
        .map(repo -> new AbstractMap.SimpleEntry<>(repo, repo.getContributors().count()))
        .max(Map.Entry.comparingByValue())
        .map(Map.Entry::getKey);
}
Unfortunately it looks like you will now be stuck if you want to (for example) print a list of the contributors, as the stream returned from getContributors() for the returned IGhRepo has already been consumed.
You might want to consider having your implementation objects return a fresh stream each time a stream returning method is called.
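For what it's worth, here is a self-contained sketch of the same count-once approach. The Repo record below is a simplified, hypothetical stand-in for IGhRepo (it returns a fresh stream on every getContributors() call, as the answer recommends), so the pipeline can run as-is:

```java
import java.util.AbstractMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Stream;

public class HighestContributors {
    // Simplified stand-in for the IGhRepo interface from the question:
    record Repo(int id, List<String> contributors) {
        Stream<String> getContributors() {
            return contributors.stream(); // fresh stream each call
        }
    }

    static Optional<Repo> highestContributors(Stream<Repo> repos) {
        return repos
            // count each repo's contributors exactly once, pairing repo with its count
            .map(repo -> new AbstractMap.SimpleEntry<>(repo, repo.getContributors().count()))
            .max(Map.Entry.comparingByValue())
            .map(Map.Entry::getKey);
    }

    public static void main(String[] args) {
        Repo small = new Repo(1, List.of("ann"));
        Repo big = new Repo(2, List.of("bob", "cai", "dee"));
        System.out.println(highestContributors(Stream.of(small, big)).get().id()); // prints 2
    }
}
```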
You could keep a counter that is incremented per "iteration" using peek. In the example below, the counter is incremented before each item is processed by doSomeLogic:
final var counter = new AtomicInteger();
getStream().peek(item -> counter.incrementAndGet()).forEach(this::doSomeLogic);
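A self-contained, runnable version of that idea (getStream() and doSomeLogic weren't shown, so they are replaced with placeholders here):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class PeekCounter {
    public static void main(String[] args) {
        final AtomicInteger counter = new AtomicInteger();

        Stream.of("a", "b", "c")                          // placeholder for getStream()
              .peek(item -> counter.incrementAndGet())    // count each element as it flows past
              .forEach(item -> { /* doSomeLogic(item) */ });

        System.out.println(counter.get()); // prints 3
    }
}
```

Note that peek only observes elements that the terminal operation actually pulls through the pipeline, so the counter reflects processed elements, not the source's size.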
I have researched and read the documents, but they are not very understandable.
What I am trying to achieve is the following functionality:
I am using the Spring Reactor project and its EventBus. My event bus is dispatching events to module A.
Module A should receive each event and insert it into a hot stream that holds unique values. Every 250 milliseconds the stream should drain all values and run a calculation on them, and so on.
For example:
The event bus emits events with the numbers: 1, 2, 3, 2, 3, 2
The stream should collect and hold the unique values -> 1, 2, 3
After 250 milliseconds the stream should print the numbers and clear its values
Does anyone have an idea how to start? I tried the examples, but nothing really works, and I guess I don't understand something. Does anyone have an example?
Thanks
EDIT:
When I try the following, I always get an exception:
Stream<List<Integer>> s = Streams.wrap(p).buffer(1, TimeUnit.SECONDS);
s.consume(i -> System.out.println(Thread.currentThread() + " data=" + i));
for (int i = 0; i < 10000; i++) {
    p.onNext(i);
}
The exception:
java.lang.IllegalStateException: The environment has not been initialized yet
at reactor.Environment.get(Environment.java:156) ~[reactor-core-2.0.7.RELEASE.jar:?]
at reactor.Environment.timer(Environment.java:184) ~[reactor-core-2.0.7.RELEASE.jar:?]
at reactor.rx.Stream.getTimer(Stream.java:3052) ~[reactor-stream-2.0.7.RELEASE.jar:?]
at reactor.rx.Stream.buffer(Stream.java:2246) ~[reactor-stream-2.0.7.RELEASE.jar:?]
at com.ta.ng.server.controllers.user.UserController.getUsersByOrgId(UserController.java:70) ~[classes/:?]
As you can see, I can't proceed without getting past this issue.
By the way, this only happens when I use buffer(1, TimeUnit.SECONDS). If I use buffer(50), for example, it works. Although this is not the final solution, it's a start.
Well, after reading the docs again, I saw that I had missed this:
static {
    Environment.initialize();
}
This solved the problem. Thanks!
I am using Java 8 streams to create a stream from a CSV file. I am using BufferedReader.lines(), and the docs for BufferedReader.lines() say:
After execution of the terminal stream operation there are no guarantees that the reader will be at a specific position from which to read the next character or line.
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.Reader;

public class Streamy {
    public static void main(String args[]) {
        Reader reader = null;
        BufferedReader breader = null;
        try {
            reader = new FileReader("refined.csv");
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        breader = new BufferedReader(reader);
        long l1 = breader.lines().count();
        System.out.println("Line Count " + l1); // this works correctly
        long l2 = breader.lines().count();
        System.out.println("Line Count " + l2); // this gives 0
    }
}
It looks like after reading the file the first time, the reader does not go back to the beginning of the file. What is the way around this problem?
It looks like after reading the file the first time, the reader does not go back to the beginning of the file.
No - and I don't know why you would expect it to given the documentation you quoted. Basically, the lines() method doesn't "rewind" the reader before starting, and may not even be able to. (Imagine the BufferedReader wraps an InputStreamReader which wraps a network connection's InputStream - once you've read the data, it's gone.)
What is the way around this problem?
Two options:
Reopen the file and read it from scratch
Save the result of lines() to a List<String>, so that you're then not reading from the file at all the second time. For example:
List<String> lines = breader.lines().collect(Collectors.toList());
As an aside, I'd strongly recommend using Files.newBufferedReader instead of FileReader - the latter always uses the platform default encoding, which isn't generally a good idea.
And for that matter, to load all the lines into a list, you can just use Files.readAllLines... or Files.lines if you want the lines as a stream rather than a list. (Note the caveats in the comments, however.)
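Putting those suggestions together, here is a runnable sketch (a temp file stands in for refined.csv, which isn't available here): collect the lines once and reuse the list, or use Files.readAllLines directly.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

public class LinesTwice {
    public static void main(String[] args) throws IOException {
        // Stand-in for refined.csv:
        Path csv = Files.createTempFile("refined", ".csv");
        Files.write(csv, List.of("a,1", "b,2", "c,3"));

        try (BufferedReader breader = Files.newBufferedReader(csv, StandardCharsets.UTF_8)) {
            // Collect once; the list can then be reused any number of times:
            List<String> lines = breader.lines().collect(Collectors.toList());
            System.out.println("Line Count " + lines.size()); // 3
            System.out.println("Line Count " + lines.size()); // still 3
        }

        // Or skip the reader entirely:
        System.out.println(Files.readAllLines(csv).size()); // 3
        Files.delete(csv);
    }
}
```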
Probably the cited fragment from the JavaDoc needs to be clarified. Usually you would expect that after reading the whole file, the reader will point to the end of the file. But with streams it depends on whether a short-circuiting terminal operation is used and whether the stream is parallel. For example, if you use
String magicLine = breader.lines()
.filter(str -> str.startsWith("magic"))
.findAny()
.orElse(null);
your reader will likely stop after the first matching line (because there is no need to read further), or read the whole input file if no such line is found. If you perform the same operation on a parallel stream, the resulting position will be unpredictable, because the input will be split into implementation-dependent chunks in which the search is performed. That's why the documentation is written this way.
As for workarounds, please read @JonSkeet's answer. And consider closing your streams via the try-with-resources construct.
If there are no guarantees that the reader will be at a specific line, why wouldn't you create two readers?
reader1 = new FileReader("refined.csv");
reader2 = new FileReader("refined.csv");