How can I pull a value out of a stream?

Functional reactive programming implementations seem to treat the observer as passively dependent upon the streams that provide it with values.
Is it possible to request a new value from downstream?
For example, if I have a stream that serves the coordinates of the apple in the game Snake, how can I ask for a new value once the apple has been eaten?
The source stream doesn't know that a new apple is required.
One interesting implementation uses recursion:
function apple() {
  var applePos = randomPos()
  return position                      // stream of snake-head positions
    .filter(p => p.equals(applePos))   // wait until the head reaches the apple
    .take(1)
    .flatMapLatest(apple)              // then recurse to produce the next apple
    .toProperty(applePos)              // expose the current apple position
}
Is there a more straightforward way of doing this?
Perhaps I could have a stream that knows when a value has been taken, immediately generates a new one, and holds it in a buffer.

"The source stream doesn't know that a new apple is required."
I think that's the problem. In proper FRP, the apple function should take a stream of apple-eaten events as a parameter, so that it knows when to produce a new apple.
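A sketch of that shape in the same Bacon.js style as the code above (eaten is an assumed event stream that the game fires whenever the apple is consumed):
function apple(eaten) {
  return eaten
    .map(() => randomPos())   // one fresh apple position per eaten event
    .toProperty(randomPos())  // seeded with an initial apple
}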

Forcing map() over Java 8 Stream

I'm confused about this situation:
I have a Producer which produces an undetermined number of items from an underlying iterator, possibly a large number of them.
Each item must be mapped to a different interface (eg, wrapper, JavaBean from JSON structure).
So I'm thinking that it would be good for Producer to return a stream: it's easy to convert an Iterator to a Stream (using Spliterators and StreamSupport.stream()), then apply Stream.map() and return the final stream.
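A sketch of that conversion (Item, ItemView, and the wrap step are hypothetical stand-ins for the real types):
import java.util.Iterator;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

Stream<ItemView> produce(Iterator<Item> source) {
    // wrap the iterator in a lazy, sequential stream, then map each item
    return StreamSupport.stream(
            Spliterators.spliteratorUnknownSize(source, Spliterator.ORDERED),
            false)                 // false = not parallel
        .map(ItemView::wrap);      // hypothetical JSON-to-bean mapping
}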
The problem is that I have an invoker that does nothing with the resulting stream (e.g. a unit test), yet I still want the mapping code to be invoked for every item. At the moment I'm simply calling Stream.count() from the invoker to force that.
Questions are:
Am I doing it wrong? Should I use different interfaces? Note that I find implementing next()/hasNext() for an Iterator cumbersome, mainly because it forces you to create a new class (even if it can be anonymous) and to keep and check a cursor. Likewise for collection views: returning a collection that is materialized rather than a dynamic view over the underlying iterator is out of the question (the input data set might be very large). The only alternative I like so far is a Java implementation of yield(). Nor do I want the stream to be consumed inside Producer (i.e. forEach()), since some other invoker might want it to perform some real operation.
Is there a better practice for forcing the stream to be processed?
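For what it's worth, a no-op terminal operation forces the pipeline more directly than count(); in later Java versions, count() may even skip the mapping entirely when the element count is computable from the source. Reusing the produce() sketch above:
produce(source).forEach(item -> { });  // traverses and maps every item, discarding the results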

parallel stream syntax prior to java 8 release

Prior to the official Java 8 release, when it was still in development, am I correct in thinking that the syntax for getting streams and parallel streams was slightly different? Now we have the option of saying either:
stream().parallel() or parallelStream()
I remember reading tutorials before the release when there was a subtle difference here; can anyone remind me of what it was? It has been bugging me!
The current implementation has no difference: .stream() creates a pipeline with the parallel field set to false, then .parallel() simply sets this field to true and returns the same object. When you use .parallelStream(), it creates the pipeline with the parallel field set to true in the constructor. So both versions are the same. Any subsequent calls to .parallel() or .sequential() do likewise: they set the stream mode flag to true or false and return the same object.
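You can see the equivalence directly (a minimal check; both lines print true on current JDKs):
import java.util.Arrays;
import java.util.List;

List<Integer> nums = Arrays.asList(1, 2, 3);
System.out.println(nums.stream().parallel().isParallel());  // true
System.out.println(nums.parallelStream().isParallel());     // true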
The early implementation of the Stream API was different. Here's the source code of AbstractPipeline (the parent of all Stream, IntStream, LongStream and DoubleStream implementations) in lambda-dev just before this logic was changed. Setting the mode to parallel() right after the stream was created from a spliterator was relatively cheap: it just extracted the spliterator from the original stream (the depth == 0 branch in spliteratorSupplier()), then created a new stream on top of that spliterator, discarding the original one (at that time there was no close()/onClose(), so there were no close handlers to delegate).
Nevertheless, if your stream source included intermediate steps (consider, for example, the Collections.nCopies implementation, which includes a map step), things were worse: using .stream().parallel() would create a new spliterator with a poor-man's splitting strategy (one that involved buffering). So for such a collection, using .parallelStream() was actually better, as it applied .parallel() internally before the intermediate operation. Currently, even for nCopies() you can use .stream().parallel() and .parallelStream() interchangeably.
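For instance (nCopies() builds its stream from an internal IntStream.range(...).mapToObj(...) step), both spellings now behave the same:
import java.util.Collections;
import java.util.List;

List<String> copies = Collections.nCopies(100_000, "x");

// equivalent today; historically the first form split poorly
int a = copies.stream().parallel().mapToInt(String::length).sum();
int b = copies.parallelStream().mapToInt(String::length).sum();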
Going even further back, you may notice that .parallelStream() was initially called simply .parallel(); it was renamed in this changeset.

libtorrent new piece alerts

I am developing an application that will stream multimedia files over torrents.
The backend needs to serve new pieces to the frontend as they arrive.
I need a mechanism to get notified when new pieces have arrived and been verified. From what I can tell, I could do this using block_finished_alerts: I would keep track of which blocks have arrived for a given piece and read the piece once all of its blocks are in.
This solution seems rather roundabout, and I was wondering if there is a better way.
What you're asking for is called piece_finished_alert. It's posted every time a new piece completes downloading and passes the hash check. To read a piece from disk, you can use torrent_handle::read_piece() (and receive the result in a read_piece_alert).
However, if you want to stream media, you probably want to use torrent_handle::set_piece_deadline() and set the alert_when_available flag so that read_piece_alerts are sent as pieces come in. This invokes libtorrent's built-in streaming feature.
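A rough sketch of the alert loop this implies (libtorrent 1.1-style API; serve_to_frontend is a hypothetical hand-off to your frontend):
#include <libtorrent/session.hpp>
#include <libtorrent/alert_types.hpp>
#include <vector>

void serve_to_frontend(char const* data, int size);  // hypothetical

void poll_alerts(lt::session& ses)
{
    std::vector<lt::alert*> alerts;
    ses.pop_alerts(&alerts);
    for (lt::alert* a : alerts)
    {
        if (auto* pf = lt::alert_cast<lt::piece_finished_alert>(a))
        {
            // piece downloaded and hash-checked: ask the disk thread for its bytes
            pf->handle.read_piece(pf->piece_index);
        }
        else if (auto* rp = lt::alert_cast<lt::read_piece_alert>(a))
        {
            // rp->buffer holds rp->size bytes of piece rp->piece
            serve_to_frontend(rp->buffer.get(), rp->size);
        }
    }
}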

write only stream

I'm using the joliver/EventStore library and trying to find a way to open a stream without reading any events from it.
The reason is that I just want to write some events to the store for a specific stream, without loading all 10k messages in it.
The way you're expected to use the store is that you always do a GetById first. Even if you new up an Aggregate and Save it, you'll see in the CommonDomain EventStoreRepository that it will first correlate it with the existing data.
The key reason a read is needed first is that the infrastructure has to work out how many events have gone before in order to compute the new commit sequence number.
Regarding the example threshold you cite as making you want to optimize this away: if you're really going to have that many events, you'll already be in snapshotting territory, as you'll need an appropriately efficient way of doing things other than a blind write too.
Even if you're not intending to lean on snapshotting, half the benefit of using EventStore is that the facility is built in for when you need it.

NSSound-like framework that works, but doesn't require dealing with a steep learning curve

I've pretty much finished work on a white-noise feature for one of my applications, using NSSound to play a loop of a 10-second, AAC-encoded, pre-recorded white-noise file.
[sound setLoops:YES];
should be all that's required, right?
It works like a charm, but I've noticed that there is an audible pause between the sound file finishing and restarting: a sort of "plop" sound. This isn't present when looping the original sound files, and after an hour or so of trying to figure it out, I've come to the conclusion that NSSound sucks and that the audible pause is an artefact of the synchronisation of the private background thread playing the sound. It seems to be dependent on the main run loop somehow, and this causes the audible gap between the end and the restart of the sound.
I know very little about sound and this is a very minor feature, so I don't want to get into the depths of Core Audio just to play a looping 10-second sound fragment. So I went chasing after a nice alternative, but nothing seems to quite fit:
Core Audio: total overkill, but at least a standard framework
AudioQueue: complicated, with C++ sample code!?
MusicKit/SndKit: also a huge learning curve, based on lots of open source stuff, etc.
I saw that AVFoundation on iOS 4 would be a nice way to play sounds, but that's only scheduled for Mac OS X 10.7.
Is there any easy-to-use way of reliably looping sound on Mac OS X 10.5+?
Is there any sample code for AudioQueue or Core Audio that takes the pain out of using them from an Objective-C application?
Any help would be very much appreciated.
Best regards,
Frank
Use QTKit. Create a QTMovie for the sound, set it to loop, and leave it playing.
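Something along these lines should do it (a sketch; the file name is assumed):
QTMovie *noise = [QTMovie movieWithFile:@"whitenoise.m4a" error:NULL];
[noise setAttribute:[NSNumber numberWithBool:YES]
             forKey:QTMovieLoopsAttribute];
[noise play];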
Just for the sake of the archives.
QTKit also suffers from a gap between the end of one play-through and the start of the next. It seems to be linked to re-initializing the data (perhaps re-reading it from disk?) in some way. It's much more noticeable when using the much smaller but highly compressed m4a format than when playing uncompressed aiff files, but it's still there even so.
The solution I've found is to use Audio Queue Services:
http://developer.apple.com/mac/library/documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AQPlayback/PlayingAudio.html#//apple_ref/doc/uid/TP40005343-CH3-SW1
and
http://developer.apple.com/mac/library/samplecode/AudioQueueTools/Listings/aqplay_cpp.html#//apple_ref/doc/uid/DTS10004380-aqplay_cpp-DontLinkElementID_4
The Audio Queue calls a callback function which prepares and enqueues the next buffer, so when you reach the end of the current file you need to start again from the beginning. This gives completely gapless playback.
There are two gotchas in the sample code in the documentation.
The first is an actual bug (I'll contact DTS about this so they can correct it). Before allocating and priming the audio buffers, the custom structure must be marked as running, otherwise the audio buffers never get primed and nothing is played:
aqData.mIsRunning = 1;
The second gotcha is that the code doesn't run in Cocoa but as a standalone tool, so it connects the audio queue to a new run loop and actually implements the run loop itself as the last step of the program.
Instead of passing CFRunLoopGetCurrent(), just pass NULL, which causes the AudioQueue to run on its own internal thread.
result = AudioQueueNewOutput (
            &aqData.mDataFormat,   // format of the audio to play
            HandleOutputBuffer,    // callback that refills and re-enqueues buffers
            &aqData,               // user data passed to the callback
            NULL,                  // was CFRunLoopGetCurrent(); NULL = queue's own thread
            kCFRunLoopCommonModes, // run loop modes for the callback
            0,                     // reserved, must be 0
            &aqData.mQueue         // receives the new audio queue
);
I hope this can save the poor wretches trying to do this same thing in the future a bit of time :-)
Sadly, there is a lot of pain when developing audio applications on OS X. The learning curve is very steep because the documentation is fairly sparse.
If you don't mind Objective-C++, I've written a framework for this kind of thing: SFBAudioEngine. If you wanted to play a sound with my code, here is how you could do it:
DSPAudioPlayer *player = new DSPAudioPlayer();
player->Enqueue((CFURLRef)audioURL);
player->Play();
Looping is also possible.
