Combining samples quickly in tone.js (or another framework)

I need to programmatically combine a bunch of music tracks in a sequence, one after the other, with some overlap between them, based on some rules.
I was looking at tone.js today, which is great, and I've just about managed to make it work (with players feeding into a recorder), but I realised right at the end that you have to wait for the whole sequence to play out in real time before it can be saved.
I don't want to have to wait an hour to get the file; I need it within a minute at most. Is this possible with tone.js, and if not, is there any other programmatic way to do this?

You should be able to use offline rendering for this. Basically, you call Tone.Offline, then take the resulting audio buffer and save it to a file. You don't need a Recorder node. So something like this:
const audioBuffer = await Tone.Offline(({ transport }) => {
    // Do all your player scheduling along the transport here.
    transport.start(0.5); // Start the transport to trigger all scheduled players
}, 4 /* The length of the resulting file, in seconds */);
To download the audio buffer as a file, you need to read its raw sample data and write it into whatever file format you want to download.
An AudioBuffer-to-WAV writer can be found here:
https://www.russellgood.com/how-to-convert-audiobuffer-to-audio-file
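For completeness, here is a minimal sketch of that last step: encoding the rendered buffer as 16-bit PCM WAV and triggering a browser download. This is not the linked article's code, and the helper audioBufferToWav is illustrative rather than part of Tone.js.
function audioBufferToWav(buffer) {
    const numChannels = buffer.numberOfChannels;
    const sampleRate = buffer.sampleRate;
    const numFrames = buffer.length;
    const bytesPerSample = 2; // 16-bit PCM
    const dataSize = numFrames * numChannels * bytesPerSample;
    const view = new DataView(new ArrayBuffer(44 + dataSize));
    const writeString = (offset, str) => {
        for (let i = 0; i < str.length; i++) view.setUint8(offset + i, str.charCodeAt(i));
    };
    // RIFF/WAVE header
    writeString(0, 'RIFF');
    view.setUint32(4, 36 + dataSize, true);
    writeString(8, 'WAVE');
    writeString(12, 'fmt ');
    view.setUint32(16, 16, true);                                        // fmt chunk size
    view.setUint16(20, 1, true);                                         // PCM
    view.setUint16(22, numChannels, true);
    view.setUint32(24, sampleRate, true);
    view.setUint32(28, sampleRate * numChannels * bytesPerSample, true); // byte rate
    view.setUint16(32, numChannels * bytesPerSample, true);              // block align
    view.setUint16(34, 16, true);                                        // bits per sample
    writeString(36, 'data');
    view.setUint32(40, dataSize, true);
    // Interleave channels and convert float samples to 16-bit integers
    let offset = 44;
    for (let frame = 0; frame < numFrames; frame++) {
        for (let ch = 0; ch < numChannels; ch++) {
            const s = Math.max(-1, Math.min(1, buffer.getChannelData(ch)[frame]));
            view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7fff, true);
            offset += 2;
        }
    }
    return new Blob([view], { type: 'audio/wav' });
}
// Usage: Tone.Offline resolves to a ToneAudioBuffer; .get() returns the underlying AudioBuffer.
const wavBlob = audioBufferToWav(audioBuffer.get());
const link = document.createElement('a');
link.href = URL.createObjectURL(wavBlob);
link.download = 'render.wav';
link.click();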

Related

With Akka Streams, how do I know when a source has completed?

I have an Alpakka Elasticsearch Sink that I'm keeping around between requests. When I get a request, I create a Source from an HTTP request and turn that into a Source of Elasticsearch WriteMessages, then run that with mySource.runWith(theElasticsearchSink).
How do I get notified when the source has completed? Nothing useful seems to be materialized.
Will completion of the source be passed to the sink, meaning I have to create a new one each time?
If yes to the above, would decoupling them somehow with Flow.fromSourceAndSink help?
My goal is to know when the HTTP download has completed (including the vias it goes through) and to be able to reuse the sink.
You can pass around the individual parts of a flow as you wish; you can even pass around the whole executable graph (these are immutable). The run() call materializes the flow, but does not change your graph or its parts.
1) Since you want to know when the HTTP download has passed the flow, why not use the whole graph's Future[Done]? Assuming your call to Elasticsearch is asynchronous, this should be equivalent, since your sink just fires off the call and does not wait.
You could also use Source.queue (https://doc.akka.io/docs/akka/2.5/stream/operators/Source/queue.html) and just add your messages to the queue; this reuses the defined graph, so you can add new messages whenever processing is needed. It also materializes a SourceQueueWithComplete, allowing you to stop the stream.
Apart from this, you can reuse the sink wherever you need it, without waiting for another stream that is using it.
2) As described above: no, you do not need to instantiate a sink multiple times.
Best Regards,
Andi
It turns out that Alpakka's Elasticsearch library also supports flow shapes, so I can have my source go via that and run it via any sink that materializes a future. Sink.foreach works fine here for testing purposes, for example, as in https://github.com/danellis/akka-es-test.
Flow fromFunction { product: Product =>
  WriteMessage.createUpsertMessage(product.id, product.attributes)
} via ElasticsearchFlow.create[Map[String, String]](index, "_doc")
to define es.flow and then
val graph = response.entity.withSizeLimit(MaxFeedSize).dataBytes
  .via(scanner)
  .via(CsvToMap.toMap(Utf8))
  .map(attrs => Product(attrs("id").decodeString(Utf8), attrs.mapValues(_.decodeString(Utf8))))
  .via(es.flow)

val futureDone = graph.runWith(Sink.foreach(println))

futureDone onComplete {
  case Success(_) => println("Done")
  case Failure(e) => println(e)
}

Live-Streaming webcam webm stream (using getUserMedia) by recording chunks with MediaRecorder over WEB API with WebSockets and MediaSource

I'm trying to broadcast a webcam's video to other clients in real time, but I encounter some problems when viewers start watching in the middle.
For this purpose, I get the webcam's stream using getUserMedia (and all its siblings).
Then, on a button click, I start recording the stream and send each segment/chunk/whatever you call it to the broadcaster's WebSocket backend:
var mediaRecorder = new MediaRecorder(stream);
mediaRecorder.start(1000); // emit a chunk every 1000 ms
mediaRecorder.ondataavailable = function (event) {
    uploadVideoSegment(event); // wrap with a blob and call socket.send(...)
};
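For context, uploadVideoSegment can be as simple as forwarding the chunk. A minimal sketch, assuming socket is an already-open WebSocket to the backend (both names are just the question's placeholders):
function uploadVideoSegment(event) {
    // event.data is a Blob containing the recorded WebM chunk
    if (event.data && event.data.size > 0) {
        socket.send(event.data); // arrives on the server as a binary frame (the byte[])
    }
}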
On the server side (Web API, using Microsoft.Web.WebSockets), I receive the byte[] perfectly, as-is.
I then send the byte[] to the viewers currently connected to the broadcaster, read it in the socket's onmessage event using a FileReader, and append the resulting Uint8Array to the SourceBuffer of the MediaSource that is the src of the HTML5 video element.
When a viewer receives the stream from the beginning, that is, the first 126 bytes (which start with the EBML header, 0x1A45DFA3, and end where the first Cluster, 0x1F43B675, begins) followed by the whole bulk of the media, it plays fine.
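For reference, the viewer side described above looks roughly like this. This is only a sketch: the codec string is assumed, appends are not queued while the SourceBuffer is updating, and error handling is omitted.
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource); // video is the HTML5 video element

mediaSource.addEventListener('sourceopen', function () {
    var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8"');
    socket.binaryType = 'arraybuffer'; // avoids the FileReader round-trip
    socket.onmessage = function (msg) {
        // A real implementation must wait while sourceBuffer.updating is true
        sourceBuffer.appendBuffer(new Uint8Array(msg.data));
    };
});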
The problem occurs when a new viewer joins in the middle and starts from the second chunk or later.
I've been researching and experimenting with a few approaches. I understand that the header is essential (http://www.slideshare.net/mganeko/media-recorder-and-webm) and that keyframes play a role, but I got confused very quickly.
So far, I have tried to write my own simple WebM parser in C# (based on a node.js project on GitHub: https://github.com/mganeko/wmls). I split the header off the first chunk, cached it, and tried to send it along with each later chunk. Of course it didn't work.
I think the MediaRecorder may be splitting a Cluster in the middle when the ondataavailable event fires (I've noticed that the start of the second chunk doesn't begin with a Cluster header).
At this point I'm stuck, not knowing how to use the parser to make it work.
Then I read about using FFmpeg to convert the WebM stream so that every frame is also a keyframe: Encoding FFMPEG to MPEG-DASH – or WebM with Keyframe Clusters – for MediaSource API (in Chris Nolet's answer).
I tried to use FFMpegConverter (for .Net) using:
var conv = new FFMpegConverter();
var outputStream = new MemoryStream();
var liveMedia = conv.ConvertLiveMedia("webm", outputStream, "webm",
    new ConvertSettings { VideoCodec = "vp8", CustomOutputArgs = "-g 1" });
liveMedia.Start();
liveMedia.Write(vs.RawByteArr, 0, vs.RawByteArr.Length); // vs.RawByteArr is the byte[] I got from the MediaRecorder
liveMedia.Stop();

byte[] buf = new byte[outputStream.Length];
outputStream.Position = 0;
outputStream.Read(buf, 0, (int)outputStream.Length);
I'm not familiar with FFmpeg, so I'm probably not passing the parameters correctly, although that's what I saw in that answer (it's only described very briefly there).
Of course, I ran into plenty of problems here:
When using WebSockets, running the FFMpegConverter simply forced the WebSocket channel to close (I'd be glad if someone could explain why).
I didn't give up: I rewrote everything without WebSockets, using HttpGet (for fetching segments from the server) and HttpPost (with multipart blobs and everything that goes with them for posting the recorded chunks), and tried to use the FFMpegConverter as above.
For the first segment it worked, but it output a byte[] half the length of the original (I'd be glad if someone could explain that as well), and for the other chunks it threw an exception (every time, not just once) saying the pipe has ended.
I'm getting lost.
Please help me, anybody. The main four questions are:
How can I get the chunks that follow the MediaRecorder's first chunk to play?
(At the moment, I just get the SourceBuffer's close/end events fired and the SourceBuffer detached from its parent MediaSource object, causing an exception like "sourceBuffer has been removed from its parent", because the byte[] passed to it is not valid. Maybe I'm not using the WebM parser I wrote correctly to detect the important parts of the second chunk, which, by the way, doesn't start with a Cluster; that's why I wrote that the MediaRecorder seems to be cutting the Cluster in the middle.)
Why does FFmpeg cause the WebSocket to be closed?
Am I using FFMpegConverter.ConvertLiveMedia with the correct parameters to get a new WebM segment that contains all the information needed to be a standalone chunk, not dependent on the previous chunks (as Chris Nolet said in his answer in the SO link above)?
Why does the FFMpegConverter throw "the pipe ended" exception?
Any help will be extremely highly appreciated.

QSound play sounds one after another

I'm working on a program that receives an event every 200 ms, and when the sound currently playing finishes, I want to play a sound that depends on the last event received.
Unfortunately the isFinished() function doesn't work on Windows for unlooped sounds.
So I'm trying to find a way to wait until a sound has finished playing before playing another one based on the last event (like a LIFO with only one element).
I managed to do this:
QSound *sound[5];
int select, lastSelect;

if (sound[lastSelect]->loopsRemaining() >= 1) {
    sound[lastSelect]->stop();
} else {
    sound[select]->setLoops(2);
    sound[select]->play();
    lastSelect = select;
}
But it's queuing the sounds and that's not what I want.
Otherwise I can do it by setting the number of loops to 2, but then it plays the sound twice before playing the next one.
Do you have any idea how to do this?

How to insert a batch of records into Redis

In a Twitter-like application, one of the things they do is that when someone posts a tweet, they iterate over all followers and create a copy of the tweet in each follower's timeline. I need something similar. What is the best way to insert a tweet ID into the lists of, say, 10/100/1000 followers, assuming I have a list of follower IDs?
I am doing it within Azure WebJobs using Azure Redis. A WebJob is created automatically for every tweet received in the queue, so I may have around 16 simultaneous jobs running at the same time, each going through followers and inserting tweets. If 99% of the inserts succeed, they should not stop because one or a few have failed; I need to continue but log the failures.
Question: Should I use CreateBatch like below? If I need to retrieve the latest tweets first, in reverse chronological order, is the code below fine? Is it performant?
var tasks = new List<Task>();
var batch = _cache.CreateBatch();
//loop start
tasks.Add(batch.ListRightPushAsync("follower_id", "tweet_id"));
//loop end
batch.Execute();
await Task.WhenAll(tasks.ToArray());
a) But how do I catch it if something fails? With try/catch?
b) How do I check, within a batch, the total number of items in each list and pop one off when it reaches a certain count? I want to do a LeftPop if the list has more than 800 items. I'm not sure how to do all of that inside the batch.
Please point me to a sample or let me have a snippet here. Struggling to find a good way. Thank you so much.
UPDATE
Does this look right based on @marc's comments?
var tasks = new List<Task>();
followers.ForEach(f =>
{
    var key = f.FollowerId;
    var task = _cache.ListRightPushAsync(key, value);
    task.ContinueWith(t =>
    {
        if (t.Result > 800) _cache.ListLeftPopAsync(key).Wait();
    });
    tasks.Add(task);
});
Task.WaitAll(tasks.ToArray());
CreateBatch probably doesn't do what you think it does. What it does is defer a set of operations and ensure they get sent contiguously relative to a single connection - there are some occasions this is useful, but not all that common - I'd probably just send them individually if it was me. There is also CreateTransaction (MULTI/EXEC), but I don't think that would be a good choice here.
That depends on whether you care about the data you're popping. If not: I'd send a LTRIM, [L|R]PUSH pair - to trim the list to (max-1) before adding. Another option would be Lua, but it seems overkill. If you care about the old data, you'll need to do a range query too.
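To illustrate the trim-then-push pair concretely, here is a small sketch using the Node.js ioredis client (purely for illustration; the question uses StackExchange.Redis, and the 800-item cap is the question's value):
const Redis = require('ioredis');
const redis = new Redis();

// Cap each follower's list at `max` entries: trim to the newest (max - 1)
// elements, then append the new tweet id.
async function pushCapped(key, tweetId, max = 800) {
    await redis.ltrim(key, -(max - 1), -1); // negative indexes count from the tail
    await redis.rpush(key, tweetId);
}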

MATLAB event and infinite sleeping or checking loop

I need to perform data analysis on files in a directory as they come in.
I'd like to know whether it is better
to implement an event listener on the directory and start the analysis process when it fires, then have the program sleep forever: while(true), sleep(1e10), end
or to have a loop that polls for changes and reacts to them.
I personally prefer the listener approach, since it can start the analysis twice when two new files arrive at nearly the same time and produce two separate events, whereas the polling solution might handle only the first one and discover the second file later.
Additional idea for option 1: hiding the MATLAB GUI by calling frames = java.awt.Frame.getFrames and setting frames(index).setVisible(0) on the index matching the com.mathworks.mde.desk.MLMainFrame frame. (This idea is taken from Yair Altman.)
Are there other ways to realize such things?
In this case (if you are using Windows), the best way is to use the power of .NET:
fileObj = System.IO.FileSystemWatcher('c:\work\temp');
fileObj.Filter = '*.txt';
fileObj.EnableRaisingEvents = true;
addlistener(fileObj, 'Changed', @eventhandlerChanged);
There are different event types, you can use the same callback for them, or different ones:
addlistener(fileObj, 'Changed', @eventhandlerChanged);
addlistener(fileObj, 'Deleted', @eventhandlerChanged);
addlistener(fileObj, 'Created', @eventhandlerChanged);
addlistener(fileObj, 'Renamed', @eventhandlerChanged);
Where eventhandlerChanged is your callback function.
function eventhandlerChanged(source, arg)
    disp('TXT file changed')
end
There is no need to use sleep or polling. If your program is UI-based, then there is nothing else to do: when the user closes the figure, the program ends. The event callbacks are executed exactly like button clicks. If your program is script-like, you can use an infinite loop.
More info here: http://www.mathworks.com/help/matlab/matlab_external/working-with-net-events-in-matlab.html
