I am trying to play an audio file whenever my chatbot gives a response. I have created an API where my audio file is saved, and I call it in an AJAX call on each bot response. It works fine when a single bot response comes in, but the problem arises with multiple responses: the audio overlaps, i.e. before the first audio finishes, the second response arrives and starts playing too, giving a mix of both audios. I want to separate these audios and play them sequentially, one after another.
React code:
export default class App extends React.Component {
  state = {
    audio: new Audio()
  };

  // Called for each incoming reply (method name is illustrative;
  // the original snippet did not show the enclosing method).
  handleReply(replyType) {
    if (replyType.username === "bot") {
      axios.get("https://alpha.com/call_tts/?message=" + replyType.message.text)
        .then(res => {
          console.log("ajax response success");
          this.setState({
            audio: new Audio("https://alpha.com/media/final_x.wav")
          });
          this.state.audio.play();
        });
    }
  }
}
It doesn't describe the problem statement completely, but looking at the code, the behaviour is expected.
You have to add code that queues the audio instead of playing it immediately.
So, create a queue and store each loaded audio in it. Instead of playing audio immediately, check whether any audio is already playing; if so, wait for it to finish (you need to add an "ended" listener). Once it finishes, pop the next queued item and play it.
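A minimal sketch of that idea (names like enqueueAudio and audioQueue are illustrative, not from the original code):
// Illustrative sketch: a FIFO queue of audio URLs. playNext() is only
// started when nothing is playing, and the "ended" event chains playback.
const audioQueue = [];
let isPlaying = false;

function enqueueAudio(url) {
  audioQueue.push(url);
  if (!isPlaying) {
    playNext();
  }
}

function playNext() {
  if (audioQueue.length === 0) {
    isPlaying = false;
    return;
  }
  isPlaying = true;
  const audio = new Audio(audioQueue.shift());
  audio.addEventListener('ended', playNext);
  audio.play();
}
In the .then handler above, you would then call enqueueAudio("https://alpha.com/media/final_x.wav") instead of creating and playing the Audio object directly.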
More code and details on the problem statement would help.
This question is about running a non-blocking, high-performance activity in NativeScript that is needed for the simple task of reading and saving raw audio from the microphone by directly accessing the hardware through the native Android API. I believe I have brought the NativeScript framework to the edge of its capabilities, and I need experts' help.
I'm building a WAV audio recorder in NativeScript Android. The native implementation is described here (relevant code below).
In short, this can be done by reading the audio stream from an android.media.AudioRecord buffer, and then writing the buffer to a file in a separate thread, as described:
Native Android implementation
startRecording() is triggered by a button press, and starts a new Thread that runs writeAudioDataToFile():
private void startRecording() {
    // ... init recorder
    recorder.startRecording();
    isRecording = true;
    recordingThread = new Thread(new Runnable() {
        @Override
        public void run() {
            writeAudioDataToFile();
        }
    }, "AudioRecorder Thread");
    recordingThread.start();
}
Recording is stopped by setting isRecording to false (stopRecording() is triggered by a button press):
private void stopRecording() {
    isRecording = false;
    recorder.stop();
    recorder.release();
    recordingThread = null;
}
Reading and saving the buffer stops once isRecording is false:
private void writeAudioDataToFile() {
    // ... init file and buffer
    ByteArrayOutputStream recData = new ByteArrayOutputStream();
    DataOutputStream dos = new DataOutputStream(recData);
    int read = 0;
    while (isRecording) {
        // data is the short[] buffer the recorder reads into
        read = recorder.read(data, 0, bufferSize);
        for (int i = 0; i < read; i++) {
            dos.writeShort(data[i]);
        }
    }
}
My NativeScript JavaScript implementation:
I wrote NativeScript TypeScript code that does the same as the native Android code above. Problem #1 I faced was that I can't run while (isRecording): the JavaScript thread would be busy running inside the while loop and would never be able to catch the button click that runs stopRecording().
I tried to solve problem #1 by using setInterval for asynchronous execution, like this:
startRecording() is triggered by a button press, and sets a time interval of 10ms that executes writeAudioDataToFile():
startRecording() {
    this.audioRecord.startRecording();
    this.audioBufferSavingTimer = setInterval(() => this.writeAudioDataToFile(), 10);
}
writeAudioDataToFile() callbacks are queued up every 10ms:
writeAudioDataToFile() {
    let bufferReadResult = this.audioRecord.read(
        this.buffer,
        0,
        this.minBufferSize / 4
    );
    for (let i = 0; i < bufferReadResult; i++) {
        this.dos.writeShort(this.buffer[i]);
    }
}
Recording is stopped by clearing the time interval (stopRecording() is triggered by button press):
stopRecording() {
    clearInterval(this.audioBufferSavingTimer);
    this.audioRecord.stop();
    this.audioRecord.release();
}
Problem #2: While this works well, in many cases it makes the UI freeze for 1-10 seconds (for example after clicking a button to stop recording).
I tried to change the time interval that executes writeAudioDataToFile() from 10ms down to 0ms and up to 1000ms (while using a very large buffer), but then the UI freezes were longer, and I experienced loss in the saved data (buffered data that was not written to the file).
I tried to offload this operation to a separate thread by using a NativeScript worker thread as described here, where startRecording() and stopRecording() are triggered by messages sent to the worker like this:
global.onmessage = function(msg) {
    if (msg.data === 'startRecording') {
        startRecording();
    } else if (msg.data === 'stopRecording') {
        stopRecording();
    }
}
This solved the UI problem, but created problem #3: the recorder stop was not executed on time (i.e. recording stopped 10 to 50 seconds after the 'stopRecording' msg.data was received by the worker thread). I tried different time intervals in the setInterval inside the worker thread (0ms to 1000ms), but that didn't solve the problem and even made stopRecording() execute with greater delays.
Does anyone have an idea of how to perform such a non-blocking high-performance recording activity in nativescript/javascript?
Is there a better approach to solve problem #1 (javascript asynchronous execution) that I described above?
Thanks
I would keep the complete Java implementation in actual Java. You can do this by creating a Java file in your plugin folder:
platforms/android/java, so maybe something like:
platforms/android/java/org/nativescript/AudioRecord.java
In there you can do everything threaded, so you won't be troubled by the UI being blocked. You can call the Java methods directly from NativeScript for starting and stopping the recording. When you build your project, the Java file will automatically be compiled and included.
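For illustration, a rough sketch of what the NativeScript side might look like, assuming a hypothetical org.nativescript.AudioRecord class as described above, exposing startRecording(path) and stopRecording() methods that run the read/write loop on their own Java thread:
// Sketch: org.nativescript.AudioRecord is the hypothetical Java class
// described above; its methods run the recording loop on a Java thread,
// so these calls return immediately and never block the UI thread.
const recorder = new org.nativescript.AudioRecord();

export function onStartTap() {
    recorder.startRecording("/sdcard/recording.wav");
}

export function onStopTap() {
    recorder.stopRecording();
}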
You can generate typings from your Java class by grabbing classes.jar from the generated .aar file of your plugin ({plugin_name}.aar) and generating type declarations for it: https://docs.nativescript.org/core-concepts/android-runtime/metadata/generating-typescript-declarations
This way you have all the method/class/type information available in your editor.
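If it helps, the generator invocation looks roughly like this (flag name from memory of the android-dts-generator tooling; double-check against the linked docs):
java -jar dts-generator.jar -input classes.jar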
I'm trying to create a custom CAF receiver with my own HTML5 video element and supporting library. Since I'm using my own <video> element, I'm setting up the PlayerManager instance as follows:
// My custom HTML5 video player
const player = new Player(document.querySelector('#my-video'))

const context = cast.framework.CastReceiverContext.getInstance()
const playerManager = context.getPlayerManager()
playerManager.setMediaElement(document.querySelector('#my-video'))

// Now, I need to override some of the playerManager methods such as play/pause, etc.
const overrides = {
  getCurrentTimeSec () {
    return player.currentTime
  },
  getPlayerState () {
    const PlayerState = cast.framework.messages.PlayerState
    if (!player.ready || !player.source) {
      return PlayerState.IDLE
    }
    return player.paused ? PlayerState.PAUSED : PlayerState.PLAYING
  },
  getDurationSec () {
    return player.duration
  },
  pause: player.pause,
  play: player.play,
  seek: player.seek,
  load: (loadRequestData) => {
    return new Promise((resolve, reject) => {
      // Parse loadRequestData and load the media accordingly
      // ...
    })
  }
}

Object.assign(playerManager, overrides)
context.start();
Since I need custom handling of the incoming 'load' request in cases where the incoming video is protected, I need to set up a custom load handler. My problem seems to be that playerManager.load is never called.
Note that all of this works great for normal unprotected HTML5 videos since the incoming loadRequestData can be directly understood and used by PlayerManager. It is in cases where I need to do some extra processing that things begin to fail.
I have already tried running my business logic as described by Google:
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.LOAD,
  request => {
    // ...
  })
The problem with this approach is that even though I'm able to ask my Player library to load the video successfully, the interceptor needs to either return null or the modified request.
When I don't return anything, the playerManager thinks there is no request to load media.
If I do return a 'modified request', then the default media-loading logic kicks in, which inevitably fails because the URL is 'malformed'.
In either case, the playerManager instance reports that it is not ready to receive playback commands.
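For reference, the modified-request variant looks roughly like this (resolveProtectedUrl is a placeholder for my library's processing; the point is that whatever comes back must be a request the default loader can handle):
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.LOAD,
  loadRequestData => {
    // resolveProtectedUrl() is a hypothetical helper that exchanges the
    // protected URL for a directly playable one.
    return resolveProtectedUrl(loadRequestData.media.contentId)
      .then(playableUrl => {
        loadRequestData.media.contentUrl = playableUrl
        return loadRequestData // default loading logic takes over from here
      })
  })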
What is the right approach for setting up a CAF receiver with a custom <video> element?
I'm looking for some guidance on the correct way to set up a WebSocket connection with RxJS 5. I am connecting to a WebSocket that uses JSON-RPC 2.0. I want to be able to execute a function which sends a request to the WS and returns an Observable of the associated response from the server.
I set up my initial WebSocketSubject like so:
const ws = Rx.Observable.webSocket("<URL>")
From this observable, I have been able to send requests using ws.next(myRequest), and I have been able to see responses coming back through the ws observable.
I have struggled to create functions that filter the ws responses down to the matching response and then complete. They seem to complete the source subject, stopping all future ws requests.
My intended output is something like:
function makeRequest(msg) {
    // 1. send the message
    // 2. return an Observable of the response from the message, and complete
}
I tried the following:
function makeRequest(msg) {
    const id = msg.id;
    ws.next(msg);
    return ws
        .filter(f => f.id === id)
        .take(1);
}
When I do that however, only the first request will work. Subsequent requests won't work, I believe because I am completing with take(1)?
Any thoughts on the appropriate architecture for this type of situation?
There appears to be either a bug or a deliberate design decision to close the WebSocket on unsubscribe if there are no further subscribers. If you are interested, here is the relevant source.
Essentially you need to guarantee that there is always a subscriber otherwise the WebSocket will be closed down. You can do this in two ways.
Route A is the more semantic way: essentially, you create a published version of the Observable part of the Subject, over which you have more fine-grained control.
const ws = Rx.Observable.webSocket("<URL>");
const ws$ = ws.publish();

// When ready to start receiving messages
const totem = ws$.connect();

function makeRequest(msg) {
    const { id } = msg;
    ws.next(msg);
    return ws$.first(f => f.id === id);
}

// When finished
totem.unsubscribe();
Route B is to create a token subscription that simply holds the socket open, but depending on the actual life cycle of your application, you would do well to attach it to some sort of closing event to make sure the socket always gets closed down, i.e.:
const ws = Rx.Observable.webSocket("<URL>");
const totem = ws.subscribe();
//Later when closing:
totem.unsubscribe();
As you can see, both approaches are fairly similar, since they both create a subscription. B's primary disadvantage is that you create an empty subscription which will get pumped all the events only to throw them away. Its only advantage is that you can refer to the Subject for both emission and subscription using the same variable, whereas with A you must be careful to use ws$ for subscription.
If you were really so inclined you could refine Route A using the Subject creation function:
const safeWS = Rx.Subject.create(ws, ws$);
The above would allow you to use the same variable, but you would still be responsible for shutting down ws$ and, transitively, the WebSocket when you are done with it.
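With that wrapper in place, makeRequest can be written against the single safeWS reference (a sketch under the same assumptions as Route A):
// Sketch: next() is delegated to ws (the Subject side), while first()
// subscribes to ws$ (the published Observable side).
function makeRequest(msg) {
    const { id } = msg;
    safeWS.next(msg);
    return safeWS.first(f => f.id === id);
}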
I have a XUL overlay Firefox extension, and I need to develop a dummy XUL extension that establishes a connection with the original extension and sends it a set of parameters (a message). In short, I have to trigger my original extension from my dummy extension.
Probably the easiest way to do this is to have the original extension listening for a custom event on the base browser window. The dummy extension can then create and dispatch the event with whatever custom data is desired.
Creating and dispatching the event from the dummy:
function sendDataToMainExtension(data) {
    if (typeof window === "undefined") {
        // If there is no window defined, get the most recent.
        var window = Components.classes["@mozilla.org/appshell/window-mediator;1"]
                               .getService(Components.interfaces.nsIWindowMediator)
                               .getMostRecentWindow("navigator:browser");
    }
    // This assumes that this event is being both sent from
    // and received by privileged (main add-on) code.
    var event = new CustomEvent('MyExtensionName-From-Dummy', { 'detail': data });
    window.dispatchEvent(event);
}
You may need to take the same steps for making sure the data is visible on the receiving end as would be necessary when firing from privileged code to non-privileged code.
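If that applies, the usual tool is Components.utils.cloneInto(); a hedged sketch of dispatching into a less-privileged window (contentWindow stands for whatever content window you are targeting):
// Sketch: clone the event init object into the target window's compartment
// so the unprivileged receiver can read event.detail.
var event = new contentWindow.CustomEvent('MyExtensionName-From-Dummy',
    Components.utils.cloneInto({ detail: data }, contentWindow));
contentWindow.dispatchEvent(event);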
Listening for the event in main:
Components.utils.import("resource://gre/modules/Services.jsm");
const Ci = Components.interfaces;
//Listen for the event on all windows as it is unknown on which one
// the event will be sent.
function loadIntoWindow(myWindow) {
    myWindow.addEventListener("MyExtensionName-From-Dummy",
                              receiveMessageFromDummy, false);
}

function unloadFromWindow(myWindow) {
    myWindow.removeEventListener("MyExtensionName-From-Dummy",
                                 receiveMessageFromDummy, false);
}

function forEachOpenWindow(fn) {
    // Apply a function to all open browser windows
    var windows = Services.wm.getEnumerator("navigator:browser");
    while (windows.hasMoreElements()) {
        fn(windows.getNext().QueryInterface(Ci.nsIDOMWindow));
    }
}

function receiveMessageFromDummy(event) {
    var dataFromDummy = event.detail;
    // Do whatever was desired with the data.
}
var WindowListener = {
    onOpenWindow: function(aWindow) {
        let domWindow = aWindow.QueryInterface(Ci.nsIInterfaceRequestor)
                               .getInterface(Ci.nsIDOMWindowInternal || Ci.nsIDOMWindow);
        function onWindowLoad() {
            domWindow.removeEventListener("load", onWindowLoad);
            if (domWindow.document.documentElement.getAttribute("windowtype")
                    == "navigator:browser") {
                loadIntoWindow(domWindow);
            }
        }
        domWindow.addEventListener("load", onWindowLoad);
    },
    onCloseWindow: function(xulWindow) { }, // Each window has an unload event handler.
    onWindowTitleChange: function(xulWindow, newTitle) { }
};
//Listen for the custom event on all current browser windows.
forEachOpenWindow(loadIntoWindow);
//Listen for the custom event on any new browser window.
Services.wm.addListener(WindowListener);
The data sent should be available as event.detail within the receiveMessageFromDummy() function.
The code above provides one-way communication. Two-way communication is obtained by simply duplicating the code to communicate in the other direction with a different custom event: have the main extension dispatch a custom event called something like MyExtensionName-From-Main, and have the dummy extension listen for that event. The code is exactly the same as above, but with the event name changed and the receiving function called receiveMessageFromMain().
Alternately, you could use Window.postMessage(). Doing so sends a "message" event for which you can listen. However, it leads to complications that are easier to avoid with a custom event (for example, you have to account for the fact that any code, such as some other random extension, could be using "message" events for its own purposes).
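For completeness, a minimal sketch of the postMessage() variant (the source field and the filtering are illustrative; they are exactly the bookkeeping you avoid with a custom event):
// Dummy side: post the data to the window.
window.postMessage({ source: 'MyExtensionName-Dummy', payload: data }, '*');

// Main side: listen for "message" events and filter out everyone else's.
window.addEventListener('message', function(event) {
    if (event.data && event.data.source === 'MyExtensionName-Dummy') {
        receiveMessageFromDummy({ detail: event.data.payload });
    }
}, false);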
Note: The code to loop through windows was originally taken from Converting an old overlay-based Firefox extension into a restartless addon, which that author rewrote as the initial part of How to convert an overlay extension to restartless on MDN. It has been modified multiple times from that code, and may have even earlier versions from other sources.
I'm new to Meteor, and I love it so far. I use vzaar.com as my video hosting platform; they have a Node.js package/API which I added to my Meteor project with meteorhacks:npm. Everything in the API works great, but when I upload a video, I need to fetch the video ID from the API once the upload succeeds.
Problem:
I need to save the video ID returned from the vzaar API after uploading, but since the callback fires in the future, my code does not wait for the result and just gives me undefined. Is it possible to make the Meteor method wait for the response?
Here is my method so far:
Meteor.methods({
  vzaar: function (videopath) {
    api.uploadAndProcessVideo(videopath, function (statusCode, data) {
      console.log("Video ID: " + data.id);
      return data.id; // returns from the callback, not from the method
    }, {
      title: "my video",
      profile: 3
    });
    console.log(videoid); // undefined: the callback above has not run yet
  }
});
And this is what the Meteor.call looks like right now:
Meteor.call("vzaar", "/Uploads/" + fileInfo.name, function (err, message) {
  console.log(message);
});
When I call this method, I immediately get undefined in the browser console and the Meteor console, and after a few seconds, I get the video ID in the Meteor console.
Solution
I finally solved the problem after days of trial and error. I learned about Fibers (here and here) and about the core event loop of Node.js. The problem was that this call answers in the future, so my code always returned undefined because it ran before the API had answered.
I first tried Meteor.wrapAsync, which I thought was going to work, as it is actually based on the Future fiber. But I ended up using the raw NPM module of Future instead. See this working code:
var Future = Npm.require('fibers/future');

Meteor.methods({
  vzaar: function (videopath) {
    var fut = new Future();
    api.uploadAndProcessVideo(videopath, function (statusCode, data) {
      // Return the video id
      fut.return(data.id);
    }, {
      // Video options
      title: "hello world",
      profile: 3
    });
    // The delayed return
    return fut.wait();
  }
});
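For comparison, Meteor.wrapAsync can be made to work too, but only after adapting vzaar's (statusCode, data) callback to the Node-style (error, result) signature that wrapAsync expects; a hedged sketch:
Meteor.methods({
  vzaar: function (videopath) {
    // Adapter: wrapAsync assumes callback(error, result), but
    // api.uploadAndProcessVideo calls back with (statusCode, data).
    var upload = Meteor.wrapAsync(function (path, options, callback) {
      api.uploadAndProcessVideo(path, function (statusCode, data) {
        callback(null, data);
      }, options);
    });
    var data = upload(videopath, { title: "hello world", profile: 3 });
    return data.id;
  }
});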
Remember to install the npm module correctly with meteorhacks:npm first.
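With meteorhacks:npm, the dependency goes in the project's packages.json; the package name and version below are placeholders for whatever vzaar's Node client is actually called:
{
  "vzaar": "x.y.z"
}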
I learned how to use the Future fiber in this case via this Stack Overflow answer.
I hope this can be useful for others, as it was really easy to implement.