I'm trying to capture from a webcam source to a file using the Source Reader in asynchronous mode. It goes well for a few seconds and video is recorded at 30 fps, but then the FPS abruptly drops to 0.
I've tried both enabling and disabling MF_SINK_WRITER_DISABLE_THROTTLING on the SinkWriter. For some reason, the writer seems to be blocking input, and the OnReadSample method stops being called.
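For reference, this is roughly how that attribute is applied when creating the sink writer (a sketch; the output file name is illustrative and error handling is abbreviated):

IMFAttributes *pAttributes = NULL;
HRESULT hr = MFCreateAttributes(&pAttributes, 1);
if (SUCCEEDED(hr))
{
    // TRUE disables throttling; FALSE (the default) lets the writer block
    // WriteSample when one stream gets too far ahead of the others.
    hr = pAttributes->SetUINT32(MF_SINK_WRITER_DISABLE_THROTTLING, TRUE);
}
IMFSinkWriter *pSinkWriter = NULL;
if (SUCCEEDED(hr))
{
    hr = MFCreateSinkWriterFromURL(L"capture.mp4", NULL, pAttributes, &pSinkWriter);
}
if (pAttributes)
{
    pAttributes->Release();
}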
Here's the OnReadSample method:
EnterCriticalSection(&m_critsec);
if (SUCCEEDED(hrStatus))
{
    if (pSample)
    {
        if (audio)
        {
            WriteLogFile(L"audio: # %I64d\n", llTimestamp);
        }
        else
        {
            WriteLogFile(L"video: # %I64d\n", llTimestamp);
            if (SUCCEEDED(hrStatus))
            {
                hrStatus = pSample->SetSampleTime(llTimestamp);
            }
            if (SUCCEEDED(hrStatus))
            {
                hrStatus = pSample->SetSampleDuration(myVideoRecorder.CaptureParams.VIDEO_FRAME_DURATION);
            }
            // Send the sample to the Sink Writer.
            if (pWriter && SUCCEEDED(hrStatus))
            {
                hrStatus = pWriter->WriteSample(streamIndex, pSample);
            }
        }
    }
}
LeaveCriticalSection(&m_critsec);
return hrStatus;
EDIT:
It turns out I had configured the sink writer to accept both audio and video but was only supplying video samples. Supplying audio samples as well fixed it.
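In other words, the audio branch of OnReadSample above should also forward its samples to the sink writer, roughly like this (a sketch; audioStreamIndex is a hypothetical variable holding the stream index returned by AddStream for the audio stream):

if (audio)
{
    WriteLogFile(L"audio: # %I64d\n", llTimestamp);
    if (SUCCEEDED(hrStatus))
    {
        hrStatus = pSample->SetSampleTime(llTimestamp);
    }
    // Forward the audio sample too; leaving a configured stream without
    // samples is what made the sink writer stall here.
    if (pWriter && SUCCEEDED(hrStatus))
    {
        hrStatus = pWriter->WriteSample(audioStreamIndex, pSample);
    }
}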
This question is about running a non-blocking, high-performance activity in NativeScript, needed for the simple task of reading and saving raw audio from the microphone by directly accessing the hardware through the native Android API. I believe I have pushed the NativeScript framework to the edge of its capabilities, and I need experts' help.
I'm building a WAV audio recorder in NativeScript Android. The native implementation is described here (relevant code below).
In short, this can be done by reading the audio stream from an android.media.AudioRecord buffer, and then writing the buffer to a file in a separate thread, as described:
Native Android implementation
startRecording() is triggered by a button press, and starts a new Thread that runs writeAudioDataToFile():
private void startRecording() {
    // ... init Recorder
    recorder.startRecording();
    isRecording = true;
    recordingThread = new Thread(new Runnable() {
        @Override
        public void run() {
            writeAudioDataToFile();
        }
    }, "AudioRecorder Thread");
    recordingThread.start();
}
Recording is stopped by setting isRecording to false (stopRecording() is triggered by a button press):
private void stopRecording() {
    isRecording = false;
    recorder.stop();
    recorder.release();
    recordingThread = null;
}
Reading and saving the buffer stops once isRecording is false:
private void writeAudioDataToFile() {
    // ... init file and buffer
    ByteArrayOutputStream recData = new ByteArrayOutputStream();
    DataOutputStream dos = new DataOutputStream(recData);
    int read = 0;
    while (isRecording) {
        read = recorder.read(buffer, 0, bufferSize);
        for (int i = 0; i < read; i++) {
            dos.writeShort(buffer[i]);
        }
    }
}
My NativeScript JavaScript implementation:
I wrote NativeScript TypeScript code that does the same as the native Android code above. Problem #1 I faced was that I can't run while(isRecording), because the JavaScript thread would be busy running inside the while loop and would never be able to catch the button click that runs stopRecording().
I tried to solve problem #1 by using setInterval for asynchronous execution, like this:
startRecording() is triggered by a button press, and sets a time interval of 10ms that executes writeAudioDataToFile():
startRecording() {
    this.audioRecord.startRecording();
    this.audioBufferSavingTimer = setInterval(() => this.writeAudioDataToFile(), 10);
}
writeAudioDataToFile() callbacks are queued up every 10ms:
writeAudioDataToFile() {
    let bufferReadResult = this.audioRecord.read(
        this.buffer,
        0,
        this.minBufferSize / 4
    );
    for (let i = 0; i < bufferReadResult; i++) {
        this.dos.writeShort(this.buffer[i]);
    }
}
Recording is stopped by clearing the time interval (stopRecording() is triggered by button press):
stopRecording() {
    clearInterval(this.audioBufferSavingTimer);
    this.audioRecord.stop();
    this.audioRecord.release();
}
Problem #2: While this works well, in many cases it makes the UI freeze for 1-10 seconds (for example, after clicking the button to stop recording).
I tried changing the interval that executes writeAudioDataToFile() from 10ms down to 0ms and up to 1000ms (while using a very big buffer), but then the UI freezes were longer, and I experienced loss in the saved data (buffered data that was not saved to the file).
I tried to offload this operation to a separate thread by using a NativeScript worker thread as described here, where startRecording() and stopRecording() are called by messages sent to the thread like this:
global.onmessage = function(msg) {
    if (msg.data === 'startRecording') {
        startRecording();
    } else if (msg.data === 'stopRecording') {
        stopRecording();
    }
}
This solved the UI problem, but created problem #3: the recorder stop was not executed on time (i.e. recording stops 10 to 50 seconds after the 'stopRecording' msg.data is received by the worker thread). I tried different time intervals in the setInterval inside the worker thread (0ms to 1000ms), but that didn't solve the problem and even made stopRecording() execute with greater delays.
Does anyone have an idea of how to perform such a non-blocking, high-performance recording activity in NativeScript/JavaScript?
Is there a better approach to solving problem #1 (JavaScript asynchronous execution) that I described above?
Thanks
I would keep the complete Java implementation in actual Java. You can do this by creating a Java file in your plugin folder:
platforms/android/java, so maybe something like:
platforms/android/java/org/nativescript/AudioRecord.java
In there you can do everything threaded, so you won't be troubled by the UI being blocked. You can call the Java methods directly from NativeScript for starting and stopping the recording. When you build your project, the Java file will automatically be compiled and included.
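For illustration, here is a minimal sketch of what such a class could look like. The class name PcmRecorder, the file-path parameter, and the 44.1 kHz mono 16-bit settings are my assumptions, not anything NativeScript prescribes (and the class is deliberately not named AudioRecord, to avoid clashing with android.media.AudioRecord):

package org.nativescript;

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Requires the RECORD_AUDIO permission.
public class PcmRecorder {
    private static final int SAMPLE_RATE = 44100;
    private final int bufferSize = AudioRecord.getMinBufferSize(
            SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);

    private volatile boolean isRecording = false;
    private AudioRecord recorder;
    private Thread recordingThread;

    public void startRecording(final String filePath) {
        recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
        recorder.startRecording();
        isRecording = true;
        recordingThread = new Thread(new Runnable() {
            @Override
            public void run() {
                writeAudioDataToFile(filePath);
            }
        }, "AudioRecorder Thread");
        recordingThread.start();
    }

    public void stopRecording() {
        isRecording = false;
        try {
            recordingThread.join(); // let the writer loop drain before releasing
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        recorder.stop();
        recorder.release();
        recordingThread = null;
    }

    private void writeAudioDataToFile(String filePath) {
        short[] buffer = new short[bufferSize / 2];
        try (DataOutputStream dos = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(filePath)))) {
            while (isRecording) {
                int read = recorder.read(buffer, 0, buffer.length);
                for (int i = 0; i < read; i++) {
                    dos.writeShort(buffer[i]);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Since starting and stopping run entirely on Java threads, the NativeScript UI thread only makes two quick method calls and is never blocked by the recording loop.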
You can generate typings from your Java class by grabbing classes.jar from the generated .aar file of your plugin ({plugin_name}.aar) and generating type declarations for it: https://docs.nativescript.org/core-concepts/android-runtime/metadata/generating-typescript-declarations
This way you have all the method/class/type information available in your editor.
I am trying to play an audio file whenever my chatbot gives a response. I have created an API where my audio file is saved, and I call it in an AJAX request on each bot response. It works fine when a single bot response comes, but the problem arises with multiple responses: the audio gets overlapped, meaning the first audio has not finished when the second response arrives and is played as well, giving a mix of both audios. I want to separate the audio and play the clips sequentially, one after another.
React code:
import React from "react";
import axios from "axios";

export default class App extends React.Component {
    state = {
        audio: new Audio()
    };

    // The enclosing method was omitted in the original snippet;
    // the name handleReply is assumed here.
    handleReply(replyType) {
        if (replyType.username === "bot") {
            axios.get("https://alpha.com/call_tts/?message=" + replyType.message.text)
                .then(res => {
                    const posts = res;
                    console.log("ajax response success");
                    this.setState({
                        audio: new Audio("https://alpha.com/media/final_x.wav")
                    });
                    this.state.audio.play();
                });
        }
    }
}
It doesn't describe the problem statement completely, but looking at the code, the behaviour is expected.
You have to add code to queue the audio instead of playing it immediately.
So, create a queue and store the loaded audio in it. Instead of playing a clip immediately, check if any audio is already playing; if so, wait for it to finish (you need to add listeners). Once it finishes, pop the next item from the queue and play it, as sketched below.
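For example, something along these lines (a sketch; AudioQueue and enqueue are hypothetical names, and the .wav URL is the one from the question):

// Plays one clip at a time; the 'ended' event triggers the next one.
class AudioQueue {
    constructor() {
        this.urls = [];
        this.playing = false;
    }
    enqueue(url) {
        this.urls.push(url);
        if (!this.playing) {
            this.playNext();
        }
    }
    playNext() {
        const url = this.urls.shift();
        if (!url) {
            this.playing = false;
            return;
        }
        this.playing = true;
        const audio = new Audio(url);
        audio.addEventListener('ended', () => this.playNext());
        audio.play();
    }
}

// In the bot-response handler, enqueue instead of playing directly:
// this.audioQueue.enqueue("https://alpha.com/media/final_x.wav");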
More code and details on the problem statement would help.
I'm trying to do a simple thing: when a button is pressed, I load a video using the Processing video library. Each button is associated with a different video, for example button 1 with video 1, button 2 with video 2, and so on. The code works, but every time I load a video (even one I have already loaded), overwriting the global variable, the CPU usage grows, reaching 40% after the third load; after 7 videos the CPU usage is near 100%. An extract of the code:
import processing.video.*;

Movie movie;

void setup() {
    size(1280, 720, P3D);
    background(0);
}

void draw() {
    //image(movie, 0, 0, width, height);
    if (but1_1 == 1) {
        println("video 1");
        movie = new Movie(this, "1.mp4");
        movie.loop();
        movie.volume(0);
    }
    if (but1_2 == 1) {
        println("video 2");
        movie = new Movie(this, "2.mp4");
        movie.loop();
        movie.volume(0);
    }
    if (but1_3 == 1) {
        println("video 3");
        movie = new Movie(this, "3.mp4");
        movie.loop();
        movie.volume(0);
    }
}
As you can see, there should be no reason for the CPU usage to grow: the movie object is simply overwritten every time a new video (or the same one) is loaded. Any suggestions?
You are loading the movies with loop(), which means they don't stop. So the more buttons you press, the more videos are processed at the same time. On every button press, you should stop the playback of the old movie first, before you start a new one.
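For example, something along these lines (a sketch; playVideo() is a hypothetical helper, which should be called once per button press, e.g. from an event handler, rather than on every draw() frame):

// Stop the previous movie before replacing the global reference,
// so only one video is being decoded at any time.
void playVideo(String filename) {
    if (movie != null) {
        movie.stop();
    }
    movie = new Movie(this, filename);
    movie.loop();
    movie.volume(0);
}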
I am playing several sounds, each of which dims the background audio. When they are done, I restore the background audio. What happens is: every time one of the audio files plays, the background dims (as desired). When the last audio finishes playing, the background audio is restored (also desired). However, after about 5 seconds, it throws this error and dims the audio again (not what I want, since all sounds are now finished).
ERROR: [0x19c9af310] AVAudioSession.mm:646: -[AVAudioSession
setActive:withOptions:error:]: Deactivating an audio session that has
running I/O. All I/O should be stopped or paused prior to deactivating
the audio session.
To my knowledge I am stopping and removing all audio.
There is 1 post I found here:
iOS8 AVAudioSession setActive error
But the solution does not work for me. Here is my audio player class. If you can advise what might be up I'd appreciate it.
import Foundation
import AVFoundation

private var _singleton: O_Audio? = O_Audio()
private var _avAudioPlayers: Array<AVAudioPlayer> = []

//Manages dimming and resuming background audio
class O_Audio: NSObject, AVAudioPlayerDelegate
{
    class var SINGLETON: O_Audio
    {
        if (_singleton == nil)
        {
            _singleton = O_Audio()
        }
        return _singleton!
    }

    class func dimBackgroundAudio()
    {
        AVAudioSession.sharedInstance().setActive(true, error: nil)
    }

    class func restoreBackgroundAudio()
    {
        AVAudioSession.sharedInstance().setActive(false, error: nil)
    }

    class func playSound(path: String)
    {
        var sound = NSURL(fileURLWithPath: NSBundle.mainBundle().pathForResource(path, ofType: "m4a")!)
        var audioPlayer = AVAudioPlayer(contentsOfURL: sound, error: nil)
        _avAudioPlayers.append(audioPlayer)
        audioPlayer.delegate = O_Audio.SINGLETON
        audioPlayer.prepareToPlay()
        audioPlayer.play()
    }

    func audioPlayerDidFinishPlaying(player: AVAudioPlayer!, successfully flag: Bool)
    {
        //this was from the 1 stack post I found but these
        //two lines do not solve my problem
        player.stop()
        player.prepareToPlay()

        var index: Int!
        for i in 0..._avAudioPlayers.count - 1
        {
            if (_avAudioPlayers[i] == player)
            {
                index = i
                break
            }
        }
        _avAudioPlayers.removeAtIndex(index)

        if (_avAudioPlayers.count == 0)
        {
            O_Audio.restoreBackgroundAudio()
        }
    }

    func audioPlayerDecodeErrorDidOccur(player: AVAudioPlayer!, error: NSError!)
    {
        println("error")
    }
}
Important Update
So I've found what I think is a rough cause of the issue. Our app is built on Cordova, so we have a lot of Safari (browser) calls. This bug occurs whenever we play a video (which is played via Safari). It seems like Safari is somehow dimming the audio and keeping a running I/O thread.
The issue is the fact that an MPMoviePlayerController object is playing. In fact, any AVPlayerItem causes this. If you play a movie and try to dim audio, you will get this error.
At present, any movie played on iOS has an un-mutable audio track (even if there is no audio in the movie file). That permanently causes the duck issue (it's a bug in the source code). I tried many workarounds and nothing worked. I am certain this is an Xcode source bug.
@Aggressor: you can change the audio category to multi-route so that the audio plays through the speaker and headphones (if plugged in) at the same time.
That way you won't get the dimmed audio.
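A sketch of that suggestion in current Swift (the original answer predates Swift 2 error handling; .multiRoute is AVAudioSession.Category.multiRoute):

// Route audio to speaker and headphones simultaneously instead of ducking.
try? AVAudioSession.sharedInstance().setCategory(.multiRoute, mode: .default, options: [])
try? AVAudioSession.sharedInstance().setActive(true)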
I have the same issue.
You must call AVAudioSession.sharedInstance().setActive(true, error: nil) before AVAudioSession.sharedInstance().setActive(false, error: nil).
They should appear in pairs.
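In other words (a sketch using the modern throwing API rather than the Swift 1 NSError form above):

let session = AVAudioSession.sharedInstance()
// Activate before playback starts...
try? session.setActive(true)
// ...play the sounds...
// ...and deactivate exactly once, after all I/O has stopped.
try? session.setActive(false, options: .notifyOthersOnDeactivation)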
I had the same issue. My app has video playback (without sound) using AVPlayer, and occasionally short spoken audio using AVQueuePlayer while the video is still playing. Now if I had music playing in the background (e.g. via Spotify), the music ducked while the AVQueuePlayer was playing, un-ducked for a brief moment after it finished, and then ducked again.
I created a sample app to debug my issue and found that my AVAudioSession setup was just not handled correctly. What worked for me in the end was the following:
At app start, set the AVAudioSession category to .ambient with the .mixWithOthers option, like so:
try AVAudioSession.sharedInstance().setCategory(.ambient, options: .mixWithOthers)
this will prevent the video player from interrupting the music.
If I play/pause my AVQueuePlayer, I change the category again using these functions:
func setAudioSession(to category: Category, activate: Bool = true) {
    switch category {
    case .default:
        // .ambient and .mixWithOthers so a silent video won't interrupt background music
        setCategory(.ambient, options: .mixWithOthers, activate: activate)
    case .voiceOver:
        // .playback with .duckOthers ducks the music momentarily while short-lived audio
        // is output; spoken audio is interrupted by setting .interruptSpokenAudioAndMixWithOthers
        setCategory(.playback, options: [.duckOthers, .interruptSpokenAudioAndMixWithOthers], activate: activate)
    case .standalone:
        // For standalone audio use .playback without options so it will interrupt other
        // background music; if you don't want the sound to play while the phone is on mute,
        // use .ambient instead
        setCategory(.playback, activate: activate)
    }
}

private func setCategory(_ category: AVAudioSession.Category, options: AVAudioSession.CategoryOptions = [], activate: Bool) {
    let session = AVAudioSession.sharedInstance()
    // Only set if needed
    guard session.category != category || session.categoryOptions != options else { return }
    // To change the category, first setActive(false) -> change -> setActive(true)
    try? session.setActive(false)
    try? session.setCategory(category, options: options)
    if activate {
        try? session.setActive(true)
    }
}

enum Category {
    case `default`, voiceOver, standalone
}
So basically, to start the voice-over audio:
@IBAction func playAudio(_ sender: UIButton) {
    self.setAudioSession(to: .voiceOver)
    audioPlayer.load(url: Bundle.main.url(forResource: "audio", withExtension: "mp3")!)
    audioPlayer.play()
}
and then to pause (or after the queue finishes):
@IBAction func pauseAudio(_ sender: UIButton) {
    audioPlayer.pause()
    self.setAudioSession(to: .default)
}
⚠️ Disclaimer: Notice how I use try? instead of a do/try/catch block. This is because I'm still getting the Deactivating an audio session that has running I/O. error; if I caught and bailed on the error, the correct session would no longer be activated. It's still not ideal because it will always output the error, but at least ducking works like a charm now.
Trying to send data to another machine. I use C++Builder XE6 and Indy 10.
TMemoryStream *sms = new TMemoryStream();
sms->Write(msgData, msgSize);
Form1->IdTCPClient1->IOHandler->WriteBufferOpen();
Form1->IdTCPClient1->IOHandler->Write(sms, 0, true);
Form1->IdTCPClient1->IOHandler->WriteBufferFlush();
delete sms;
When I checked the sent data with Wireshark, I saw that additional data was sent to the machine ahead of msgData.
The additional data is " 00 00 00 20 ", at the head of the sent data.
Does IdTCPClient usually send additional data like this?
It is sending extra data because you are setting the AWriteByteCount parameter of Write(TStream) to true. 00 00 00 20 is the stream size in network byte order. msgSize is 0x00000020, aka 32. If you do not want the stream size sent, you need to set the AWriteByteCount parameter to false instead:
Form1->IdTCPClient1->IOHandler->Write(sms, 0, false);
Also, you should be using WriteBufferClose() instead of WriteBufferFlush(), and do not forget to call WriteBufferCancel() if Write() raises an exception. WriteBufferClose() sends the buffered data to the socket and then closes the buffer so subsequent writes are not buffered. WriteBufferFlush() sends the buffered data to the socket, but does not close the buffer, thus subsequent writes will be buffered.
Also, you can simplify the overhead a little by replacing TMemoryStream with TIdMemoryBufferStream, so that you do not have to make a separate copy of your message data in memory:
TIdMemoryBufferStream *sms = new TIdMemoryBufferStream(msgData, msgSize);
try
{
    Form1->IdTCPClient1->IOHandler->WriteBufferOpen();
    try
    {
        Form1->IdTCPClient1->IOHandler->Write(sms, 0, true); // or false
        Form1->IdTCPClient1->IOHandler->WriteBufferClose();
    }
    catch (const Exception &)
    {
        Form1->IdTCPClient1->IOHandler->WriteBufferCancel();
        throw;
    }
}
__finally
{
    delete sms;
}
Alternatively, use a RAII approach:
class BufferIOWriting
{
private:
    TIdIOHandler *m_IO;
    bool m_Finished;

public:
    BufferIOWriting(TIdIOHandler *aIO) : m_IO(aIO), m_Finished(false)
    {
        m_IO->WriteBufferOpen();
    }

    ~BufferIOWriting()
    {
        if (m_Finished)
            m_IO->WriteBufferClose();
        else
            m_IO->WriteBufferCancel();
    }

    void Finished()
    {
        m_Finished = true;
    }
};

{
    std::auto_ptr<TIdMemoryBufferStream> sms(new TIdMemoryBufferStream(msgData, msgSize));
    BufferIOWriting buffer(Form1->IdTCPClient1->IOHandler);
    Form1->IdTCPClient1->IOHandler->Write(sms.get(), 0, true); // or false
    buffer.Finished();
}
With that said, I would suggest just getting rid of write buffering altogether:
TIdMemoryBufferStream *sms = new TIdMemoryBufferStream(msgData, msgSize);
try
{
    Form1->IdTCPClient1->IOHandler->Write(sms, 0, true); // or false
}
__finally
{
    delete sms;
}
Or:
{
    std::auto_ptr<TIdMemoryBufferStream> sms(new TIdMemoryBufferStream(msgData, msgSize));
    Form1->IdTCPClient1->IOHandler->Write(sms.get(), 0, true); // or false
}
Write buffering is useful when you need to make multiple related Write() calls that should have their data transmitted together in as few TCP frames as possible (in other words, letting the Nagle algorithm do its job better). For instance, if you need to Write() individual fields of your message. Write buffering does not make much sense to use when making a single Write() call, especially of such a small size. Let Write(TStream) handle its own buffering internally for you.
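For illustration, the kind of case where the buffer pays off — several related writes coalesced into as few TCP frames as possible (a sketch; msgType is a hypothetical header field, and the Int32 overload of Write() is used for the fixed-size fields):

Form1->IdTCPClient1->IOHandler->WriteBufferOpen();
try
{
    Form1->IdTCPClient1->IOHandler->Write((Int32)msgType); // hypothetical message type field
    Form1->IdTCPClient1->IOHandler->Write((Int32)msgSize); // payload length
    Form1->IdTCPClient1->IOHandler->Write(sms, 0, false);  // payload itself
    Form1->IdTCPClient1->IOHandler->WriteBufferClose();    // transmit everything together
}
catch (const Exception &)
{
    Form1->IdTCPClient1->IOHandler->WriteBufferCancel();
    throw;
}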