We have been working on a streaming application for raw H.264 and AAC content. We use MediaStreamSource to feed samples to a MediaElement and see no issues when we use PC Silverlight (on IE9) for audio/video playback. An audio-only stream also works fine on WP7. However, we face the following problems with video playback on WP7:
•When the video stream attributes for MediaStreamSource are initialized without CodecPrivateData, the MediaElement "Failed" event handler is called with error code 3100. The video stream attributes are initialized as:
Dictionary<MediaStreamAttributeKeys, string> videoStreamAttributes = new Dictionary<MediaStreamAttributeKeys, string>();
videoStreamAttributes[MediaStreamAttributeKeys.VideoFourCC] = "H264";
this.videoStreamDescription = new MediaStreamDescription(MediaStreamType.Video, videoStreamAttributes);
•When the video stream attributes for MediaStreamSource are initialized with CodecPrivateData ([start code] [sps] [start code] [pps]), the video plays but seems to play at a much faster rate - 2 to 3 times the specified FPS. The video stream attributes are initialized as:
Dictionary<MediaStreamAttributeKeys, string> videoStreamAttributes = new Dictionary<MediaStreamAttributeKeys, string>();
videoStreamAttributes[MediaStreamAttributeKeys.VideoFourCC] = "H264";
videoStreamAttributes[MediaStreamAttributeKeys.CodecPrivateData] = "000000012742000D96540A0FD8080F162EA00000000128CE060C88";
this.videoStreamDescription = new MediaStreamDescription(MediaStreamType.Video, videoStreamAttributes);
Note that the same streams play fine on PC Silverlight, with and without CodecPrivateData, for both audio and video.
Is there something wrong with the way the video stream attributes are initialized? What could be causing this problem, and how can we resolve it?
Regards,
NKS.
The problem here was the clock that was being used for the timestamps. Our application calculated timestamps on a 90 kHz clock, but the expected timestamps were in terms of a 1 MHz clock. As a result, every frame appeared to arrive after its time had already elapsed, so the player played the frames as fast as it could (I had seen around 120 fps at times). After fixing the timestamp clock, it works fine.
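A minimal sketch of that conversion, assuming incoming 90 kHz timestamps and the 1 MHz target clock described above; the helper and field names here are illustrative, not from the original application:
// Hypothetical helper: rescale a 90 kHz timestamp to the 1 MHz clock
// that the fix above describes for MediaStreamSource samples.
private const long SourceClockRate = 90000;    // 90 kHz source clock
private const long TargetClockRate = 1000000;  // 1 MHz target clock (per the fix above)

private static long ConvertTimestamp(long timestamp90kHz)
{
    // Multiply before dividing to avoid losing precision.
    return timestamp90kHz * TargetClockRate / SourceClockRate;
}

// Illustrative use when reporting a sample from GetSampleAsync:
// long timestamp = ConvertTimestamp(frameTimestamp90kHz);
// var sample = new MediaStreamSample(this.videoStreamDescription, frameStream,
//     0, frameStream.Length, timestamp, emptySampleDict);
// ReportGetSampleCompleted(sample);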
I am trying to send an MP4 video through Pion WebRTC to the browser.
Using FFmpeg, I split it into an Opus OGG stream and an Annex-B H.264 video stream. While the video works fine, the audio keeps cutting in and out. It plays fine for a few seconds, then stops for a second, and continues.
This is the FFmpeg command I use for audio:
ffmpeg -i demo.mp4 -c:a libopus -vn -page_duration 20000 demo.ogg
And this is my transmitter (shortened):
var lastGranule uint64
for {
    pageData, pageHeader, err := ogg.ParseNextPage() // Uses Pion OggReader

    // Taken from the play-from-disk example
    sampleCount := float64(pageHeader.GranulePosition - lastGranule)
    lastGranule = pageHeader.GranulePosition
    sampleDuration := time.Duration((sampleCount/48000)*1000) * time.Millisecond

    err = audioTrack.WriteSample(media.Sample{Data: pageData, Duration: sampleDuration})
    util.HandleError(err)

    time.Sleep(sampleDuration)
}
I tried hardcoding the delay to 15 ms, which fixes the cutting out, but then it randomly plays way too fast or starts skipping. Since I had glitchy video before updating my FFmpeg command (adding keyframes and removing B-frames), I assume this is also an encoder problem.
What could be the cause for this?
Update: Using WebRTC logging in Chrome, I discovered the following log lines that occurred frequently:
[27216:21992:0809/141533.175:WARNING:rtcp_receiver.cc(452)] 30 RTCP blocks were skipped due to being malformed or of unrecognized/unsupported type, during the past 10 second period.
This is probably the reason for the cutouts, although I can't figure out why it receives malformed data.
The problem in the end was an inaccuracy in the Sleep time caused by issue #44343 in Go itself. It caused the samples to be sent not at a constant rate but at a rate that was randomly between 5 and 15 ms off, resulting in a choppy stream.
Sean DuBois and I fixed this in the latest play-from-disk and play-from-disk-h264 examples in the Pion repository by replacing the for loop and Sleep() with a Ticker, which is more accurate.
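A rough sketch of the Ticker-based pacing, in the spirit of the updated play-from-disk example (ogg, audioTrack, media and util carry over from the snippet above; the 20 ms tick matches the -page_duration 20000 passed to FFmpeg, and this is not the exact code from the Pion repository):
// Send one Ogg page per tick instead of sleeping after each write.
ticker := time.NewTicker(20 * time.Millisecond)
defer ticker.Stop()

var lastGranule uint64
for ; true; <-ticker.C {
    pageData, pageHeader, err := ogg.ParseNextPage()
    if err != nil {
        break // io.EOF once the stream is exhausted
    }

    // Duration is still derived from the granule position, as above.
    sampleCount := float64(pageHeader.GranulePosition - lastGranule)
    lastGranule = pageHeader.GranulePosition
    sampleDuration := time.Duration((sampleCount/48000)*1000) * time.Millisecond

    err = audioTrack.WriteSample(media.Sample{Data: pageData, Duration: sampleDuration})
    util.HandleError(err)
}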
I have an MP4 URL with only video, and a separate audio track for it. I can play one or the other by changing the "main" stream URL and the corresponding content type, but obviously I want both, not one or the other.
There is a core URL (silly video) at https://v.redd.it/3hyw7hwoajn21/DASHPlaylist.mpd
You can get the video-only MP4 at https://v.redd.it/3hyw7hwoajn21/DASH_720, and its corresponding audio track is at https://v.redd.it/3hyw7hwoajn21/audio
If I play the MP4 with the iOS Cast SDK, it works fine, but there is no audio:
let url = URL(string: "https://v.redd.it/3hyw7hwoajn21/DASH_720")!
let mediaInfoBuilder = GCKMediaInformationBuilder(contentURL: url)
mediaInfoBuilder.contentID = url.absoluteString
mediaInfoBuilder.streamType = .buffered
mediaInfoBuilder.streamDuration = TimeInterval(75)
mediaInfoBuilder.contentType = "video/mp4"
mediaInfoBuilder.metadata = metadata
let mediaInfo = mediaInfoBuilder.build()
So I try to add in the audio track before calling build(), attempting to follow the documentation here:
mediaInfoBuilder.mediaTracks = [GCKMediaTrack(identifier: 98911, contentIdentifier: nil, contentType: "audio/mp4", type: GCKMediaTrackType.audio, textSubtype: GCKMediaTextTrackSubtype.unknown, name: "Fun time fun", languageCode: "en", customData: nil)]
But the result is the same: no audio.
Am I doing this wrong?
The audio and video streams have to be in the same manifest for this to be supported; if they are not, the SDK does not support it. In general, the Chromecast hardware is limited to a single media element. Some apps have managed to add sound effects while reading a book, which might use WebAudio, but that is done entirely in the app.
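For example, pointing the sender at the DASH manifest above, which contains both streams, would look roughly like this (a sketch based on the question's own code; whether the default receiver can play this particular manifest is an assumption, and metadata carries over from above):
// Cast the DASH manifest, which carries both the video and the audio stream.
let url = URL(string: "https://v.redd.it/3hyw7hwoajn21/DASHPlaylist.mpd")!
let mediaInfoBuilder = GCKMediaInformationBuilder(contentURL: url)
mediaInfoBuilder.contentID = url.absoluteString
mediaInfoBuilder.streamType = .buffered
mediaInfoBuilder.contentType = "application/dash+xml"
mediaInfoBuilder.metadata = metadata
let mediaInfo = mediaInfoBuilder.build()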
I'm using FFmpegFrameRecorder to create MP4 (H.264) video from the camera preview. My recorder configuration is as follows.
recorder = new FFmpegFrameRecorder(filePath, width, height);
recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
recorder.setFormat("mp4");
recorder.setFrameRate(VIDEO_FPS);
recorder.setVideoBitrate(16384);
recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
For the rest, I followed the sample code RecordActivity.java closely and was able to verify that
recorder.record(yuvIplimage)
gets called 20 (or more) times, which should create an MP4 with 20 frames. However, the resulting MP4 file, when opened, only has 2 frames (the first two frames of the preview)! I have no idea what caused this behavior. Any help would be greatly appreciated. Thank you.
Long Le
I figured it out: the issue was that I didn't know what I was doing. I was new to javacv, and I had assumed, based on this Stack Overflow entry, that the number of frames in the resulting video would equal the number of record() calls. However, this is not the case with video encoding, especially with H.264. I figured this out by trying MPEG-4 encoding, where there are definitely more than 2 frames. H.264 seems to require a minimum number of input frames and hence is not suitable for generating short (<1 minute) video clips (which is my application). One solution is to switch to MPEG-4 encoding. However, most browsers that play .mp4 files do not support MPEG-4 encoding. Another solution is to use H.264 with minimal compression by adding the following configuration:
recorder.setVideoQuality(0); // maximum quality; replaces recorder.setVideoBitrate(16384);
recorder.setVideoOption("preset", "veryfast"); // or "ultrafast", "fast", etc.
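Putting it together with the original configuration, the H.264 setup then looks roughly like this (filePath, width, height and VIDEO_FPS as in the question):
recorder = new FFmpegFrameRecorder(filePath, width, height);
recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
recorder.setFormat("mp4");
recorder.setFrameRate(VIDEO_FPS);
recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
recorder.setVideoQuality(0);                   // maximum quality instead of a fixed bitrate
recorder.setVideoOption("preset", "veryfast"); // or "ultrafast", "fast", etc.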
I have been a Mac programmer for just 3 months. I have run into an Audio Queue Services problem and hope someone can help me.
Using the Audio Queue Services API, I created a recording program that outputs AAC data. It seems good; everything works fine.
The problem occurs when I use the MP4V2 library (an open-source library) to output a .mp4 file.
Problem 1:
I use the magic cookie as an AAC header and pass it to the MP4V2 library function MP4WriteSample(). The resulting .mp4 file contains data, but players (e.g., QuickTime) can't recognize the file and can't play the audio data.
Problem 2:
I set my audio queue's basic description format as follows:
aqData.mDataFormat.mSampleRate = 44100.0;
aqData.mDataFormat.mFormatID = kAudioFormatMPEG4AAC; // AAC codec.
aqData.mDataFormat.mChannelsPerFrame = 2;
aqData.mDataFormat.mFramesPerPacket = 1024;
and use AudioQueueGetProperty() to get the magic cookie.
Then I print out the magic cookie contents, like this:
<03808080 22000000 04808080 14401500 18000001 f4000001 f4000580 80800212 10068080 800102>
39 bytes in total.
What exactly does it mean?
What does each of the 39 bytes represent?
Can it be converted to an AAC header?
References:
Set a Magic Cookie for an Audio File
Set a Magic Cookie for a Playback Audio Queue
CoreAudio - how to determine the end of the playing aac file
Thanks a lot.
Ryan
Set the file type to kAudioFileM4AType:
AudioFileCreateWithURL(
    audioFileURL,
    kAudioFileM4AType,
    &audioFormat,
    kAudioFileFlags_EraseFile,
    &audioFileID
);
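And, per the "Set a Magic Cookie for an Audio File" reference above, the cookie obtained from the queue can then be written to that file, roughly like this (a sketch assuming <AudioToolbox/AudioToolbox.h> is included and aqData.mQueue/audioFileID are set up as in the question; error handling mostly omitted):
// Copy the AAC magic cookie from the audio queue to the .mp4/.m4a file
// so that players such as QuickTime can decode the audio.
UInt32 cookieSize = 0;
OSStatus status = AudioQueueGetPropertySize(aqData.mQueue,
                                            kAudioQueueProperty_MagicCookie,
                                            &cookieSize);
if (status == noErr && cookieSize > 0) {
    char *cookie = (char *)malloc(cookieSize);
    AudioQueueGetProperty(aqData.mQueue,
                          kAudioQueueProperty_MagicCookie,
                          cookie, &cookieSize);
    AudioFileSetProperty(audioFileID,
                         kAudioFilePropertyMagicCookieData,
                         cookieSize, cookie);
    free(cookie);
}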
I've been playing with QTKit for a couple of days and I'm successfully able to record video data to a file from the camera using a QTCaptureSession and QTCaptureDeviceInput etc.
However what I want to do is send the data to another location, either over the network or to a different object within the same app (it doesn't matter) and then play the video data as if it were a stream.
I have a QTCaptureMovieFileOutput and I am passing nil as the file URL so that it doesn't actually record the data to the file (I'm only interested in the data contained in the QTSampleBuffer that is available via the delegate callback).
I have set a QTCompressionOptions object on the output specifying H264 Video and High Quality AAC Audio compression.
Each time I receive a call back, I append the data from the sample buffer into an NSMutableData object I have as an instance variable.
The problem I have is that no 'player' object in the QTKit framework seems capable of playing a 'stream' of video data. Am I correct in this assumption?
I tried creating a QTMovie object (to play in a QTMovieView) using my data instance variable but I get the error that the data is not a movie.
Am I approaching this issue from the wrong angle?
Previously I was using a QTCapturePreviewOutput which passes CVImageBufferRefs for each video frame. I was converting these frames into NSImages to display on a view.
While this gave the impression of streaming, it was slow and processor hungry.
How have other people conquered the streaming video problem?
It seems to me like you'd need to create a GL texture and then load the data into it on a per-frame basis. Everything about QTMovie seems to be based on pre-existing files, as far as I can tell.
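A very rough sketch of that idea, assuming the capture delegate hands you a CVImageBufferRef (as with the preview output mentioned in the question) and that glView is a hypothetical NSOpenGLView whose context is current; this is illustrative, not a drop-in solution:
// One-time setup: a Core Video texture cache tied to the GL context.
CVOpenGLTextureCacheRef textureCache;
CVOpenGLTextureCacheCreate(kCFAllocatorDefault, NULL,
                           [[glView openGLContext] CGLContextObj],
                           [[glView pixelFormat] CGLPixelFormatObj],
                           NULL, &textureCache);

// Per frame, in the capture output's delegate callback:
CVOpenGLTextureRef texture = NULL;
CVOpenGLTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                           imageBuffer, NULL, &texture);
if (texture) {
    GLenum target = CVOpenGLTextureGetTarget(texture); // usually GL_TEXTURE_RECTANGLE_ARB
    GLuint name   = CVOpenGLTextureGetName(texture);
    glBindTexture(target, name);
    // ... draw a textured quad into the GL view here ...
    CVOpenGLTextureRelease(texture);
    CVOpenGLTextureCacheFlush(textureCache, 0);
}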