From a quick analysis of the qual-e MSE conformance tests, it seems to me that the platform is assumed to support VP9.
For example, here is the test case for MSE AddSourceBuffer:
var testAddSourceBuffer = createConformanceTest('AddSourceBuffer', 'MSE Core');
testAddSourceBuffer.prototype.title = 'Test if we can add source buffer';
testAddSourceBuffer.prototype.onsourceopen = function() {
  try {
    this.runner.checkEq(this.ms.sourceBuffers.length, 0, 'Source buffer number');
    this.ms.addSourceBuffer(Media.AAC.mimetype);
    this.runner.checkEq(this.ms.sourceBuffers.length, 1, 'Source buffer number');
    this.ms.addSourceBuffer(Media.VP9.mimetype);
    this.runner.checkEq(this.ms.sourceBuffers.length, 2, 'Source buffer number');
  } catch (e) {
    this.runner.fail(e);
  }
  this.runner.succeed();
};
From my understanding, this test will fail if the browser does not support VP9. Is my understanding correct?
I understand that VP9 is generally required for 2017 and 2018 certification.
Will it be possible to have a test page for a preliminary test for browsers that currently only support AVC?
Thanks for the suggestion. We will take this proposal into consideration.
FYI, VP9 has not been optional since the 2016 certification year. Cobalt 11 has a software VP9 implementation, so you can reuse that for the preliminary tests.
For some reason DuplicateOutput1 fails where DuplicateOutput does not.
#include <d3d11.h>
#include <dxgi1_5.h>

int main() {
    // Create a D3D11 device on the default adapter.
    ID3D11Device *device;
    D3D_FEATURE_LEVEL levels[] = { D3D_FEATURE_LEVEL_11_1 };
    D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0, levels,
                      ARRAYSIZE(levels), D3D11_SDK_VERSION, &device, NULL, NULL);

    // Walk from the device to the first output of its adapter.
    IDXGIDevice *dxDevice;
    device->QueryInterface<IDXGIDevice>(&dxDevice);
    IDXGIAdapter *adapter;
    dxDevice->GetAdapter(&adapter);
    IDXGIOutput *output;
    adapter->EnumOutputs(0, &output);
    IDXGIOutput5 *output5;
    output->QueryInterface<IDXGIOutput5>(&output5);

    IDXGIOutputDuplication *outputDuplication;
    auto hr1 = output5->DuplicateOutput(device, &outputDuplication);
    // hr1 == S_OK here

    const DXGI_FORMAT formats[] = { DXGI_FORMAT_B8G8R8A8_UNORM };
    auto hr2 = output5->DuplicateOutput1(device, 0, ARRAYSIZE(formats),
                                         formats, &outputDuplication);
}

hr2 is 0x887a0004 (DXGI_ERROR_UNSUPPORTED): The specified device interface or feature level is not supported on this system.
I will post the answer from #weggo here, because I almost missed it!
For those that might stumble upon this in the future: calling
SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2)
allows DuplicateOutput1 to succeed. I have no idea why DuplicateOutput1 checks the process DPI awareness, though.
I will just add that you have to set DPI Awareness to False in the manifest settings of the project properties to get SetProcessDpiAwarenessContext to work :)
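A minimal sketch of how that call fits together with the snippet above, assuming a Windows 10 1703+ target (SetProcessDpiAwarenessContext is not available earlier); calling it before any D3D/DXGI setup is my assumption about the safe placement:

#include <windows.h>

int main() {
    // Assumption: opting in to per-monitor-v2 DPI awareness before any
    // D3D/DXGI initialization is what allows DuplicateOutput1 to succeed.
    SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2);

    // ... D3D11CreateDevice / EnumOutputs / DuplicateOutput1 as above ...
    return 0;
}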
This could happen if you run on a system with both an integrated graphics chip and a discrete GPU. See https://support.microsoft.com/en-us/kb/3019314:
Unfortunately this issue occurs because the Desktop Duplication API does not support being run against the discrete GPU on a Microsoft Hybrid system. By design, the call fails together with error code DXGI_ERROR_UNSUPPORTED in such a scenario.
To work around this issue, run the application on the integrated GPU instead of on the discrete GPU on a Microsoft Hybrid system.
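To expand on that workaround in code: a rough sketch of explicitly picking the adapter that has an output attached, instead of letting D3D11CreateDevice choose the default adapter, so the duplication runs against the GPU that drives the display. Treating "the first adapter with an output" as the integrated GPU is an assumption flagged in the comments:

#include <d3d11.h>
#include <dxgi1_5.h>

int main() {
    IDXGIFactory1 *factory = nullptr;
    CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&factory);

    // Assumption: on a hybrid system, the adapter that enumerates an output
    // is the integrated GPU driving the desktop; pick the first such adapter.
    IDXGIAdapter1 *adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        IDXGIOutput *output = nullptr;
        if (SUCCEEDED(adapter->EnumOutputs(0, &output))) {
            output->Release();
            break;  // this adapter drives a display
        }
        adapter->Release();
        adapter = nullptr;
    }

    // With an explicit adapter, the driver type must be UNKNOWN.
    ID3D11Device *device = nullptr;
    D3D11CreateDevice(adapter, D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, nullptr);

    // ... continue with GetAdapter / EnumOutputs / DuplicateOutput1 as above ...
}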
It seems that MediaSource and progressive playback use different demuxers: ChunkDemuxer is used for MediaSource, while ShellDemuxer is used for progressive playback.
From the ShellParser.cpp implementation:
PipelineStatus ShellParser::Construct(
    scoped_refptr<ShellDataSourceReader> reader,
    scoped_refptr<ShellParser>* parser,
    const scoped_refptr<MediaLog>& media_log) {
  DCHECK(parser);
  DCHECK(media_log);
  *parser = NULL;

  // download first 16 bytes of stream to determine file type and extract basic
  // container-specific stream configuration information
  uint8 header[kInitialHeaderSize];
  int bytes_read = reader->BlockingRead(0, kInitialHeaderSize, header);
  if (bytes_read != kInitialHeaderSize) {
    return DEMUXER_ERROR_COULD_NOT_PARSE;
  }

  // attempt to construct mp4 parser from this header
  return ShellMP4Parser::Construct(reader, header, parser, media_log);
}
It seems that Cobalt can only demux the MP4 container for progressive playback (only ShellMP4Parser is attempted).
Is this a known limitation of Cobalt? How can we support WebM progressive playback on the device?
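For what it's worth, I'd assume supporting another container means trying another parser in Construct(). A purely hypothetical sketch (ShellWebMParser does not exist in Cobalt; it is named here only for illustration):

// Hypothetical: fall back to a WebM parser if MP4 probing fails.
PipelineStatus status =
    ShellMP4Parser::Construct(reader, header, parser, media_log);
if (status != PIPELINE_OK) {
  status = ShellWebMParser::Construct(reader, header, parser, media_log);  // hypothetical
}
return status;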
Cobalt will not support WebM/VP9 progressive playback. We changed the Progressive Conformance test to use H264 instead of VP9. This will be pushed soon.
https://github.com/youtube/js_mse_eme/commit/d7767e13be7ed8b8bdb2efda39337a4a2fb121ba
I'm developing an application that needs to publish a media stream to an rtmp "ingestion" url (as used in YouTube Live, or as input to Wowza Streaming Engine, etc), and I'm using the ffmpeg library (programmatically, from C/C++, not the command line tool) to handle the rtmp layer. I've got a working version ready, but am seeing some problems when streaming higher bandwidth streams to servers with worse ping. The problem exists both when using the ffmpeg "native"/builtin rtmp implementation and the librtmp implementation.
When streaming to a local target server with low ping through a good network (specifically, a local Wowza server), my code has so far handled every stream I've thrown at it and managed to upload everything in real time - which is important, since this is meant exclusively for live streams.
However, when streaming to a remote server with a worse ping (e.g. the YouTube ingestion URLs on a.rtmp.youtube.com, which for me have 50+ ms pings), lower bandwidth streams work fine, but with higher bandwidth streams the network is underutilized: for example, for a 400 kB/s stream, I'm only seeing ~140 kB/s network usage, with a lot of frames getting delayed/dropped, depending on the strategy I'm using to handle network pushback.
Now, I know this is not a problem with the network connection to the target server, because I can successfully upload the stream in real time when using the ffmpeg command line tool to the same target server, or when using my code to stream to a local Wowza server which then forwards the stream to the YouTube ingestion point.
So the network connection is not the problem, and the issue seems to lie with my code.
I've timed various parts of my code and found that when the problem appears, calls to av_write_frame / av_interleaved_write_frame (I never mix & match them, I am always using one version consistently in any specific build, it's just that I've experimented with both to see if there is any difference) sometimes take a really long time - I've seen those calls sometimes take up to 500-1000ms, though the average "bad case" is in the 50-100ms range. Not all calls to them take this long, most return instantly, but the average time spent in these calls grows bigger than the average frame duration, so I'm not getting a real time upload anymore.
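For reference, this is roughly how I measure those calls; a minimal sketch (the 20 ms reporting threshold is arbitrary):

#include <chrono>
#include <cstdio>

// Wrap the write call and report when it stalls.
const auto tBefore = std::chrono::steady_clock::now();
const int nWriteFrameError = av_write_frame(m_pAVFormatContext, &pkt);
const auto nElapsedMs = std::chrono::duration_cast<std::chrono::milliseconds>(
    std::chrono::steady_clock::now() - tBefore).count();
if (nElapsedMs > 20) {
    std::printf("slow av_write_frame: %lld ms\n", static_cast<long long>(nElapsedMs));
}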
The main suspect, it seems to me, could be the rtmp Acknowledgement Window mechanism, where a sender of data waits for a confirmation of receipt after sending every N bytes, before sending any more data - this would explain the available network bandwidth not being fully used, since the client would simply sit there and wait for a response (which takes a longer time because of the lower ping), instead of using the available bandwidth. Though I haven't looked at ffmpeg's rtmp/librtmp code to see if it actually implements this kind of throttling, so it could be something else entirely.
The full code of the application is too much to post here, but here are some important snippets:
Format context creation:
const int nAVFormatContextCreateError = avformat_alloc_output_context2(&m_pAVFormatContext, nullptr, "flv", m_sOutputUrl.c_str());
Stream creation:
m_pVideoAVStream = avformat_new_stream(m_pAVFormatContext, nullptr);
m_pVideoAVStream->id = m_pAVFormatContext->nb_streams - 1;
m_pAudioAVStream = avformat_new_stream(m_pAVFormatContext, nullptr);
m_pAudioAVStream->id = m_pAVFormatContext->nb_streams - 1;
Video stream setup:
m_pVideoAVStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
m_pVideoAVStream->codecpar->codec_id = AV_CODEC_ID_H264;
m_pVideoAVStream->codecpar->width = nWidth;
m_pVideoAVStream->codecpar->height = nHeight;
m_pVideoAVStream->codecpar->format = AV_PIX_FMT_YUV420P;
m_pVideoAVStream->codecpar->bit_rate = 10 * 1000 * 1000;
m_pVideoAVStream->time_base = AVRational { 1, 1000 };
m_pVideoAVStream->codecpar->extradata_size = int(nTotalSizeRequired);
m_pVideoAVStream->codecpar->extradata = (uint8_t*)av_malloc(m_pVideoAVStream->codecpar->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE);
// Fill in the extradata here - I'm sure I'm doing that correctly.
Audio stream setup:
m_pAudioAVStream->time_base = AVRational { 1, 1000 };
// Let's leave creation of m_pAudioCodecContext out of the scope of this question, I'm quite sure everything is done right there.
const int nAudioCodecCopyParamsError = avcodec_parameters_from_context(m_pAudioAVStream->codecpar, m_pAudioCodecContext);
Opening the connection:
const int nAVioOpenError = avio_open2(&m_pAVFormatContext->pb, m_sOutputUrl.c_str(), AVIO_FLAG_WRITE, nullptr, nullptr);
Starting the stream:
AVDictionary * pOptions = nullptr;
const int nWriteHeaderError = avformat_write_header(m_pAVFormatContext, &pOptions);
Sending a video frame:
AVPacket pkt = { 0 };
av_init_packet(&pkt);
pkt.dts = nTimestamp;
pkt.pts = nTimestamp;
pkt.duration = nDuration; // I know that I have the wrong duration sometimes, but I don't think that's the issue.
pkt.data = pFrameData;
pkt.size = pFrameDataSize;
pkt.flags = bKeyframe ? AV_PKT_FLAG_KEY : 0;
pkt.stream_index = m_pVideoAVStream->index;
const int nWriteFrameError = av_write_frame(m_pAVFormatContext, &pkt); // This is where too much time is spent.
Sending an audio frame:
AVPacket pkt = { 0 };
av_init_packet(&pkt);
pkt.pts = m_nTimestampMs;
pkt.dts = m_nTimestampMs;
pkt.duration = m_nDurationMs;
pkt.stream_index = m_pAudioAVStream->index;
const int nWriteFrameError = av_write_frame(m_pAVFormatContext, &pkt);
Any ideas? Am I on the right track with thinking about the Acknowledgement Window? Am I doing something else completely wrong?
I don't think this explains everything, but, just in case, for someone in a similar situation, the fix/workaround I found was:
1) build ffmpeg with the librtmp implementation of the rtmp protocol
2) build ffmpeg with --enable-network, which adds a couple of features to the librtmp protocol
3) pass the "rtmp_buffer_size" parameter to avio_open2, and increase its value to a satisfactory one
I can't give you a full step-by-step explanation of what was going wrong, but this fixed at least the symptom that was causing me problems.
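To make step 3 concrete: a sketch of passing that option through avio_open2's options dictionary. The option name is taken verbatim from the workaround above, and the 10 MB value is only an example, not a verified recommendation:

AVDictionary *pProtocolOptions = nullptr;
av_dict_set(&pProtocolOptions, "rtmp_buffer_size", "10485760", 0);  // example value
const int nAVioOpenError = avio_open2(&m_pAVFormatContext->pb, m_sOutputUrl.c_str(),
                                      AVIO_FLAG_WRITE, nullptr, &pProtocolOptions);
av_dict_free(&pProtocolOptions);  // entries not consumed by the protocol remain here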
Since Firefox 37 I cannot add volume control to the input (microphone); I get the error:
IndexSizeError: Index or size is negative or greater than the allowed amount
It works fine on Chrome.
Here is the code sample:
var audioContext = new (window.AudioContext || window.webkitAudioContext)(); // define audio context
var microphone = audioContext.createMediaStreamDestination();
var gain = audioContext.createGain();
var speaker = audioContext.createMediaStreamDestination(gain);
gain.gain.value = 1;
microphone.connect(gain);
gain.connect(speaker);
The error is thrown here:
microphone.connect(gain);
Weirdly, it works on Firefox Nightly.
This error is similar to this StackOverflow question: link
Related link: link on StackOverflow
Shouldn't you use this for the microphone?
var microphone = audioContext.createMediaStreamSource();
instead of this
var microphone = audioContext.createMediaStreamDestination();
A microphone is not a destination. It is a source.
Firstly, I think it should be:
var microphone = audioContext.createMediaStreamSource(stream);
Here stream is the microphone audio stream. Find more info here.
Also check out this demo with elaboration here. It is similar to what you are trying. Replacing createMediaElementSource with createMediaStreamSource will work.
I am using the following code snippets to record the screen, and in most situations the recorded WMV file is clear enough, but some parts of the video are not very clear (grey color in some parts). What I record is a PPT in full-screen mode. I am using Windows Media Encoder 9.
Here is my code snippet:

IWMEncSourceGroup SrcGrp;
IWMEncSourceGroupCollection SrcGrpColl;
SrcGrpColl = encoder.SourceGroupCollection;
SrcGrp = (IWMEncSourceGroup)SrcGrpColl.Add("SG_1");

IWMEncVideoSource2 SrcVid;
IWMEncSource SrcAud;
SrcVid = (IWMEncVideoSource2)SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_VIDEO);
SrcAud = SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_AUDIO);
SrcVid.SetInput("ScreenCap://ScreenCapture1", "", "");
SrcAud.SetInput("Device://Default_Audio_Device", "", "");

// Specify a file object in which to save encoded content.
IWMEncFile File = encoder.File;
string CurrentFileName = Guid.NewGuid().ToString();
File.LocalFileName = CurrentFileName;
CurrentFileName = File.LocalFileName;

// Choose a profile from the collection.
IWMEncProfileCollection ProColl = encoder.ProfileCollection;
IWMEncProfile Pro;
for (int i = 0; i < ProColl.Count; i++)
{
    Pro = ProColl.Item(i);
    if (Pro.Name == "Screen Video/Audio High (CBR)")
    {
        SrcGrp.set_Profile(Pro);
        break;
    }
}

encoder.Start();
Thanks in advance,
George
I would guess that it's a problem with your encoder profile or settings, and not a problem with the code. If you're using the default "Screen Video/Audio High (CBR)" profile in WME9, it's using a video bitrate of 250Kbps, which is pretty low. I'd suggest creating a custom profile in the Windows Media Encoder Profile Editor Utility. Something like this:
awesomesc.prx
Name: Awesome Screen Profile
Audio: WMA 9.2 CBR (32kbps, 44kHz, mono CBR)
Video: WMV 9 Screen Quality VBR (Video size Same as video input, Frame rate 10fps, Key frame interval 3sec, Video quality 90)
Then just change the code to match the custom profile's name.
if (Pro.Name == "Awesome Screen Profile")
The encoder settings would take a much longer post to go through, but if you have not changed them from the defaults, you should be OK.
The Quality-based VBR algorithm can be pretty amazing, and will likely produce a surprisingly low average bitrate, but if VBR won't work for your needs, you can use the Windows Media Encoder Profile Editor utility to import the schia.prx profile that you're using and tweak the settings to find a higher CBR bitrate that produces acceptable quality.
"Screen Video/Audio Medium (CBR)"
it solved my problem