Feature detection for accepting timeslice in MediaRecorder.start(timeslice) - Firefox

Many browsers' MediaStream Recording API implementations offer a MediaRecorder class to accept streams from getUserMedia, compress them, and deliver them as Blobs. MediaRecorder offers a start(timeslice) method: it starts compressing the stream, then calls an ondataavailable handler roughly every timeslice milliseconds.
That's the theory, at least. But some browsers (I'm looking at you, Firefox) only call the handler every half second or even every whole second, regardless of the requested timeslice value. My particular application needs shorter latency, so it can't use browsers with that defect.
Is there a clean feature-detection way to figure this out quickly? Or do I have to look at the User-Agent string?
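One rough approach, sketched here rather than taken from the thread: instead of true feature detection, probe the behavior at runtime by starting a throwaway MediaRecorder on a synthetic stream with a small timeslice and measuring how often dataavailable actually fires. In this TypeScript sketch the function name and the 250 ms threshold are invented, a canvas capture stream is used to avoid a permission prompt, and the granularity measured for a canvas video track may differ from what your real audio stream gets.

// Sketch only: measure the effective timeslice granularity at runtime.
async function probeEffectiveTimeslice(requestedMs = 100): Promise<number> {
  // Keep drawing to the canvas so captureStream() keeps producing frames.
  const canvas = document.createElement("canvas");
  canvas.width = canvas.height = 64;
  const ctx = canvas.getContext("2d")!;
  const drawTimer = window.setInterval(() => {
    ctx.fillStyle = `hsl(${Date.now() % 360}, 50%, 50%)`;
    ctx.fillRect(0, 0, 64, 64);
  }, 33);

  const stream = canvas.captureStream(30);
  const recorder = new MediaRecorder(stream);
  const arrivals: number[] = [];

  return new Promise<number>((resolve) => {
    recorder.ondataavailable = () => {
      arrivals.push(performance.now());
      if (arrivals.length >= 6) {
        recorder.stop();
        window.clearInterval(drawTimer);
        stream.getTracks().forEach((t) => t.stop());
        // Average gap between chunks = the timeslice the browser really honors.
        const gaps = arrivals.slice(1).map((t, i) => t - arrivals[i]);
        resolve(gaps.reduce((a, b) => a + b, 0) / gaps.length);
      }
    };
    recorder.start(requestedMs);
  });
}

// Usage: warn (or fall back) if the browser ignores small timeslice values.
probeEffectiveTimeslice(100).then((ms) => {
  if (ms > 250) console.warn(`ondataavailable only fires every ~${Math.round(ms)} ms`);
});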

Related

Control Chromecast buffering at start

Is there a way to control the amount of buffering CC devices do before they start playback?
My sender app sends real-time FLAC audio, and the Chromecast waits 10+ seconds before starting to play. I've built a custom receiver and tried to change autoPauseDuration and autoResumeDuration, but it does not seem to matter. I assume they are only used when an underflow event happens, not at startup.
I realize that forcing a start with a low buffer level might end up in underflow, but that's a "risk" that is much better than always waiting such a long time before playback starts. And if it happens, the autoPause/autoResume hysteresis would allow a larger re-buffering to take place then.
If you are using the Media Player Library, take a look at player.getBufferDuration. The docs cover more details about how you can customize the player behavior: https://developers.google.com/cast/docs/player#frequently-asked-questions
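For reference, here is a minimal sketch of where those knobs sit in a Media Player Library custom receiver. The property and method names come from the question and answer above; everything else (the construction details, the HLS protocol choice, the URL, and the values) is an assumption rather than working receiver code.

declare const cast: any; // Media Player Library assumed loaded; no TS typings used

const mediaElement = document.querySelector("video") as HTMLVideoElement;
const url = "https://example.com/live/stream.m3u8"; // placeholder stream URL

const host = new cast.player.api.Host({ mediaElement, url });

// Shrink the pause/resume hysteresis so playback starts and resumes with
// less buffered media (values here are guesses, assumed to be seconds).
host.autoPauseDuration = 1;
host.autoResumeDuration = 2;

const player = new cast.player.api.Player(host);
player.load(cast.player.api.CreateHlsStreamingProtocol(host));

// getBufferDuration, mentioned above, reports how much media is buffered.
setInterval(() => console.log("buffered (s):", player.getBufferDuration()), 1000);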
In the end it turned out to be a problem with the way I was sending audio to the default receiver. I was streaming FLAC, and since it is a streamable format I did not include any header (you can start anywhere in the stream; it's just a matter of finding the sync point). But the FLAC decoder in the Chromecast does not like that and was taking 10+ seconds to start. As soon as I added a STREAMINFO header, the problem went away.

Best way to handle buffer under-runs?

I'm implementing the media handlers in Starboard, and I'm running into a situation where my client application in Cobalt doesn't buffer content aggressively enough. This results in it just idling with an empty buffer. What is the proper Starboard event to trigger when the platform's buffer is depleted? Should I be bubbling up an error somehow, or is there a signal I can give the client app to request more data?
When there is an underrun, the player implementation should handle it by pausing video playback internally. To the end user playback appears paused, while the state of the media stack is still reported as "playing". This gives the player a chance to receive some more data before resuming playback. In the reference implementation the PlayerWorker achieves this by pausing audio playback; as the media time and video playback are driven by the audio time, the whole player is paused.
When new data arrives, the player should resume playback automatically. The player implementation may also choose to increase the amount of buffer required for preroll/resuming to avoid future underruns, but this is usually not required.
That said, you mentioned that your app constantly runs into underruns. Even though underruns can be handled gracefully, it is worth solving this for a better user experience.
The first thing I'd check is that the test environment has enough network bandwidth for the requested video quality. If the app is targeted at a market with very poor network conditions, consider buffering more media data.
If the app underruns when there is enough network bandwidth, it indicates that the media data is not being processed fast enough. A good way to check is whether kSbPlayerDecoderStateNeedsData fires frequently enough and SbPlayerWriteSample() is called without much delay, as this is the only place that moves media data across the Starboard boundary.
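This is not Starboard code, but to make the pattern above concrete, here is a toy TypeScript sketch (every name is invented for illustration) of a player that pauses itself on underrun while still reporting a playing state, resumes once enough media has been written, and optionally raises its resume threshold:

// Toy model of the internal-pause pattern; not tied to any real player API.
class UnderrunAwarePlayer {
  private bufferedMs = 0;
  private internallyPaused = false;
  private resumeThresholdMs = 500; // media to accumulate before resuming

  // What the client application sees: the state stays "playing" even while
  // the player is internally paused waiting for more data.
  get reportedState(): "playing" {
    return "playing";
  }

  // Called whenever the app writes another sample's worth of media.
  onSampleWritten(durationMs: number): void {
    this.bufferedMs += durationMs;
    if (this.internallyPaused && this.bufferedMs >= this.resumeThresholdMs) {
      this.internallyPaused = false; // resume the audio clock; video follows it
    }
  }

  // Called from the playback clock as media is consumed.
  onMediaConsumed(elapsedMs: number): void {
    if (this.internallyPaused) return;
    this.bufferedMs -= elapsedMs;
    if (this.bufferedMs <= 0) {
      this.bufferedMs = 0;
      this.internallyPaused = true;  // underrun: pause internally, keep "playing"
      this.resumeThresholdMs *= 1.5; // optionally demand a bigger cushion next time
    }
  }
}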

AudioQueueOutputCallback not called at first

My question may be similar to this: Why might my AudioQueueOutputCallback not be called?
It seems that person was able to fix it by running the audio code on the main thread. I cannot do that.
I enqueue buffers to prime the audio queue, then start the queue. Shouldn't those buffers complete immediately once I start the queue?
I am setting the data size correctly.
As a hack I just re-use buffers without waiting for them to be reported as done by the callback. If I do this, I run like that for a couple of seconds, and then the buffer callback starts working from then on.
It's definitely not a good idea to hack your way around Core Audio. While it may be a quick fix, it will hurt you in subtle ways in the long run.
Your problem isn't the same as the one in the question you linked; their problem was assigning the callback on the wrong thread. In your case the callback is on the right thread; it's just that the audio buffers you are initially feeding the queue are either empty, too small, or contain data not fit for audio playback.
Keep in mind that the callback's purpose is to fire after each audio buffer supplied to the audio queue has been played (i.e. consumed). The fact that the callback isn't being fired after you start the queue means there is nothing in the audio buffers for it to consume, or too little meaningful data.
When you re-use the buffers manually, you see a lag because the audio queue is trying to process the empty/erroneous buffers you originally supplied it; then you resupply those same buffers with valid data, which the queue eventually plays, and from that point the callback fires.
Solution: compare the data you put in the buffers before starting the queue with the data you supply manually later; I'm sure there is a difference. If that doesn't help, please show your code for further analysis.

Flex 4 > spark.components.VideoPlayer > How to switch bit rate?

The VideoPlayer component (possibly VideoDisplay as well) is capable of automatically picking the best-quality video from the list it's given. An example is here:
http://help.adobe.com/en_US/FlashPlatform/beta/reference/actionscript/3/spark/components/mediaClasses/DynamicStreamingVideoItem.html#includeExamplesSummary
I cannot find the answers to the questions below.
Assuming that the server streaming the recorded videos is capable of switching among versions of the same video encoded at different bit rates, and of streaming them from any point within their timelines:
Is the bandwidth test/calculation within this component done only before the video starts playing, at which point it picks the best video source and never uses the others? Or does it run its bandwidth tests continuously or periodically and switch between video sources accordingly during playback?
Does it support setting the video source through code, and can its automatic switching between video sources be turned off (in case I want to offer this functionality to the user as a button, dropdown, or similar)? I know that the preferred video source can be set, but this only means that that source will be tried first.
What other media servers can be used with this component, besides the one provided by Adobe, to achieve automatic and manual switching between different qualities of the same video?
Obviously, I'd like to create a player that is smart enough to switch automatically between different quality versions of a video, and that also supports manual instructions about which source to play - both without interrupting playback, or at least without restarting it (minor interruptions are acceptable). The playback also needs to be able to start at any given point within the video once enough data has been buffered, and, most importantly, I want to be able to start playback beyond what's buffered. A note or two about fast-forwarding wouldn't hurt either, if anyone knows anything.
Thank you for your time.

Rate limiting a Ruby file stream

I am working on a project that involves uploading Flash video files to an S3 bucket from a number of geographically distributed nodes.
The video files are about 2-3 MB each, and we only send one file per node every ten minutes. However, the bandwidth we consume needs to be rate limited to ~20k/s, because these nodes deliver streaming media to a CDN, and due to their locations we can only get 512k max upload.
I have been looking into the aws-s3 gem, and while it doesn't offer any kind of rate limiting, I am aware that you can pass in an IO stream. Given this, I am wondering if it might be possible to create a rate-limited stream that overrides the read method, adds the rate-limiting logic (e.g., in its simplest form, a call to sleep between reads), and then calls out to the super of the overridden method.
Another option I considered is hacking the code for Net::HTTP and putting the rate limiting into the send_request_with_body_stream method, which uses a while loop, but I'm not entirely sure which would be the better option.
I have attempted to extend the IO class, but that didn't work at all; simply inheriting from the class with class ThrottledIO < IO didn't do anything.
Any suggestions will be greatly appreciated.
You need to use Delegate if you want to "augment" an IO. This puts a "facade" around your IO object that will be used by all "external" readers of the object but will have no effect on the operation of the object itself.
I've extracted that into a gem, since it proved to be generally useful: http://rubygems.org/gems/progressive_io
It adds an aspect to all reading methods, and I think you might be able to extend that to do basic throttling. After you are done you will be able to wrap your, say, File into it. Here's an example for an IO that gets read from:
throttled_file = ProgressiveIO.new(some_file) do | offset, size |
# compute rate and if needed sleep()
end
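The thread is about Ruby, but since the heart of the idea is just the sleep arithmetic between reads, here is a language-neutral sketch of it in TypeScript (all names invented): a wrapper that throttles an async chunk reader to a target bytes-per-second rate.

// Read a chunk, then sleep long enough that the average rate stays at or
// below bytesPerSecond. This mirrors the "sleep between reads" idea above.
type ChunkReader = () => Promise<Uint8Array | null>; // null = end of stream

function throttleReader(read: ChunkReader, bytesPerSecond: number): ChunkReader {
  const start = Date.now();
  let bytesRead = 0;

  return async () => {
    const chunk = await read();
    if (chunk === null) return null;

    bytesRead += chunk.length;
    const expectedMs = (bytesRead / bytesPerSecond) * 1000; // time this should have taken
    const elapsedMs = Date.now() - start;
    if (expectedMs > elapsedMs) {
      await new Promise((r) => setTimeout(r, expectedMs - elapsedMs));
    }
    return chunk;
  };
}

// Usage sketch: wrap whatever produces the upload body's chunks, then hand
// the throttled reader to the HTTP client so data leaves at roughly 20k/s.
// const throttled = throttleReader(readNextChunk, 20 * 1024);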
We've used aiaio's active_resource_throttle on a project at work to limit requests pulling from the Harvest API. I didn't set it up, but it works.
