HLS.js: prevent a stream from running in a loop

The API (https://57e037281fda7.streamlock.net/playback/chunklist_w1089149807.m3u8) returns the response below:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:16
#EXT-X-MEDIA-SEQUENCE:166874933
#EXT-X-DISCONTINUITY-SEQUENCE:0
#EXTINF:8.112,
media_w1089149807_166874933.ts
#EXTINF:16.0,
media_w1089149807_166874934.ts
#EXTINF:8.0,
media_w1089149807_166874935.ts
With the Wowza player, if the player gets an identical playlist in sequential API calls, it stops playback with this error code:
https://docs.jwplayer.com/players/docs/jw8-player-errors-reference#230001
But when I try to play the stream on the demo page https://hls-js.netlify.app/demo/, it keeps playing and somehow starts over from the beginning.
Is there a config option that stops the player when it receives the same response across multiple calls?
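hls.js doesn't appear to expose a single config flag for this, but you can detect a non-advancing live playlist yourself from its level-load events. A minimal sketch, assuming hls.js is loaded on the page: `createStaleDetector` is a hypothetical helper (not part of hls.js), while `Hls.Events.LEVEL_LOADED`, `data.details.endSN` (last media sequence number), `data.details.live`, and `hls.stopLoad()` come from the hls.js API.

```javascript
// Hypothetical helper: returns true once the playlist's last media sequence
// number has been unchanged for `maxStale` consecutive reloads.
function createStaleDetector(maxStale) {
  let lastSN = null
  let staleCount = 0
  return function update(endSN) {
    if (endSN === lastSN) {
      staleCount += 1
    } else {
      lastSN = endSN
      staleCount = 0
    }
    return staleCount >= maxStale // true => treat the live stream as ended
  }
}

// Wiring into hls.js (browser only):
if (typeof Hls !== 'undefined') {
  const hls = new Hls()
  const isStale = createStaleDetector(3)
  hls.on(Hls.Events.LEVEL_LOADED, function (event, data) {
    // data.details.endSN is the last media sequence number in the playlist.
    if (data.details.live && isStale(data.details.endSN)) {
      hls.stopLoad() // stop fetching instead of looping back to the start
    }
  })
}
```

The threshold of 3 identical reloads mirrors the behavior described for the Wowza/JW player; tune it to your playlist's target duration.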

Related

video.insert failing silently with positive response and video id

I have been testing my Python video upload script for two days.
Yesterday everything was OK: uploads succeeded until the quota limit was reached.
Today I continued testing: insert/upload succeeds with a response containing a video ID, but the video with that ID never appears on the channel, and it is not visible in YouTube Studio either.
I tried with two different videos, same result, and I got an ID for each of them.
Now the quota is exhausted again.
Examples of uploaded-but-not-visible IDs: pSqyId96gTk, -kw-yn-qAxI
If I upload the same video through the YouTube web frontend, everything is OK.
Any idea how to analyse this problem?
Here is part of the response dict:
here is part of the response dict:
{'kind': 'youtube#video',
 'etag': 'MgZ0r9Yu43ERF415Jw1lPRJgmDc',
 'id': 'ZAuHewNcxL8',
 'snippet': {'publishedAt': '2021-12-22T12:40:54Z',
             'channelId': 'UCNSTNQqqGxwBqoOdekZPj_A',
             ...},
 'status': {'uploadStatus': 'uploaded',
            'privacyStatus': 'private',
            'license': 'youtube',
            'embeddable': True,
            'publicStatsViewable': True}}
Setting privacyStatus to "public" has the same effect.
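Note that uploadStatus: 'uploaded' only means the bytes arrived; a video is not playable until the status reaches 'processed'. A minimal sketch of interpreting the status field; describeUploadStatus is a hypothetical helper, not part of the YouTube Data API:

```javascript
// Hypothetical helper: interpret the `status` part of a videos.insert /
// videos.list response. The uploadStatus values below are the ones the
// YouTube Data API documents for videos.
function describeUploadStatus(status) {
  switch (status.uploadStatus) {
    case 'uploaded':
      return 'bytes received, still processing (or blocked, e.g. by an API audit lock)'
    case 'processed':
      return 'processing finished, video should be visible'
    case 'failed':
      return 'processing failed: ' + (status.failureReason || 'unknown reason')
    case 'rejected':
      return 'rejected: ' + (status.rejectionReason || 'unknown reason')
    default:
      return 'unknown uploadStatus: ' + status.uploadStatus
  }
}
```

After uploading, you can poll videos.list with part=status,processingDetails for the returned ID. If the status never leaves 'uploaded' while uploads through the web frontend work fine, a common cause since mid-2020 is that videos uploaded via an API project that has not completed Google's API verification/audit are locked as private.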

How to do synchronized playback using the AVAudioPlayer API?

There is an example (linked below) of using AVAudioPlayer. The description says it is able to:
Play multiple sounds at the same time with optional synchronization.
I don't see how to do that in the example.
Apple API that says the same thing:
Play multiple sounds simultaneously by synchronizing the playback of multiple players
https://developer.apple.com/documentation/avfaudio/avaudioplayer?language=objc
Example:
https://github.com/xamarin/docs-archive/tree/master/Recipes/ios/media/sound/avaudioplayer
Note: The repository is archived and does not allow adding issues.
Use the play(atTime:) method on all the players you want to synchronize and pass the same time value to each so they start together.
I read about the play(atTime:) method and thought it meant "play from this position in time within the sound", because the parameter is named time.
It actually takes a time on the audio output device's timeline (relative to deviceCurrentTime), i.e. a moment in the near future at which playback should start, not a position within the file.
So if you only look at the autocomplete signature, play(atTime: time) doesn't give you the detail the documentation does. Note that the player's separate currentTime property is what represents the playback position within the sound.
Documentation:
Plays audio asynchronously, starting at a specified point in the audio
output device’s timeline.
func startSynchronizedPlayback() {
    // Create a time offset relative to the current device time.
    let timeOffset = playerOne.deviceCurrentTime + 0.01
    // Start playback of both players at the same time.
    playerOne.play(atTime: timeOffset)
    playerTwo.play(atTime: timeOffset)
}

ChromeCast - Stream Calls Failing when Stalled for long time

I'm attempting to play a live stream on ChromeCast. The stream is cast fine and playback starts appropriately.
However, after the stream has played for somewhere between 2 and 15 minutes, the player stops and I get MediaStatus.IDLE_REASON_ERROR in my RemoteMediaClient.Callback.
Looking at the console logs from ChromeCast, I see that 3-4 calls have failed. Here are the logs:
14:50:26.931 GET https://... 0 ()
14:50:27.624 GET https://... 0 ()
14:50:28.201 GET https://... 0 ()
14:50:29.351 GET https://... 0 ()
14:50:29.947 media_player.js:64 [1381.837s] [cast.player.api.Host] error: cast.player.api.ErrorCode.NETWORK/3126000
Looking at Cast MediaPlayer.ErrorCode, error 312* is:
Failed to retrieve the media (bitrated) playlist m3u8 file with three retries.
Developers need to validate that their playlists are indeed available. It could also be the case that a particular user cannot reach the playlist.
I checked; the playlist was available. So I thought perhaps the server wasn't responding, and looked at the response logs for the network calls.
(Screenshots: timing breakdown of a successful request vs. a stalled request.)
Note that the stall time far exceeds the usual stall time.
ChromeCast isn't actually sending these calls at all; the requests simply stay stalled for a long time until they are cancelled. All the successful requests stall for less than 14 ms (mostly under 2 ms).
The Network Analysis timing breakdown gives three reasons for stalling:
There are higher priority requests.
There are already six TCP connections open for this origin, which is the limit. Applies to HTTP/1.0 and HTTP/1.1 only.
The browser is briefly allocating space in the disk cache.
While I don't believe the first one applies here, the latter two could. In both cases, however, I believe the fault lies with cast.player.
Am I doing something wrong?
Has anyone else faced the same issue? Is there any way to either fix it or come up with a workaround?

I'm recording speech using MediaRecorder API, why are chunks smaller than the actual size?

I have an issue with MediaRecorder API (https://www.w3.org/TR/mediastream-recording/#mediarecorder-api).
I'm using it to record speech from a web page in Chrome and save it as chunks. I need to be able to play the audio both while and after it is recorded, so it's important to keep those chunks.
Here is the code which is recording data:
navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(function (stream) {
  recorder = new MediaRecorder(stream, { mimeType: 'audio/webm; codecs="opus"' })
  var previous_timecode = null
  recorder.ondataavailable = function (e) {
    // Duration of the first chunk is calculated in a different way.
    duration = previous_timecode ? e.timecode - previous_timecode : null
    previous_timecode = e.timecode
    // Read the blob from `e.data`, base64-encode it and send it to the server,
    // together with the duration calculated from the events.
  }
  recorder.start(1000) // emit a chunk roughly every 1000 ms
})
The issue has actually happened only once, but it is still quite scary. During 7 minutes of recording I got only 5 minutes of audio. Analyzing the chunks showed that at some point they became much smaller than expected: the data was still emitted every second, but the duration of each chunk was around 400-700 ms.
The audio was correct and had no gaps; it just arrived with a growing delay. At some points the chunk duration grew a bit, up to 4.8 s in a chunk, but the total delay still grew to about 2 minutes.
In the attached CSV (https://transfer.sh/stgnW/1.csv) you can see the duration of each chunk calculated with ffmpeg (the duration of the audio file containing the first n chunks minus that of the file containing the first n-1 chunks), and also the durations calculated from the e.timecode values.
It looks like some throttling issue. Is there something like that in Chrome? How could I fix my code to make sure it isn't throttled that way?
It's a long shot, but try removing any browser-level processing that might be applied to the audio, using these constraints:
echoCancellation:false
autoGainControl:false
noiseSuppression:false
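The constraints above can be wired in like this; a minimal sketch, where the audio constraints are standard MediaTrackConstraints passed to getUserMedia:

```javascript
// Disable Chrome's audio post-processing for the captured track.
const audioConstraints = {
  echoCancellation: false,
  autoGainControl: false,
  noiseSuppression: false
}

// Browser only: request the raw track and record it as before.
if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices
    .getUserMedia({ audio: audioConstraints, video: false })
    .then(function (stream) {
      const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm; codecs="opus"' })
      recorder.start(1000)
    })
}
```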

Firefox ignoring the Content-Range response header and playing only the sample sent

I have built an audio stream for MP3 files, and each time the client requests the audio it receives something like this:
But it just plays a 1-minute sample instead of the full 120 minutes.
What am I doing wrong here?
Not 100% sure because you didn't provide code or an example stream to test, but your handling of HTTP range requests looks broken.
In your example request, the client sends Range: bytes=0-, and your server responds with a 1 MiB response:
Content-Length: 1048576 (i.e. 1 MiB)
Content-Range: 0-1048575/...
This is wrong; the client did not request this. It requested bytes=0-, meaning all data from position 0 to the end of the entire stream (see the HTTP/1.1 RFC), i.e. a response equal to one without any Range header. (IIRC, Firefox still sends Range: bytes=0- to detect whether the server handles ranges in the first place.)
This, combined with the Content-Length, leads the client (Firefox) to think the whole resource is just 1 MiB in size instead of its real size. I'd imagine the first 1 MiB of your test stream comes out to 1:06 of audio.
PS: The Content-Duration header (RFC 3803) is something browsers generally don't implement at all and just ignore.
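A minimal sketch of handling an open-ended range correctly, assuming a Node.js file server: parseRange is a hypothetical helper, and the response wiring is shown as comments since the original server code isn't available.

```javascript
// Hypothetical helper: parse "Range: bytes=START-END" against the full
// resource size. An open-ended "bytes=0-" means "from 0 to the last byte".
function parseRange(header, totalSize) {
  const m = /^bytes=(\d+)-(\d*)$/.exec(header || '')
  if (!m) return null // no/unsupported Range: serve the whole resource with 200
  const start = Number(m[1])
  const end = m[2] === '' ? totalSize - 1 : Math.min(Number(m[2]), totalSize - 1)
  if (start > end) return null
  return { start: start, end: end }
}

// In the request handler, a satisfiable range gets a 206 whose Content-Range
// advertises the size of the WHOLE resource, not just the bytes sent:
//
//   const range = parseRange(req.headers.range, totalSize)
//   if (range) {
//     res.writeHead(206, {
//       'Accept-Ranges': 'bytes',
//       'Content-Type': 'audio/mpeg',
//       'Content-Length': range.end - range.start + 1,
//       'Content-Range': 'bytes ' + range.start + '-' + range.end + '/' + totalSize
//     })
//     fs.createReadStream(path, { start: range.start, end: range.end }).pipe(res)
//   }
```

With this, a Range: bytes=0- request is answered with the entire file (or a partial body whose Content-Range still reports the true total), so Firefox knows the real resource size.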
Just an idea: did you try some of the other HTTP status codes, like '308 Resume Incomplete', or '503 Service Temporarily Unavailable' plus 'Retry-After: 2', or '413 Request Entity Too Large' plus 'Retry-After: 2'?
