AVPlayer seekToTime - macOS

I am using a periodicTimeObserverForInterval to update a plot to reflect my current audio waveform. This works great and has been giving me no trouble. It stays in sync with pausing and starting. However, when I tried to introduce arbitrary seeking into my code, the waveform visual becomes out of sync with the playing audio. This doesn't make sense to me unless the time sent to my observer is wrong after calling the seek function.
// timeScale = 22050 = sampleRate
let timeScale = self.player.currentItem.asset.duration.timescale
self.player.seekToTime(CMTimeMakeWithSeconds(Float64(time), timeScale),
    toleranceBefore: kCMTimeZero,
    toleranceAfter: kCMTimeZero,
    completionHandler: { (finished: Bool) -> Void in
        self.player.play()
    })
I can't figure out why this function is causing the visual to get out of sync.
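One direction worth sketching (an editorial suggestion, not part of the original question): re-sync the plot from the player's own clock inside the seek completion handler, before resuming playback, so the next periodic callback starts from a known-good time. This keeps the question's Swift 1.x-era API; updatePlot is a hypothetical stand-in for the asker's drawing code.
let timeScale = self.player.currentItem.asset.duration.timescale
self.player.seekToTime(CMTimeMakeWithSeconds(Float64(time), timeScale),
    toleranceBefore: kCMTimeZero,
    toleranceAfter: kCMTimeZero,
    completionHandler: { (finished: Bool) -> Void in
        if finished {
            // Re-read the player's clock rather than trusting `time`:
            // the two can disagree until the seek has fully completed.
            let actual = CMTimeGetSeconds(self.player.currentTime())
            self.updatePlot(actual) // hypothetical plot-update hook
            self.player.play()
        }
    })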

Related

How do I buffer and capture an RTSP stream to disk based on a trigger?

I think what I'm asking about is similar to this ffmpeg post about how to capture a lightning strike (https://trac.ffmpeg.org/wiki/Capture/Lightning).
I have a Raspberry Pi with an IP cam over RTSP, and what I'm wondering is how to maintain a continuous 5-second live video buffer until I trigger a "save" command, which will flush that 5-second buffer to disk and then continue streaming the live video to disk until I turn it off.
Essentially, Pi boots up, this magic black box process starts and is saving live video into a fixed-size, 5-second buffer, and then let's say an hour later - I click a button, and it flushes that 5-second buffer to a file on disk and continues to pipe the video to disk, until I click cancel.
In my environment, I'm able to use ffmpeg, gstreamer, or openRTSP. For each of these, I can connect to my RTSP stream and save it to disk, but I'm not sure how to create this ever-present 5 second cache.
I feel like the gstreamer docs are alluding to it here (https://gstreamer.freedesktop.org/documentation/application-development/advanced/buffering.html?gi-language=c), but I guess I'm just not grokking how the buffering fits in with a triggered save. From that article, I get the impression that the end-time of the video is known in advance (I could artificially limit mine, I guess).
I'm not in a great position to post-process the file, so using something like openRTSP, saving a whole bunch of video segments, and then merging them isn't really an option.
Note: After a successful save, I wouldn't need to save another video for a minute or so, so that 5-second cache has plenty of time to fill back up before the next one.
This is the closest similar question that I've found: https://video.stackexchange.com/questions/18514/ffmpeg-buffered-recording
Hey,
I don't know if you have any Python experience, but there is a library called PyAV that is a convenient Python wrapper/interface for ffmpeg.
With it you can read frames from an RTSP source and handle them however you want.
Here is a rough idea/hack implementation of what you describe; you still need to design the frame buffer yourself. If you know you get 25 FPS from your camera, you can restrict the queue size to 125.
import av
import time
import queue
from threading import Thread, Event

class LightingRecorder(Thread):
    def __init__(self, source: str = ""):
        Thread.__init__(self)
        self.source = source
        self.av_instance = None
        self.connected = False
        self.frame_buffer = queue.Queue()
        self.record_event = Event()

    def open_rtsp_stream(self):
        try:
            self.av_instance = av.open(self.source, 'r')
            self.connected = True
            print("Connected")
        except av.error.HTTPUnauthorizedError:
            print("HTTPUnauthorizedError")
        except Exception as error:
            # Catch other PyAV errors if you want, just an example
            print(error)

    def run(self):
        self.open_rtsp_stream()
        while True:
            if self.connected:
                for packet in self.av_instance.demux():
                    for frame in packet.decode():
                        if packet.stream.type == 'video':
                            # TODO:
                            # Handle clearing of the frame buffer: remove frames
                            # older than a specific timestamp, or calculate the
                            # FPS and store only that many frames per second.
                            print("Add frame to frame buffer", frame)
                            self.frame_buffer.put(frame)
                        if self.record_event.is_set():
                            # A queue.Queue cannot be iterated directly, so
                            # drain it frame by frame when saving to disk.
                            while not self.frame_buffer.empty():
                                buffered = self.frame_buffer.get()
                                buffered.to_image().save(
                                    'frame-%04d.jpg' % buffered.index)
            else:
                time.sleep(10)

LightingRecorder(source='rtsp://root:pass@192.168.1.197/axis-media/media.amp').start()
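If you only ever need the most recent five seconds, a bounded collections.deque is one simple way to implement the fixed-size buffer the TODO above describes (a sketch of mine; the 25 FPS figure is the answer's assumption):
from collections import deque

# At 25 FPS, 125 frames covers 5 seconds; once the deque is full,
# appending a new frame silently discards the oldest one.
frame_buffer = deque(maxlen=125)

def on_frame(frame):
    # Called for every decoded video frame in the demux loop.
    frame_buffer.append(frame)

def on_trigger():
    # Snapshot the last 5 seconds at the moment the save is triggered.
    return list(frame_buffer)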
iSpy/AgentDVR can do exactly what you want https://www.ispyconnect.com/userguide-recording.aspx:
Buffer: This is the number of seconds of video to buffer in memory.
This feature enables iSpy to capture the full event that causes the
motion detection event.
Edit:
iSpy runs only on Windows, unlike AgentDVR, which also has versions for Linux/OSX/RPi.

Cache for HLS video using ExoPlayer in Android

Is it possible to create a cache for HLS videos when playing in ExoPlayer, so that once a video has been completely streamed it doesn't need to load again and starts playing immediately the next time the play button is clicked? If possible, please suggest a solution. The video format is .m3u8.
For non-ABR streams, i.e. not HLS or DASH etc.
There is a well used library which provides video caching functionality:
https://github.com/danikula/AndroidVideoCache
Bear in mind that large videos will use a lot of memory so you may want to consider when and where you want to cache.
Update for ABR streams
Adaptive Bit Rate Streaming protocols like HLS or DASH typically have segmented multiple bit rate versions of the video, and the player will download the video segment by segment, choosing the bitrate for the next segment depending on the network conditions and the device capabilities.
For this reason, simply storing what you are viewing may not give you the result you want - for example if you have some network congestion part way through the video you may receive lower quality segments, which you probably don't want for a video you will watch multiple times.
You can download or play a video, forcing the stream to always select from one specific resolution by using a track selector. ExoPlayer documentation includes some info here:
https://exoplayer.dev/downloading-media.html
In an older blog post (2 years old, but the DownloadHelper part is still relevant, I think), Google provides info on how to use the DownloadHelper - https://medium.com/google-exoplayer/downloading-adaptive-streams-37191f9776e.
This includes the example:
// Replace with HlsDownloadHelper or SsDownloadHelper if the stream is HLS or SmoothStreaming
DownloadHelper downloadHelper =
    new DashDownloadHelper(manifestUri, manifestDataSourceFactory);
downloadHelper.prepare(
    new Callback() {
      @Override
      public void onPrepared(DownloadHelper helper) {
        // Preparation completes. Now other DownloadHelper methods can be called.
        List<TrackKey> trackKeys = new ArrayList<>();
        for (int i = 0; i < downloadHelper.getPeriodCount(); i++) {
          TrackGroupArray trackGroups = downloadHelper.getTrackGroups(i);
          for (int j = 0; j < trackGroups.length; j++) {
            TrackGroup trackGroup = trackGroups.get(j);
            for (int k = 0; k < trackGroup.length; k++) {
              Format track = trackGroup.getFormat(k);
              if (shouldDownload(track)) {
                trackKeys.add(new TrackKey(i, j, k));
              }
            }
          }
        }
        DownloadAction downloadAction = downloadHelper.getDownloadAction(null, trackKeys);
        DownloadService.startWithAction(context, MyDownloadService.class, downloadAction, true);
      }

      private boolean shouldDownload(Format track) {...}

      @Override
      public void onPrepareError(DownloadHelper helper, IOException e) {...}
    });
The code above looks at the manifest file - this is the index file for DASH or HLS, which lists the individual tracks and provides info, e.g. URLs, on where to find them.
It loops through each track that it finds and calls a function which you can define as you want to decide whether to include or exclude this track from the download.
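For example, a minimal shouldDownload along these lines (a sketch of mine, not from the blog post; the 720-pixel cutoff is an arbitrary assumption) would keep all audio/text tracks and only video renditions up to 720p:
private boolean shouldDownload(Format track) {
  // Format.NO_VALUE means the track has no video height (audio, subtitles),
  // so keep it; otherwise keep only renditions up to 720 pixels tall.
  return track.height == Format.NO_VALUE || track.height <= 720;
}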
To use track selection when playing back a streamed video, you can control this programmatically using the DefaultTrackSelector: https://exoplayer.dev/track-selection.html. This link includes an example that selects SD video and a German audio track:
trackSelector.setParameters(
trackSelector
.buildUponParameters()
.setMaxVideoSizeSd()
.setPreferredAudioLanguage("deu"));
The standard player also allows a user to select the track from the controls, if you are displaying controls - the ExoPlayer demo app includes this functionality.
One note - ABR streaming is quite complex and requires extra processing and storage on the server side. If you expect to only use one quality level then it may make more sense to simply stream or download the videos as mp4 etc.

Win32: Capturing desktop screenshots asynchronously

I am using a pretty common way to capture screenshots using Direct3D and encode them into an .mp4 video.
while (true)
{
    HRESULT hr = pDevice->GetFrontBufferData(0, pSurface);
    // code to create and send the sample to an IMFSinkWriter
}
It works despite being painfully slow (it's only good for 15 fps or so), but there's a major problem: I have to manually calculate each sample's timestamp, and the video length ends up being incorrect and dependent on CPU speed.
Is there any way to capture a screenshot via a callback at a fixed interval (say, 30 fps), without having to use the infinite loop?
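One approach worth sketching (an editorial suggestion, not a tested answer from the thread): pace the existing loop with a periodic waitable timer, so captures happen at a fixed interval and sample timestamps advance deterministically. pDevice/pSurface and the IMFSinkWriter plumbing are as in the question.
#include <windows.h>
#include <d3d9.h>

// A sketch: capture at ~30 fps using a periodic waitable timer instead of
// a free-running loop; sample timestamps then advance by a fixed duration.
void CaptureLoop(IDirect3DDevice9* pDevice, IDirect3DSurface9* pSurface)
{
    HANDLE hTimer = CreateWaitableTimer(NULL, FALSE, NULL);
    LARGE_INTEGER dueTime;
    dueTime.QuadPart = -1;              // relative: fire almost immediately
    SetWaitableTimer(hTimer, &dueTime, 33 /* ms, ~30 fps */, NULL, NULL, FALSE);

    LONGLONG sampleTime = 0;            // 100-ns units for the IMFSinkWriter
    const LONGLONG sampleDuration = 10000000 / 30;

    while (WaitForSingleObject(hTimer, INFINITE) == WAIT_OBJECT_0)
    {
        pDevice->GetFrontBufferData(0, pSurface);
        // ... create the sample, stamp it with sampleTime / sampleDuration,
        // and send it to the IMFSinkWriter as in the question ...
        sampleTime += sampleDuration;
    }
    CloseHandle(hTimer);
}
Note that GetFrontBufferData is still the slow part; if a capture overshoots the 33 ms period, you would want to derive sampleTime from a real clock (e.g. QueryPerformanceCounter) rather than a fixed increment.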

ffmpeg av_seek_frame with AVSEEK_FLAG_ANY causes grey screen

Problem:
omxplayer's source code calls the ffmpeg av_seek_frame() method using the AVSEEK_FLAG_BACKWARD flag. Although not 100% sure, I believe this seeks to the closest i-frame. Instead, I want to seek to exact locations, so I modified the source code such that the av_seek_frame() method now uses the AVSEEK_FLAG_ANY flag. Now, when the movie loads, I get a grey screen, generally for 1 second, during which I can hear the audio. I have tried this on multiple computers (which I am actually synchronizing, so at the same time too), so it is not an isolated incident. My guess is that seeking to non-i-frames is computationally more expensive, resulting in the initial grey screen.
Question: How, using ffmpeg, can I instruct the audio to wait until the video is ready before proceeding?
Actually, AVSEEK_FLAG_BACKWARD indicates that you want to find closest keyframe having a smaller timestamp than the one you are seeking.
By using AVSEEK_FLAG_ANY, you get the frame that corresponds exactly to the timestamp you asked for. But this frame might not be a keyframe, which means that it cannot be fully decoded. That explains your "grey screen", which appears until the next keyframe is reached.
The solution would therefore be to seek backward using AVSEEK_FLAG_BACKWARD and, from this keyframe, read the next frames (e.g. using av_read_frame()) until you get to the one corresponding to your timestamp. At this point, your frame would be fully decoded, and would not appear as a "grey screen" anymore.
NOTE: It appears that, for some reason, av_seek_frame() using AVSEEK_FLAG_BACKWARD returns the next keyframe when the frame that I am seeking is the one directly before this keyframe. Otherwise it returns the previous keyframe (which is what I want). My solution is to change the timestamp I give to av_seek_frame() to ensure that it will return the keyframe before the frame I am seeking.
Completing JonesV's answer with some code:
void seekFrame(unsigned frameIndex)
{
    // Seek is done on packet dts
    int64_t target_dts_usecs = (int64_t)round(frameIndex
        * (double)m_video_stream->r_frame_rate.den
        / m_video_stream->r_frame_rate.num * AV_TIME_BASE);
    // Remove first dts: when non-zero, seek should be more accurate
    auto first_dts_usecs = (int64_t)round(m_video_stream->first_dts
        * (double)m_video_stream->time_base.num
        / m_video_stream->time_base.den * AV_TIME_BASE);
    target_dts_usecs += first_dts_usecs;
    int rv = av_seek_frame(
        m_format_ctx, -1, target_dts_usecs, AVSEEK_FLAG_BACKWARD);
    if (rv < 0)
        throw std::runtime_error("Failed to seek");
    avcodec_flush_buffers(m_codec_ctx);
}
Then you can begin decoding, checking each AVPacket.dts against the original target dts (computed in AVStream.time_base units). As soon as you reach the target dts, the next decoded frame should be the desired frame.
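A minimal sketch of that decode-forward step, assuming the same member names as the snippet above and a target dts already expressed in AVStream.time_base units (this uses the newer send/receive decoding API, which is my choice, not the original answer's):
// After seekFrame(), read packets forward and decode until we reach the
// frame whose timestamp is at or past the target dts.
AVFrame* decodeToDts(int64_t target_dts)
{
    AVPacket pkt;
    AVFrame* frame = av_frame_alloc();
    while (av_read_frame(m_format_ctx, &pkt) >= 0)
    {
        if (pkt.stream_index == m_video_stream->index
            && avcodec_send_packet(m_codec_ctx, &pkt) == 0)
        {
            while (avcodec_receive_frame(m_codec_ctx, frame) == 0)
            {
                // Frames before the target are decoded and discarded so
                // that the target frame itself is fully decodable.
                if (frame->best_effort_timestamp >= target_dts)
                {
                    av_packet_unref(&pkt);
                    return frame;
                }
            }
        }
        av_packet_unref(&pkt);
    }
    av_frame_free(&frame);
    return nullptr; // reached EOF without hitting the target
}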

Windows Phone Media Stream Source Seek Implementation C#

I am implementing a MediaStreamSource in C# on Windows Phone 8 for streaming SHOUTcast URLs.
I can play the buffered stream from the URLs. Now I have to implement seeking over the buffered audio data.
I tried seeking forward and backward by 1 second from the GUI. Below is the code for rewinding:
if (BackgroundAudioPlayer.Instance.CanSeek)
{
    TimeSpan position = BackgroundAudioPlayer.Instance.Position;
    BackgroundAudioPlayer.Instance.Position = new TimeSpan(0, 0, (int)(position.Seconds - 1));
}
But the player stops for a long time before it starts playing again.
I think I have to implement the following method from the MediaStreamSource implementation:
protected override void SeekAsync(long seekToTime)
{
    ReportSeekCompleted(seekToTime);
}
How do I implement forward and backward seeking with MediaStreamSource without that delay?
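For completeness, a sketch of one direction a real SeekAsync could take for roughly constant-bitrate audio (my assumption, not a fix from the thread; _bufferedStream and _byteRateBps are hypothetical members of your MediaStreamSource): map the requested time to a byte offset within the data you have already buffered, reposition the read cursor, and only then report completion, so the next GetSampleAsync serves the new position immediately instead of waiting on the network.
protected override void SeekAsync(long seekToTime)
{
    // seekToTime is in 100-nanosecond ticks.
    double seconds = (double)seekToTime / TimeSpan.TicksPerSecond;
    long byteOffset = (long)(seconds * _byteRateBps); // bytes/second (hypothetical)

    // Clamp to the data actually buffered so the player never stalls
    // waiting for bytes that have not arrived yet.
    byteOffset = Math.Min(Math.Max(byteOffset, 0), _bufferedStream.Length - 1);
    _bufferedStream.Seek(byteOffset, SeekOrigin.Begin);

    ReportSeekCompleted(seekToTime);
}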
