Setting quality levels to Plyr player with HLS from .m3u8 file - plyr

I am using the Plyr player with an HLS implementation in an Angular component. I followed all the settings mentioned in the post below:
Adding Quality Selector to plyr when using HLS Stream
Still, I can't get the quality option in the UI. On clicking the gear icon, only the speed menu shows, with eight options (0.75 to 4). Even when I set the quality levels from the .m3u8 file on the player's quality option, they don't show up in the UI.
In the console, player.config.speed shows the speed options that appear in the player UI, but player.config.quality holds the built-in default value { default: 576, options: [4320, 2880, 2160, 1440, 1080, 720, 576, 480, 360, 240] }, the same as in Plyr's README.md.
I tried to set the quality levels from the Hls.Events.MANIFEST_PARSED event.
Does anybody know how to set quality levels from an .m3u8 file on the Plyr player? Please help.

Related

How to get image frames from video in Flutter in Windows?

In my Flutter desktop application I am playing video using the dart_vlc package. I want to capture image frames (like screenshots) from a video file at different positions. At first I tried the screenshot package, where I found two problems:
It takes a screenshot of the NativeVideo widget (a widget for showing video) only, and does not include the image frame inside the widget.
The video control buttons (play, pause, volume) appear in the screenshot, which I don't want.
There are other packages for exporting image frames from video, such as export_video_frame or video_thumbnail, but neither is supported on Windows. I want to do the same thing they do, on Windows.
So how do I get image frames from a video file playing in a video player on Windows?
The dart_vlc package itself provides a function to take snapshots of the video file currently playing in the video player. The general structure is:
videoPlayer.takeSnapshot(file, width, height)
Example:
videoPlayer.takeSnapshot(File('C:/snapshots/snapshot1.png'), 600, 400)
This call captures the video player's current position as an image and saves it to the given file.

Quickest way to add an image watermark on video in Android?

I have used ffmpeg and mp4parser to add an image watermark to a video. Both work when the video is small, less than about 5 MB to 7 MB, but with larger videos (anything above 7 MB or so) they fail. What resources would help with adding a watermark to a video quickly? If you have any useful resources, please let me know.
It depends on what exactly you need.
If the watermark is just needed when the video is viewed on the android device, the easiest and quickest way is to overlay the image with a transparent background over the video view. You will need to think about fullscreen vs inline and portrait vs landscape to ensure it lines up as you want.
If you want to watermark the video itself, so that the watermark is included if the video is copied or sent elsewhere, then ffmpeg is likely as fast as other solutions on the device itself. If you are able to send the video to a server and have the watermark applied there, you will be able to use much more powerful compute resources.
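For the ffmpeg route, the overlay filter is the usual way to burn an image into the video. A minimal sketch; the input clip and watermark below are synthetic stand-ins generated with ffmpeg's lavfi source, so replace the first two commands with your own files:

```shell
# Synthetic stand-ins for your video and watermark (replace with real files).
ffmpeg -y -f lavfi -i testsrc=duration=2:size=640x360:rate=24 input.mp4
ffmpeg -y -f lavfi -i "color=c=white:size=120x40,format=rgba" -frames:v 1 watermark.png
# Burn the watermark into the video, 10 px from the top-left corner.
ffmpeg -y -i input.mp4 -i watermark.png \
  -filter_complex "overlay=x=10:y=10" output.mp4
```

The same command works server-side, where large files are far less of a constraint than on a phone.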

Video was encoded with a new width + height along with the old one. Can I re-encode with just the old dimensions using ffmpeg?

I've got a video out of OBS that plays normally on my system if I open it with VLC, for example, but when I import it into my editor (Adobe Premiere) it gets weirdly cropped down. Inspecting the video's metadata shows that, for some reason, it was encoded with a new width and height on top of the old ones. Is there a way, using ffmpeg, to re-encode/transcode the video to a new file with only the original width and height?
Bonus question: is there a way to extract the audio channels from my video as separate .mp3s? There are 4 audio channels on the video.
Every time you re-encode a video you will lose some quality. Scaling the video up will not reintroduce details that were lost when it was scaled down.
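That said, if a re-encode at explicit dimensions is what you want, ffmpeg's scale filter does it, and -map can pull each audio stream out separately. A sketch, assuming the original dimensions were 1920x1080 (substitute your real values; the input below is a synthetic stand-in for your OBS recording):

```shell
# Synthetic stand-in for the OBS recording (replace with your real file).
ffmpeg -y -f lavfi -i testsrc=duration=2:size=1280x720:rate=30 input.mp4
# Re-encode the video stream at an explicit width and height.
ffmpeg -y -i input.mp4 -vf scale=1920:1080 -c:v libx264 fixed.mp4
# Bonus: one MP3 per audio stream (0:a:0 is the first audio stream, etc.).
# The stand-in has no audio, so this line is illustrative only:
# ffmpeg -i input.mp4 -map 0:a:0 track1.mp3 -map 0:a:1 track2.mp3
```

Note that OBS's "audio channels" are usually separate audio tracks (streams) in the container, which is why -map addresses them as 0:a:0, 0:a:1, and so on.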

How to add colorProfile with AVAssetWriter to video recorded from screen using CGDisplayStream

I've written a screen-recording app that writes out H.264 movie files using VideoToolbox and AVAssetWriter. The colors in the recorded files are a bit dull compared to the original screen. I know that this is because the color profile is not stored in the video file.
This is closely related to How to color manage AVAssetWriter output
I have created a testbed to show this on GitHub: ScreenRecordTest
If you run this app you can start recording using CMD-R and stop it with the same CMD-R (you have to start and stop recording once to get a fully written movie file). You will find the recording in your /tmp/ folder under a name like "/tmp/grab-2018-10-25 09:23:32 +0000.mov"
While recording, the app shows two live images: a) the frame obtained from CGDisplayStream and b) the CMSampleBuffer that comes out of the compressor.
What I found out is that the IOSurface returned from CGDisplayStream is not color-managed, so you'll notice the "dull" colors even before compression. If you un-comment line 89 in AppDelegate.swift
// cgImage = cgImage.copy(colorSpace: screenColorSpace)!
this live preview will have the correct colors. But that only affects displaying the IOSurface before compression. I have no idea how to make the other live preview (after compression; line 69 in AppDelegate) show correct colors (that is, how to apply a color profile to a CMSampleBuffer), or, most importantly, how to tag the written video file with the right profile so that the .mov file shows the correct colors on playback.

How do I set the first and last frame of a video to be an image?

HTML5 implementations differ across browsers. In Firefox, the image specified by the placeholder attribute is shown until the user clicks play on the video. In Chrome, the placeholder image is shown until the video is loaded (not played), at which point the first frame of the video is shown.
To reconcile this, I would like to set the first frame of the video to the placeholder image so that the experience is the same in both browsers.
I would prefer to do this using ffmpeg or mencoder. I have very limited experience with these, however, so if someone could point me in the right direction, I would be much obliged.
Thanks!
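One ffmpeg approach is to turn the placeholder image into a single-frame clip and concatenate it in front of the video. A sketch, assuming a 640x360, 25 fps video; the inputs below are synthetic stand-ins, and you would match the size and frame rate to your real file, since the concat filter requires identical stream parameters:

```shell
# Synthetic stand-ins (replace with your video and placeholder image).
ffmpeg -y -f lavfi -i testsrc=duration=2:size=640x360:rate=25 video.mp4
ffmpeg -y -f lavfi -i color=c=blue:size=640x360 -frames:v 1 poster.png
# Turn the poster into a one-frame clip with matching parameters.
ffmpeg -y -loop 1 -i poster.png -frames:v 1 -r 25 -pix_fmt yuv420p poster.mp4
# Concatenate: poster frame first, then the original video.
ffmpeg -y -i poster.mp4 -i video.mp4 \
  -filter_complex "[0:v][1:v]concat=n=2:v=1:a=0[v]" -map "[v]" combined.mp4
```

The result starts with one frame of the image, so Chrome's "show the first frame once loaded" behavior displays the placeholder just as Firefox does.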
