Firefox and Windows Media Player: small green horizontal line after a while

We use the Windows Media Player plugin to play some video pages in Firefox 20. This works great, but after a while (this varies, but often after roughly 30 minutes) the videos are no longer shown; instead of the video, only a small horizontal green line in the middle of the video area is displayed (on a black background). The video files are OK, because the same files play back successfully before the green line appears. And after restarting Firefox everything is fine again until, after several minutes, the green line shows up once more ...
I have already installed the latest graphics driver. Disabling/enabling video acceleration in Windows Media Player doesn't help either.
Here is our environment:
Windows 7 64-bit
WMP 12.0.7601.17514
Firefox 20.0.1
Windows Media Player Firefox Plugin 1.0.0.8
EDIT:
I can see that the Media Player creates its own playlist for each played video. The status bar of the WMP plugin shows "Wiedergabeliste1", "Wiedergabeliste2", "Wiedergabeliste3", ... ("Wiedergabeliste" means "playlist") for the first, second, third, ... video. But for the first video that is displayed as a green line, the status bar shows only the name of the video file (without ".wmv").
Also, while the videos are displayed correctly, the "Statistics" dialog (right-click inside the WMP plugin) shows valid values for frame rate and so on. But once only the green line is shown, the frame rate and some other values are "0".
I have also tested this with Firefox 22, with the same effect (green line after some minutes). On IE 8 the effect is a little different: after some minutes I get the green line too, but only for every second video (one video is OK, the next shows a green line, the next is OK again, the next shows a green line, ...).
Thanks and regards,
Steffen
PS: This is also posted on: http://answers.microsoft.com/en-us/windows/forum/windows_7-pictures/windows-media-player-small-green-horizontal-line/c9ea0621-a678-4017-9aac-55a1239db632

It seems this problem is hardware-related. We have found that the green line occurs on only one machine, and on at least three other machines it does not. But it does not seem to be related to the graphics card (Intel/ATI HD4000), because we have not seen this problem on another machine with the same CPU and graphics card ...

Related

How to get image frames from video in Flutter in Windows?

In my Flutter desktop application I am playing video using the dart_vlc package. I want to take image frames (like screenshots) from a video file at different positions. At first I tried the screenshot package, where I found two problems.
It takes a screenshot of the NativeVideo widget (a widget for showing video) only, but does not include the image frame inside the widget.
The video control buttons (play, pause, volume) appear in the screenshot, which I don't want.
There are other packages for exporting image frames from video, such as export_video_frame or video_thumbnail, but neither of them is supported on Windows. I want to do the same thing they do, but on Windows.
So how to get image frames from a video file playing in a video player in Windows?
The dart_vlc package itself provides a function to take snapshots from the video file currently playing in the video player. The general structure is
videoPlayer.takeSnapshot(file, width, height)
Example:
videoPlayer.takeSnapshot(File('C:/snapshots/snapshot1.png'), 600, 400)
This function call captures the video player's current position as an image and saves it in the mentioned file.

ffmpeg: transmission problems / artifacts in rtsp screen grab - might be a WiFi problem

In short: Is there a way to "force" ffmpeg to not save a grabbed frame if there are transmission problems? Or any other software that does the same and I just don't know of?
Long story:
Updating my house surveillance from almost 10-year-old DCS-932L cameras to Tapo C100 cameras, I changed the image delivery method from FTP push to RTSP grab via ffmpeg.
I had written a program in C++ to check for "bad" pictures from the old cameras, where parts of the picture tended to be simply black about once every minute (I'm grabbing a picture every 2 seconds). The Tapo C100 doesn't feature FTP push, so (after a few days of trying) I ended up with
ffmpeg.exe -y -i rtsp://user:pass@10.0.0.%ld:554/stream1 -vframes 1 %scamera\rtsp.jpg -loglevel quiet
This works absolutely perfectly in my main house, which features a Fritz!Box 7590 and a set of Fritz!Powerline repeaters (one 510 and two 540e), plus one WiFi repeater (Fritz! 600), as my phone line and the router are in the basement.
In my holiday home, though, it doesn't. The WiFi is managed by a hybrid DSL/5G box I have no alternative to, a Huawei DN9245W, which acts as the DHCP server because this is almost impossible to change. Everything "real" is managed by another Fritz!Box 7590, connected via Ethernet, and another set of Fritz!Powerline repeaters (one 510 and two 540e) plus half a dozen WiFi repeaters, mostly Fritz! 310, 450E and 600. The house was partially built with local stone, which is very iron-rich, and there's a lot of metallized glass. The full setup is shown in the image.
Now, this does produce different artifacts, about two per minute or in every 15th picture, see
Image with artifacts No. 1
Thinking this might be a transmission problem, I tried forcing the stream grab via TCP, because while RTSP over UDP doesn't have error correction, TCP does:
ffmpeg.exe -rtsp_transport tcp -i rtsp://user:pass@10.0.0.%ld:554/stream1 -y -f image2 -update 1 -r 1 -vframes 1 -qscale:v 30 %scamera\rtsp.jpg -loglevel quiet
Which didn't change the artifacts much, see Image with artifacts No. 2
The house now has a total of 12 cameras, six of which are each "managed" by an older Dell Optiplex desktop bought used off eBay, with an i3 or i5 processor from about 2015, which runs at about 65% load. My software checks whether the grabbed picture has finished saving (to a RAM disk), renames it, and checks whether there are artifacts; if so, it drops the picture, and if not, it converts it to a bitmap, compares it to the previous image, guesses whether there's a change, marks that change with a rhombus and rates it, saves the result as a JPEG file, and then does some other stuff that's not relevant here. See the image of my program running with six cameras.
I did try grabbing keyframes only, but a bunny or deer or burglar hopping through my property doesn't produce a keyframe, so that turned out to be missing the point.
I'm out of ideas here. It does work flawlessly in the main house. It doesn't in the holiday house. I can hardly install more repeaters; I already tried mesh and not-mesh, and the problem isn't exactly wifi overload, because even with just one camera running, it still persists. In certain places. Some have no problems. Reasons? No clue. I really hope someone has a good idea.
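Not from the original post, but since the pipeline above already drops pictures with artifacts, one cheap extra pre-filter worth sketching is to reject grabs that were truncated in transit: a complete JPEG begins with the SOI marker (FF D8) and ends with the EOI marker (FF D9), and a file cut off mid-transfer usually lacks the latter. A minimal sketch in Python (the helper name and sample bytes are my own, purely illustrative):

```python
# Sketch: reject JPEG grabs that were truncated in transit.
# A complete JPEG starts with the SOI marker (FF D8) and ends with
# the EOI marker (FF D9); a grab cut off mid-transfer usually lacks EOI.

def looks_complete_jpeg(data: bytes) -> bool:
    return (
        len(data) > 4
        and data[:2] == b"\xff\xd8"                    # SOI: start of image
        and data.rstrip(b"\x00")[-2:] == b"\xff\xd9"   # EOI, ignoring zero padding
    )

# Illustrative synthetic data, not a real camera grab:
good = b"\xff\xd8" + b"\x00" * 100 + b"\xff\xd9"
truncated = good[:-10]
print(looks_complete_jpeg(good))       # True
print(looks_complete_jpeg(truncated))  # False
```

This only catches truncation; artifacts inside an otherwise complete file still need a content check like the one the C++ program already does.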
I got a couple of these cameras, and on what I'd call a poor-quality WiFi network I had similar problems until I switched the camera to 1080p mode via the app. After that the frames seem to be fine.
My camera defaulted to 720p mode, plus the always-available(?) 320p stream2.
My $0.02. Thanks for your post, BTW.

How to add colorProfile with AVAssetWriter to video recorded from screen using CGDisplayStream

I've written a screen-recording app that writes out H.264 movie files using VideoToolbox and AVAssetWriter. The colors in the recorded files are a bit dull compared to the original screen. I know that this is because the color profile is not stored in the video file.
This is closely related to How to color manage AVAssetWriter output
I have created a testbed to show this on github ScreenRecordTest
If you run this app you can start recording using CMD-R and stop it using the same CMD-R (you have to start and stop recording once to get a fully written movie file). You will find the recording in your /tmp/ folder under a name similar to this: "/tmp/grab-2018-10-25 09:23:32 +0000.mov".
While recording, the app shows two live images: (a) the frame obtained from CGDisplayStream, and (b) the CMSampleBuffer that came out of the compressor.
What I found out is that the IOSurface returned from CGDisplayStream is not color-managed, so you'll notice the "dull" colors even before compression. If you un-comment line 89 in AppDelegate.swift
// cgImage = cgImage.copy(colorSpace: screenColorSpace)!
this live preview will have the correct colors. But this is only for displaying the IOSurface before compression. I have no idea how to make the other live preview (after compression; line 69 in AppDelegate) show correct colors (that is, how to apply a color profile to a CMSampleBuffer), or, most importantly, how to tag the written video file with the right profile so that when opening the .mov file I get the correct colors on playback.

Animated GIF to video with ffmpeg - wrong timing

I'm trying to convert an animated GIF to video with ffmpeg, but there's a strange problem: the time delays of each frame seem to be off by one frame.
For example, if frame #1 is supposed to be shown for 2000 ms and frames #2 to #10 are supposed to be shown for 100 ms each, the resulting video immediately skips to frame #2, which is then shown for 2000 ms instead :P
Is this some kind of a bug? Or am I doing something wrong?
Here's my command line:
ffmpeg -i Mnozenie_anim_deop.gif Mnozenie_anim.mp4
so nothing extraordinary, just the defaults. (Unless this is the root of the problem? Maybe my defaults are bad, and I need to specify some magic options?)
This problem seems to appear for any video formats except MKV, and when I play these files in mplayer, they all behave that way except MKV.
But when I open them in kdenlive (a non-linear video editing program), the problem appears in all of them, including MKV (which is strange, because it plays back just fine in mplayer :q ).
I tried converting the same exact file with this online converter here:
https://ezgif.com/gif-to-mp4
and there is no problem with its output: it plays back fine both in mplayer and when imported into kdenlive, so I guess they must be using some magic command-line options that I'm missing.
Any ideas what can be wrong and how to track down the culprit?
Edit: Here's a sample animated GIF file I'm trying to convert:
http://nauka.mistu.info/Matematyka/Algebra/Szeregi/Mnozenie_anim.gif
and the MP4 file that I generated from it which demonstrates this problem:
http://sasq.comyr.com/Stuff/Mnozenie_anim.mp4
As you can see, the fade in starts prematurely but stops for a couple of seconds instead of waiting for a couple of seconds BEFORE the fade in begins.
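For what it's worth, the symptom is consistent with each frame's delay being applied one frame late. A tiny illustration (my own sketch, with hypothetical delay values, not taken from the GIF above):

```python
# Sketch: how an off-by-one in per-frame delays reproduces the symptom.
# delays[i] = how long frame i should be shown, in ms (hypothetical values).
delays = [2000, 100, 100, 100]

def start_times(delay_list):
    """Presentation timestamp (ms) of each frame, given per-frame delays."""
    times, t = [], 0
    for d in delay_list:
        times.append(t)
        t += d
    return times

correct = start_times(delays)               # frame #1 holds for 2000 ms
off_by_one = start_times([0] + delays[:-1]) # each delay applied one frame late
print(correct)     # [0, 2000, 2100, 2200]
print(off_by_one)  # [0, 0, 2000, 2100]
```

With the shifted list, frame #1 gets a zero delay (it "immediately skips") and frame #2 inherits the 2000 ms hold, which is exactly the reported behavior.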

How do I set the first and last frame of a video to be an image?

HTML5 implementations differ across browsers. In Firefox, the image specified by the poster attribute is shown until the user clicks play on the video. In Chrome, the poster image is shown until the video is loaded (not played), at which point the first frame of the video is shown.
To reconcile this difference, I would like to set the first frame of the video to the poster image so that the experience is the same in both browsers.
I would prefer to do this using ffmpeg or mencoder. I have very limited experience with these, however, so if someone could point me in the right direction, I would be much obliged.
Thanks!
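One common approach, sketched below, is to render the poster image as a one-frame clip and concatenate it in front of the video with ffmpeg's concat filter. All filenames, the frame rate, and the duration here are assumptions, and the image must match the video's resolution and pixel format for concat to work:

```python
# Sketch: prepend a poster image as the very first frame of a video.
# Filenames, fps, and duration are assumptions, not from the question.
poster, video, output = "poster.png", "input.mp4", "output.mp4"
fps = 25  # assumed frame rate of the target video

cmd = [
    "ffmpeg", "-y",
    "-loop", "1", "-t", str(1 / fps), "-i", poster,  # still image, one frame long
    "-i", video,
    # concat expects both inputs to share resolution and pixel format
    "-filter_complex", "[0:v][1:v]concat=n=2:v=1:a=0[v]",
    "-map", "[v]",
    output,
]
print(" ".join(cmd))
# To execute: subprocess.run(cmd, check=True)  (requires ffmpeg on PATH)
```

Doing the same for the last frame is symmetric: add the image as a third input and extend the filter to concat=n=3.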
