PhantomJS screenshots and ffmpeg: missing frames

I have a problem making a video from website screenshots taken with PhantomJS.
PhantomJS does not produce a screenshot for every frame within a second, and some seconds are skipped entirely, so there are many missing frames.
The result is a video that plays too fast, with many jumps in the animation.
test.js:
var page = require('webpage').create(),
    address = 'http://raphaeljs.com/polar-clock.html',
    duration = 5,   // duration of the video, in seconds
    framerate = 24, // number of frames per second. 24 is a good value.
    counter = 0,
    width = 1024,
    height = 786;
frame = 10001;

page.viewportSize = { width: width, height: height };
page.open(address, function(status) {
    if (status !== 'success') {
        console.log('Unable to load the address!');
        phantom.exit(1);
    } else {
        window.setTimeout(function () {
            page.clipRect = { top: 0, left: 0, width: width, height: height };
            window.setInterval(function () {
                counter++;
                page.render('newtest/image' + (frame++) + '.png', { format: 'png' });
                if (counter > duration * framerate) {
                    phantom.exit();
                }
            }, 1/framerate);
        }, 200);
    }
});
This creates 120 images, which is the correct count, but if you step through the images one by one you will see that many are duplicates of the same content and many frames are missing.
ffmpeg:
ffmpeg -start_number 10001 -i newtest/image%05d.png -c:v libx264 -r 24 -pix_fmt yuv420p out.mp4
I know this script and ffmpeg command are not perfect; I have made hundreds of changes without luck and have lost track of which settings are correct.
Can anyone guide me to a fix?
Thank you all.
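For reference, two things in the script above are worth calling out: window.setInterval takes its delay in milliseconds, so 1/framerate asks for a sub-millisecond interval that the browser clamps, meaning frames are rendered as fast as page.render allows rather than 24 times per second; and page.render is synchronous and can take longer than one frame interval, so the renders drift relative to the animation. Below is a minimal sketch of a setTimeout-driven capture loop, assuming the same page, output directory and file naming as above (the file name capture.js is just illustrative):

// capture.js — a sketch of the same capture, with the delay given in milliseconds
var page = require('webpage').create(),
    address = 'http://raphaeljs.com/polar-clock.html',
    duration = 5,
    framerate = 24,
    counter = 0,
    frame = 10001;

page.viewportSize = { width: 1024, height: 786 };
page.open(address, function (status) {
    if (status !== 'success') {
        console.log('Unable to load the address!');
        phantom.exit(1);
        return;
    }
    page.clipRect = { top: 0, left: 0, width: 1024, height: 786 };
    window.setTimeout(function captureFrame() {
        // render one frame, then schedule the next one 1000/framerate ms later
        page.render('newtest/image' + (frame++) + '.png', { format: 'png' });
        counter++;
        if (counter >= duration * framerate) {
            phantom.exit();
        } else {
            window.setTimeout(captureFrame, 1000 / framerate);
        }
    }, 200);
});

The page still animates in wall-clock time while page.render blocks, so some jitter remains; perfectly smooth output would require stepping the animation itself. When assembling the frames, declaring the input rate also helps keep the timing intact, e.g.:

ffmpeg -framerate 24 -start_number 10001 -i newtest/image%05d.png -c:v libx264 -pix_fmt yuv420p out.mp4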

Related

How to add "dmb1" four char code in ffmpeg?

I'm trying to stream video from a webcam to the local computer. The stream has a resolution of 3840x2160 at 30 fps, and the computer I'm using is a Mac Pro. However, when I run it with the following command:
ffmpeg -f avfoundation -framerate 30 -video_size 3840x2160 -pix_fmt nv12 -probesize "50M" -i "0" -pix_fmt nv12 -preset ultrafast -vcodec libx264 -tune zerolatency -f mpegts udp://192.168.1.5:5100/mystream
it has a latency of 3-4 seconds. This problem is not present in Chromium: when using the MediaStream API, the stream is displayed in real time.
I believe that's because Chromium supports the "dmb1" four-character code:
+ (media::VideoPixelFormat)FourCCToChromiumPixelFormat:(FourCharCode)code {
    switch (code) {
        case kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange:
            return media::PIXEL_FORMAT_NV12;  // Mac fourcc: "420v".
        case kCVPixelFormatType_422YpCbCr8:
            return media::PIXEL_FORMAT_UYVY;  // Mac fourcc: "2vuy".
        case kCMPixelFormat_422YpCbCr8_yuvs:
            return media::PIXEL_FORMAT_YUY2;
        case kCMVideoCodecType_JPEG_OpenDML:
            return media::PIXEL_FORMAT_MJPEG;  // Mac fourcc: "dmb1".
        default:
            return media::PIXEL_FORMAT_UNKNOWN;
    }
}
To set the pixel format, Chromium uses the following code:
NSDictionary* videoSettingsDictionary = @{
    (id)kCVPixelBufferWidthKey : @(width),
    (id)kCVPixelBufferHeightKey : @(height),
    (id)kCVPixelBufferPixelFormatTypeKey : @(best_fourcc),
    AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill
};
[_captureVideoDataOutput setVideoSettings:videoSettingsDictionary];
I tried doing the same thing in ffmpeg by changing the avfoundation.m file. First I added a new pixel format, AV_PIX_FMT_MJPEG:
static const struct AVFPixelFormatSpec avf_pixel_formats[] = {
{ AV_PIX_FMT_MONOBLACK, kCVPixelFormatType_1Monochrome },
{ AV_PIX_FMT_RGB555BE, kCVPixelFormatType_16BE555 },
{ AV_PIX_FMT_RGB555LE, kCVPixelFormatType_16LE555 },
{ AV_PIX_FMT_RGB565BE, kCVPixelFormatType_16BE565 },
{ AV_PIX_FMT_RGB565LE, kCVPixelFormatType_16LE565 },
{ AV_PIX_FMT_RGB24, kCVPixelFormatType_24RGB },
{ AV_PIX_FMT_BGR24, kCVPixelFormatType_24BGR },
{ AV_PIX_FMT_0RGB, kCVPixelFormatType_32ARGB },
{ AV_PIX_FMT_BGR0, kCVPixelFormatType_32BGRA },
{ AV_PIX_FMT_0BGR, kCVPixelFormatType_32ABGR },
{ AV_PIX_FMT_RGB0, kCVPixelFormatType_32RGBA },
{ AV_PIX_FMT_BGR48BE, kCVPixelFormatType_48RGB },
{ AV_PIX_FMT_UYVY422, kCVPixelFormatType_422YpCbCr8 },
{ AV_PIX_FMT_YUVA444P, kCVPixelFormatType_4444YpCbCrA8R },
{ AV_PIX_FMT_YUVA444P16LE, kCVPixelFormatType_4444AYpCbCr16 },
{ AV_PIX_FMT_YUV444P, kCVPixelFormatType_444YpCbCr8 },
{ AV_PIX_FMT_YUV422P16, kCVPixelFormatType_422YpCbCr16 },
{ AV_PIX_FMT_YUV422P10, kCVPixelFormatType_422YpCbCr10 },
{ AV_PIX_FMT_YUV444P10, kCVPixelFormatType_444YpCbCr10 },
{ AV_PIX_FMT_YUV420P, kCVPixelFormatType_420YpCbCr8Planar },
{ AV_PIX_FMT_NV12, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange },
{ AV_PIX_FMT_YUYV422, kCVPixelFormatType_422YpCbCr8_yuvs },
{ AV_PIX_FMT_MJPEG, kCMVideoCodecType_JPEG_OpenDML }, //dmb1
#if !TARGET_OS_IPHONE && __MAC_OS_X_VERSION_MIN_REQUIRED >= 1080
{ AV_PIX_FMT_GRAY8, kCVPixelFormatType_OneComponent8 },
#endif
{ AV_PIX_FMT_NONE, 0 }
};
After that I tried to hardcode it:
pxl_fmt_spec = avf_pixel_formats[22];
ctx->pixel_format = pxl_fmt_spec.ff_id;
pixel_format = [NSNumber numberWithUnsignedInt:pxl_fmt_spec.avf_id];
capture_dict = [NSDictionary dictionaryWithObject:pixel_format
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[ctx->video_output setVideoSettings:capture_dict];
The code compiles and builds successfully, but when I run it with the above command, without -pix_fmt specified, the program enters an infinite loop in the get_video_config function:
while (ctx->frames_captured < 1) {
    CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.1, YES);
}
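One hedged way to check whether the capture output will ever deliver frames in the requested format (assuming ctx->video_output is the AVCaptureVideoDataOutput that avfoundation.m configures, as in the snippet above): AVCaptureVideoDataOutput exposes availableVideoCVPixelFormatTypes, and if the fourcc set in videoSettings is not in that list, the capture delegate may never be called, so the loop above never exits. A logging sketch:

// Hedged debugging sketch: list the pixel formats this output can deliver.
// If 'dmb1' (kCMVideoCodecType_JPEG_OpenDML) is not listed, requesting it in
// setVideoSettings: likely means no sample buffers ever reach the delegate.
for (NSNumber *fmt in [ctx->video_output availableVideoCVPixelFormatTypes]) {
    FourCharCode fcc = [fmt unsignedIntValue];
    av_log(s, AV_LOG_INFO, "available output fourcc: %c%c%c%c (0x%08x)\n",
           (char)(fcc >> 24), (char)(fcc >> 16), (char)(fcc >> 8), (char)fcc,
           (unsigned)fcc);
}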
It looks obvious that ffmpeg is not able to load the first frame. My camera is more than capable of supporting this pixel format and stream format; I proved it with this piece of code, which comes after ffmpeg selects which format to use for the specified width, height and fps:
FourCharCode fcc = CMFormatDescriptionGetMediaSubType([selected_format formatDescription]);
char fcc_string[5] = { 0, 0, 0, 0, '\0'};
fcc_string[0] = (char) (fcc >> 24);
fcc_string[1] = (char) (fcc >> 16);
fcc_string[2] = (char) (fcc >> 8);
fcc_string[3] = (char) fcc;
av_log(s, AV_LOG_ERROR, "Selected format: %s\n", fcc_string);
The code above prints "Selected format: dmb1".
Can someone tell me why ffmpeg can't load the first frame, and how to add a new pixel format to this library?
Also, any suggestion on how to resolve the 3-second input latency in some other way is more than welcome.
EDIT:
If you try setting any pixel format other than MJPEG in Chromium, there is a latency of 2 seconds. By "setting" I mean changing the Chromium source code and recompiling it. I am pretty sure the problem is the pixel format, because the camera is sending dmb1 and ffmpeg doesn't know about that format.
Also, the latency is only present on macOS.
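On the latency question specifically, a hedged sketch of options that are commonly combined to reduce capture and encode buffering; none of this addresses the dmb1 pixel-format issue, and dropping the capture resolution below 4K is usually the biggest single lever:

ffmpeg -f avfoundation -framerate 30 -video_size 1280x720 -pix_fmt nv12 \
       -fflags nobuffer -flags low_delay -i "0" \
       -c:v libx264 -preset ultrafast -tune zerolatency -g 30 \
       -f mpegts udp://192.168.1.5:5100/mystream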

Capture frames from a canvas at 60 fps

Hey everyone, I have a canvas that I draw a rather complex animation to. Let's say I want to take screenshots of the canvas at 60 frames per second. The canvas doesn't have to play in real time; I just need it to capture 60 frames per second so I can send the screenshots to FFmpeg and make a video. I know I can use canvas.toDataURL, but how do I capture the frames smoothly?
Use this code to pause the video and the Lottie animations (if you are using lottie-web for After Effects content in the browser), then take screenshots and use Whammy to compile a webm file, which you can then run through ffmpeg to get your desired output.
generateVideo(){
    const vid = new Whammy.fromImageArray(this.captures, 30);
    vid.name = "project_id_238.webm";
    vid.lastModifiedDate = new Date();
    this.file = URL.createObjectURL(vid);
},
async pauseAll(){
    this.pauseVideo();
    if(this.animations.length){
        this.pauseLotties();
    }
    this.captures.push(this.canvas.toDataURL('image/webp'));
    if(!this.ended){
        setTimeout(()=>{
            this.pauseAll();
        }, 500);
    }
},
async pauseVideo(){
    console.log("currentTime", this.video.currentTime);
    console.log("duration", this.video.duration);
    this.video.pause();
    const oneFrame = 1/30;
    this.video.currentTime += oneFrame;
},
async pauseLotties(){
    lottie.freeze();
    for(let i = 0; i < this.animations.length; i++){
        let step = 0;
        let animation = this.animations[i].lottie;
        if(animation.currentFrame <= animation.totalFrames){
            step = animation.currentFrame + animation.totalFrames/30;
        }
        lottie.goToAndStop(step, true, animation.name);
    }
}
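As a follow-up, a hedged example of turning the Whammy output into an mp4 with ffmpeg once the webm file is on disk (the filename matches the snippet above; tune the encoder options as needed):

ffmpeg -i project_id_238.webm -c:v libx264 -pix_fmt yuv420p -r 30 project_id_238.mp4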

How do I show original image size in modal?

I have an app where images are drawn on pages at screen width or less, but if I click one, I open a modal to show the image at full size. It works fine on iOS, but for some reason Android doesn't show the original image, only a much smaller one. I can't for the life of me figure out why it won't show the original image from the URL. I'm using a template for my modal:
<ScrollView #imgScroll orientation="horizontal">
    <Image [src]="img" (loaded)="onImageLoaded($event);" (pinch)="onPinch($event)" #dragImage class="largeImage" stretch="none"></Image>
</ScrollView>
Then in my code, I'm setting up the scroller and allowing the user to drag the image around to inspect the entire image.
onImageLoaded(args: EventData){
    let dragImage = <Image>args.object;
    if(dragImage){
        setTimeout(()=>{
            this.imgSize = dragImage.getActualSize();
            if(this.imgSize.width > this.imgSize.height){
                this.orientation = "landscape";
                this.imageScroller.scrollToHorizontalOffset(this.imgSize.width / 4, true);
            } else {
                this.orientation = "portrait";
                this.imageScroller.scrollToVerticalOffset(this.imgSize.width / 4, true);
            }
            console.log("Image Size: ", this.imgSize);
        }, 1500);
    }
}

onImagePan(args: PanGestureEventData){
    if (args.state === 1){
        this.prevX = 0;
        this.prevY = 0;
    }
    if (args.state === 2) // panning
    {
        this.dragImageItem.translateX += args.deltaX - this.prevX;
        this.dragImageItem.translateY += args.deltaY - this.prevY;
        this.prevX = args.deltaX;
        this.prevY = args.deltaY;
    } else if (args.state === 3){
        this.dragImageItem.animate({
            translate: { x: 0, y: 0 },
            duration: 1000,
            curve: AnimationCurve.cubicBezier(0.1, 0.1, 0.1, 1)
        });
    }
}

onPinch(args: PinchGestureEventData) {
    console.log("Pinch scale: " + args.scale + " state: " + args.state);
    if(this.imgSize){
        if (args.state === 1) {
            var newOriginX = args.getFocusX() - this.dragImageItem.translateX;
            var newOriginY = args.getFocusY() - this.dragImageItem.translateY;
        }
    }
}
Here is what gets logged to the console:
Image Size: {
"width": 533.3333333333334,
"height": 127.33333333333333
}
The actual image dimensions are 900 x 600.
It appears Android is caching the image from the page rather than showing the actual image from the URL. I don't get why iOS works but Android does not. Does anyone have any ideas?
Version: 1.17.0-v.2019.5.31.1 (latest)
NativeScript CLI version: 5.4.2
CLI extension nativescript-cloud version: 1.17.6
CLI extension nativescript-starter-kits version: 0.3.5
You are comparing device-independent pixels with actual pixels.
getActualSize() returns width and height in device-independent pixels (DIPs); use utils.layout.toDevicePixels(getActualSize().width) or utils.layout.toDevicePixels(getActualSize().height) to get the actual pixel values.
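A short sketch of that conversion in the handler from the question (assuming the standard NativeScript utils module; the exact import path differs between NativeScript versions):

import { layout } from "tns-core-modules/utils/utils"; // NativeScript 6+: utils from "@nativescript/core"

// getActualSize() reports DIPs; convert to device pixels before comparing
// against the bitmap's real dimensions (e.g. 900 x 600).
const size = dragImage.getActualSize();
const widthPx = layout.toDevicePixels(size.width);
const heightPx = layout.toDevicePixels(size.height);
console.log(`Image size in device pixels: ${widthPx} x ${heightPx}`);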

Is it possible to feed MediaStream frames at lower framerate?

I would like to use the MediaStream.captureStream() method, but it is either rendered useless by the specification and bugs, or I am using it totally wrong.
I know that captureStream takes a maximum frame rate as its parameter, not a constant one, and does not even guarantee it. It is possible to change the MediaStream's currentTime (currently in Chrome; in Firefox it has no effect, but Firefox offers requestFrame, which is not available in Chrome), and the idea was that manual frame requests, or setting the position of the frame within the MediaStream, should override this. It doesn't.
In Firefox it renders the video smoothly, frame by frame, but the resulting video is as long as the wall-clock time spent processing.
In Chrome there are some dubious black frames or reordered ones (I don't care about that for now, until the FPS matches), and manually setting currentTime achieves nothing, the same result as in Firefox.
I use modified code from the MediaStream Capture Canvas and Audio Simultaneously answer.
const FPS = 30;
var cStream, vid, recorder, chunks = [], go = true,
    Q = 61, rec = document.getElementById('rec'),
    canvas = document.getElementById('canvas'),
    ctx = canvas.getContext('2d');

ctx.strokeStyle = 'rgb(255, 0, 0)';

function clickHandler() {
    this.textContent = 'stop recording';
    // it has no effect no matter if it is empty or set to 30
    cStream = canvas.captureStream(FPS);
    recorder = new MediaRecorder(cStream);
    recorder.ondataavailable = saveChunks;
    recorder.onstop = exportStream;
    this.onclick = stopRecording;
    recorder.start();
    draw();
}

function exportStream(e) {
    if (chunks.length) {
        var blob = new Blob(chunks);
        var vidURL = URL.createObjectURL(blob);
        var vid2 = document.createElement('video');
        vid2.controls = true;
        vid2.src = vidURL;
        vid2.onend = function() {
            URL.revokeObjectURL(vidURL);
        }
        document.body.insertBefore(vid2, vid);
    } else {
        document.body.insertBefore(document.createTextNode('no data saved'), canvas);
    }
}

function saveChunks(e) {
    e.data.size && chunks.push(e.data);
}

function stopRecording() {
    go = false;
    this.parentNode.removeChild(this);
    recorder.stop();
}

var loadVideo = function() {
    vid = document.createElement('video');
    document.body.insertBefore(vid, canvas);

    vid.oncanplay = function() {
        rec.onclick = clickHandler;
        rec.disabled = false;
        canvas.width = vid.videoWidth;
        canvas.height = vid.videoHeight;
        vid.oncanplay = null;
        ctx.drawImage(vid, 0, 0);
    }

    vid.onseeked = function() {
        ctx.drawImage(vid, 0, 0);
        /*
          Here I want to include additional drawing per each frame,
          for sure taking more than 180ms
        */
        if(cStream && cStream.requestFrame) cStream.requestFrame();
        draw();
    }

    vid.crossOrigin = 'anonymous';
    vid.src = 'https://dl.dropboxusercontent.com/s/bch2j17v6ny4ako/movie720p.mp4';
    vid.currentTime = 0;
}

function draw() {
    if(go && cStream) {
        ++Q;
        cStream.currentTime = Q / FPS;
        vid.currentTime = Q / FPS;
    }
};

loadVideo();

<button id="rec" disabled>record</button><br>
<canvas id="canvas" width="500" height="500"></canvas>
Is there a way to make this work as intended?
The goal is to load a video, process every frame (which is time-consuming in my case), and output the processed result.
Footnote: I do not want to use ffmpeg.js, an external server, or other technologies. I can process it with classic ffmpeg without using JavaScript at all, but that is not the point of this question; it is more about MediaStream usability and maturity. The context here is Firefox/Chrome, but it could be Node.js or NW.js as well. If this is possible at all, or awaiting bug fixes, the next question would be feeding audio into it, but I think that would be better as a separate question.
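One detail worth checking, offered as a hedged note rather than a confirmed fix: in Chrome, requestFrame() is exposed on the CanvasCaptureMediaStreamTrack rather than on the MediaStream itself, and canvas.captureStream(0) only pushes a frame when requestFrame() is called, which is closer to the manual per-frame control described above. A small sketch (browser support varies by version, so treat it as an assumption to verify):

const stream = canvas.captureStream(0);   // 0 => no automatic frames; push them manually
const track = stream.getVideoTracks()[0];

function pushFrame() {
    // draw/process the current frame into the canvas first, then:
    if (typeof track.requestFrame === 'function') {
        track.requestFrame();             // Chrome: CanvasCaptureMediaStreamTrack.requestFrame()
    } else if (typeof stream.requestFrame === 'function') {
        stream.requestFrame();            // older Firefox exposed it on the stream
    }
}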

NReco video cut

I have written a function to cut a video using the NReco library.
public void SplitVideo(string SourceFile, string DestinationFile, int StartTime, int EndTime)
{
    var ffMpegConverter = new FFMpegConverter();
    ffMpegConverter.ConvertMedia(SourceFile, null, DestinationFile, null,
        new ConvertSettings()
        {
            Seek = StartTime,
            MaxDuration = (EndTime - StartTime), // chunk duration
            VideoCodec = "copy",
            AudioCodec = "copy"
        });
}
This runs, but it gives me a video that starts from the beginning of the source and lasts for the max duration I assigned, instead of starting from the seek position and lasting for the max duration. Can someone help me with this?
I have found the answer to this issue; maybe it will help someone.
I was using the wrong codecs. You have to use the correct codec type for the file type you are converting to. Here I am using an mp4 file, so I had to use libx264 and mp3. Below is the sample code:
public void SplitVideo(string SourceFile, string DestinationFile, int StartTime, int EndTime)
{
    var ffMpegConverter = new FFMpegConverter();
    ffMpegConverter.ConvertMedia(SourceFile, null, DestinationFile, null,
        new ConvertSettings()
        {
            Seek = StartTime,
            MaxDuration = (EndTime - StartTime), // chunk duration
            VideoCodec = "libx264",
            AudioCodec = "mp3"
        });
}
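For context, the cut described by those ConvertSettings corresponds roughly to an ffmpeg invocation like the sketch below (an approximation with StartTime = 10 and EndTime = 25, not the exact arguments NReco generates). Re-encoding with libx264/mp3 lets the cut begin at an arbitrary position, whereas stream copy typically cuts only at keyframes, which may explain the behaviour described in the question:

ffmpeg -ss 10 -i input.mp4 -t 15 -c:v libx264 -c:a libmp3lame output.mp4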
