How to add "dmb1" four char code in ffmpeg?

I'm trying to stream video from a webcam to the local computer. The stream has a resolution of 3840x2160 at 30 fps, and the machine is a Mac Pro. However, when I run it with the following command:
ffmpeg -f avfoundation -framerate 30 -video_size 3840x2160 -pix_fmt nv12 -probesize "50M" -i "0" -pix_fmt nv12 -preset ultrafast -vcodec libx264 -tune zerolatency -f mpegts udp://192.168.1.5:5100/mystream
it has a latency of 3-4 seconds. This problem is not present in Chromium: when using the MediaStream API, the stream is displayed in real time.
I believe that's because Chromium supports the "dmb1" four char code:
+ (media::VideoPixelFormat)FourCCToChromiumPixelFormat:(FourCharCode)code {
  switch (code) {
    case kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange:
      return media::PIXEL_FORMAT_NV12;  // Mac fourcc: "420v".
    case kCVPixelFormatType_422YpCbCr8:
      return media::PIXEL_FORMAT_UYVY;  // Mac fourcc: "2vuy".
    case kCMPixelFormat_422YpCbCr8_yuvs:
      return media::PIXEL_FORMAT_YUY2;
    case kCMVideoCodecType_JPEG_OpenDML:
      return media::PIXEL_FORMAT_MJPEG;  // Mac fourcc: "dmb1".
    default:
      return media::PIXEL_FORMAT_UNKNOWN;
  }
}
To set the pixel format, Chromium uses the following piece of code:
NSDictionary* videoSettingsDictionary = @{
    (id)kCVPixelBufferWidthKey : @(width),
    (id)kCVPixelBufferHeightKey : @(height),
    (id)kCVPixelBufferPixelFormatTypeKey : @(best_fourcc),
    AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill
};
[_captureVideoDataOutput setVideoSettings:videoSettingsDictionary];
I tried to do the same thing in ffmpeg by changing the avfoundation.m file. First I added the new pixel format AV_PIX_FMT_MJPEG:
static const struct AVFPixelFormatSpec avf_pixel_formats[] = {
    { AV_PIX_FMT_MONOBLACK, kCVPixelFormatType_1Monochrome },
    { AV_PIX_FMT_RGB555BE, kCVPixelFormatType_16BE555 },
    { AV_PIX_FMT_RGB555LE, kCVPixelFormatType_16LE555 },
    { AV_PIX_FMT_RGB565BE, kCVPixelFormatType_16BE565 },
    { AV_PIX_FMT_RGB565LE, kCVPixelFormatType_16LE565 },
    { AV_PIX_FMT_RGB24, kCVPixelFormatType_24RGB },
    { AV_PIX_FMT_BGR24, kCVPixelFormatType_24BGR },
    { AV_PIX_FMT_0RGB, kCVPixelFormatType_32ARGB },
    { AV_PIX_FMT_BGR0, kCVPixelFormatType_32BGRA },
    { AV_PIX_FMT_0BGR, kCVPixelFormatType_32ABGR },
    { AV_PIX_FMT_RGB0, kCVPixelFormatType_32RGBA },
    { AV_PIX_FMT_BGR48BE, kCVPixelFormatType_48RGB },
    { AV_PIX_FMT_UYVY422, kCVPixelFormatType_422YpCbCr8 },
    { AV_PIX_FMT_YUVA444P, kCVPixelFormatType_4444YpCbCrA8R },
    { AV_PIX_FMT_YUVA444P16LE, kCVPixelFormatType_4444AYpCbCr16 },
    { AV_PIX_FMT_YUV444P, kCVPixelFormatType_444YpCbCr8 },
    { AV_PIX_FMT_YUV422P16, kCVPixelFormatType_422YpCbCr16 },
    { AV_PIX_FMT_YUV422P10, kCVPixelFormatType_422YpCbCr10 },
    { AV_PIX_FMT_YUV444P10, kCVPixelFormatType_444YpCbCr10 },
    { AV_PIX_FMT_YUV420P, kCVPixelFormatType_420YpCbCr8Planar },
    { AV_PIX_FMT_NV12, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange },
    { AV_PIX_FMT_YUYV422, kCVPixelFormatType_422YpCbCr8_yuvs },
    { AV_PIX_FMT_MJPEG, kCMVideoCodecType_JPEG_OpenDML }, // dmb1
#if !TARGET_OS_IPHONE && __MAC_OS_X_VERSION_MIN_REQUIRED >= 1080
    { AV_PIX_FMT_GRAY8, kCVPixelFormatType_OneComponent8 },
#endif
    { AV_PIX_FMT_NONE, 0 }
};
After that I tried to hardcode it:
pxl_fmt_spec = avf_pixel_formats[22];
ctx->pixel_format = pxl_fmt_spec.ff_id;
pixel_format = [NSNumber numberWithUnsignedInt:pxl_fmt_spec.avf_id];
capture_dict = [NSDictionary dictionaryWithObject:pixel_format
                                           forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[ctx->video_output setVideoSettings:capture_dict];
The code compiles and builds successfully, but when I run it with the above command (without -pix_fmt specified), the program enters an infinite loop in the get_video_config function:
while (ctx->frames_captured < 1) {
    CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.1, YES);
}
It seems clear that ffmpeg is not able to load the first frame. My camera is more than capable of supporting this pixel format and stream format; I proved it with this piece of code, which runs after ffmpeg selects which format to use for the specified width, height, and fps:
FourCharCode fcc = CMFormatDescriptionGetMediaSubType([selected_format formatDescription]);
char fcc_string[5] = { 0, 0, 0, 0, '\0'};
fcc_string[0] = (char) (fcc >> 24);
fcc_string[1] = (char) (fcc >> 16);
fcc_string[2] = (char) (fcc >> 8);
fcc_string[3] = (char) fcc;
av_log(s, AV_LOG_ERROR, "Selected format: %s\n", fcc_string);
Above code prints "Selected format: dmb1".
Can someone tell me why ffmpeg can't load the first frame, and how to add a new pixel format to this library?
Also, any suggestion on how to resolve the 3-second input latency in some other way is more than welcome.
EDIT:
If you set any pixel format in Chromium other than MJPEG, there is a latency of 2 seconds. By "setting" I mean changing the Chromium source code and recompiling it. I am fairly sure the problem is the pixel format, because the camera is sending dmb1 and ffmpeg doesn't know about that format.
Also, the latency is only present on macOS.

Related

Cannot find ffmpeg when moving the code to a new computer

I have code that works on my computer, but when I moved it to a new machine it can't find the ffmpeg dependency.
It happens on these lines of code:
var videoshow = require('videoshow')
var image = [{path: './screenshot.jpg'}]
var videoOption = {
    loop: 10,
    fps: 25,
    transition: false,
    transitionDuration: 0, // seconds
    videoBitrate: 1024,
    videoCodec: 'libx264',
    size: '640x?',
    audioBitrate: '128k',
    audioChannels: 2,
    format: 'mp4',
    pixelFormat: 'yuv420p'
}
//call the videoshow library
videoshow(image, videoOption).save(filename + "_" + "movie.mp4").on('start', function (command) {
    console.log("conversion started " + command)
}).on('error', function (err, stdout, stderr) {
    console.log("some error occurred " + err)
}).on('end', function (output) {
    console.log("conversion complete " + output)
})
It throws the error "Cannot find ffmpeg".
I tried npm install and npm install ffmpeg, but that didn't help.
I think this happens because I don't know how to make the dependencies work on a different computer.
Any help would be appreciated!
Apparently there was a problem with videoshow not finding ffmpeg (although it was installed), so I added these dependencies:
"dependencies": {
    "@ffmpeg-installer/ffmpeg": "^1.1.0",
    "@ffprobe-installer/ffprobe": "^1.2.0",
    "ffprobe": "^1.1.2"
}
And I added these lines of code to the project:
const ffmpegPath = require('@ffmpeg-installer/ffmpeg').path;
const ffprobePath = require('@ffprobe-installer/ffprobe').path;
const ffmpeg = require('fluent-ffmpeg');
ffmpeg.setFfmpegPath(ffmpegPath);
ffmpeg.setFfprobePath(ffprobePath);
And that fixed the problem.

Capture frames from a canvas at 60 fps

Hey everyone, so I have a canvas that I draw a rather complex animation to. Let's say I want to take screenshots of the canvas at 60 frames per second. The canvas doesn't have to play in real time; I just need to capture 60 frames per second so I can send the screenshots to FFmpeg and make a video. I know I can use canvas.toDataURL, but how do I capture the frames smoothly?
Use this code to pause the video and Lottie animations if you are using lottie-web for After Effects content in the browser. Then take screenshots and use Whammy to compile a webm file, which you can then run through ffmpeg to get your desired output.
generateVideo() {
    const vid = new Whammy.fromImageArray(this.captures, 30);
    vid.name = "project_id_238.webm";
    vid.lastModifiedDate = new Date();
    this.file = URL.createObjectURL(vid);
},
async pauseAll() {
    this.pauseVideo();
    if (this.animations.length) {
        this.pauseLotties();
    }
    this.captures.push(this.canvas.toDataURL('image/webp'));
    if (!this.ended) {
        setTimeout(() => {
            this.pauseAll();
        }, 500);
    }
},
async pauseVideo() {
    console.log("currentTime", this.video.currentTime);
    console.log("duration", this.video.duration);
    this.video.pause();
    const oneFrame = 1 / 30;
    this.video.currentTime += oneFrame;
},
async pauseLotties() {
    lottie.freeze();
    for (let i = 0; i < this.animations.length; i++) {
        let step = 0;
        let animation = this.animations[i].lottie;
        if (animation.currentFrame <= animation.totalFrames) {
            step = animation.currentFrame + animation.totalFrames / 30;
        }
        lottie.goToAndStop(step, true, animation.name);
    }
}

Discord.NET 1.0.2 sending voice to voice channel not working

I did everything like the Discord.Net documentation guide on voice -
https://discord.foxbot.me/latest/guides/voice/sending-voice.html
- and it didn't work: the bot just joins the voice channel but doesn't make any sound.
I have ffmpeg installed in PATH and ffmpeg.exe in my bot directory, along with opus.dll and libsodium.dll, so I don't know what the problem is...
public class gil : ModuleBase<SocketCommandContext>
{
    [Command("join")]
    public async Task JoinChannel(IVoiceChannel channel = null)
    {
        // Get the audio channel
        channel = channel ?? (Context.Message.Author as IGuildUser)?.VoiceChannel;
        if (channel == null) { await Context.Message.Channel.SendMessageAsync("User must be in a voice channel, or a voice channel must be passed as an argument."); return; }

        // For the next step with transmitting audio, you would want to pass this Audio Client in to a service.
        var audioClient = await channel.ConnectAsync();
        await SendAsync(audioClient, "audio/hello.mp3");
    }

    private Process CreateStream(string path)
    {
        return Process.Start(new ProcessStartInfo
        {
            FileName = "ffmpeg.exe",
            Arguments = $"-hide_banner -loglevel panic -i \"{path}\" -ac 2 -f s16le -ar 48000 pipe:1",
            UseShellExecute = false,
            RedirectStandardOutput = true,
        });
    }

    private async Task SendAsync(IAudioClient client, string path)
    {
        // Create FFmpeg using the previous example
        using (var ffmpeg = CreateStream(path))
        using (var output = ffmpeg.StandardOutput.BaseStream)
        using (var discord = client.CreatePCMStream(AudioApplication.Mixed))
        {
            try { await output.CopyToAsync(discord); }
            finally { await discord.FlushAsync(); }
        }
    }
}
Please help.

NReco video cut

I have written a function to cut a video using the NReco library.
public void SplitVideo(string SourceFile, string DestinationFile, int StartTime, int EndTime)
{
    var ffMpegConverter = new FFMpegConverter();
    ffMpegConverter.ConvertMedia(SourceFile, null, DestinationFile, null,
        new ConvertSettings()
        {
            Seek = StartTime,
            MaxDuration = (EndTime - StartTime), // chunk duration
            VideoCodec = "copy",
            AudioCodec = "copy"
        });
}
This works, but it gives me a video that starts from the beginning of the source and runs for the max duration I assigned. It does not start from the seek position. Can someone help me with this?
I have found the answer to this issue; may it help someone.
I was using the wrong codecs. You have to use the correct codec type according to the file type you are converting to. Here I am using an mp4 file, so I had to use
libx264 and mp3. Below is the sample code:
public void SplitVideo(string SourceFile, string DestinationFile, int StartTime, int EndTime)
{
    var ffMpegConverter = new FFMpegConverter();
    ffMpegConverter.ConvertMedia(SourceFile, null, DestinationFile, null,
        new ConvertSettings()
        {
            Seek = StartTime,
            MaxDuration = (EndTime - StartTime), // chunk duration
            VideoCodec = "libx264",
            AudioCodec = "mp3"
        });
}

phantomjs screenshots and ffmpeg frame missing

I have a problem making a video from website screenshots taken with phantomjs.
phantomjs does not take screenshots for all frames within the same second, and some seconds are missing entirely, so there are huge gaps of missing frames.
The result is a high-speed video with many jumps in the animation.
test.js:
var page = require('webpage').create(),
    address = 'http://raphaeljs.com/polar-clock.html',
    duration = 5, // duration of the video, in seconds
    framerate = 24, // number of frames per second. 24 is a good value.
    counter = 0,
    width = 1024,
    height = 786,
    frame = 10001;

page.viewportSize = { width: width, height: height };
page.open(address, function (status) {
    if (status !== 'success') {
        console.log('Unable to load the address!');
        phantom.exit(1);
    } else {
        window.setTimeout(function () {
            page.clipRect = { top: 0, left: 0, width: width, height: height };
            window.setInterval(function () {
                counter++;
                page.render('newtest/image' + (frame++) + '.png', { format: 'png' });
                if (counter > duration * framerate) {
                    phantom.exit();
                }
            }, 1/framerate);
        }, 200);
    }
});
This creates 120 images, which is the correct count, but when you view the images one by one you will see many duplicates with the same content and many missing frames.
ffmpeg:
ffmpeg -start_number 10001 -i newtest/image%05d.png -c:v libx264 -r 24 -pix_fmt yuv420p out.mp4
I know this script and ffmpeg command are not perfect, because I made hundreds of changes without luck and lost track of which settings are correct.
Can anyone guide me to fix this?
Thank you all.
