FFmpeg: Concatenate 4 videos with different codecs, sizes and FPS

I am trying to concatenate 4 videos with different codecs, sizes and frame rates using the command below in Node.js, run via child_process spawn.
const mergeAllVideo = async () => {
  try {
    const all = [
      '-y',
      '-i', './gifs_0.mp4',
      '-i', './gifs_1.mp4',
      '-i', './gifs_2.mp4',
      '-i', './gifs_3.mp4',
      '-filter_complex', "[0:v][0:a][1:v][1:a][2:v][2:a][3:v][3:a] concat=n=4:v=1:a=1 [vv] [aa]",
      '-map', '[vv]',
      '-map', '[aa]',
      './allMerged.mp4'
    ];
    const proc = spawn(cmd, all);
    proc.stdout.on('end', function () {
      console.log("Added mergeAllVideo !!! \n");
      end = new Date().getTime();
      const diffinsec = (end - start) / 1000;
      console.log("Execution time : ", diffinsec, 's');
    });
    proc.stdout.on('error', function (err) {
      console.log(" ::::: Error at all : ", err);
    });
  } catch (err) {
    console.log(":::::::: getting error at mergeAllVideo() ", err);
  }
}
The command appears to run successfully, but no video file is generated in the given directory. Could someone please help me?
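One way to see why no file appears is to watch ffmpeg's stderr and exit code - the snippet only listens on stdout, and ffmpeg writes all of its diagnostics to stderr. A minimal sketch, assuming cmd resolves to 'ffmpeg':
const proc = spawn(cmd, all);
// ffmpeg reports progress and errors on stderr, not stdout
proc.stderr.on('data', chunk => console.log(chunk.toString()));
// a non-zero exit code means the run failed; for example, the concat filter
// rejects video inputs whose resolutions differ unless you scale them first
proc.on('close', code => console.log('ffmpeg exited with code', code));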

I too am working on this. My approach is a "for file in" queue-building one:
pick the head of the queue (the first clip) and format it correctly
start a "for file in" loop over the remaining clips
format $file the same way
concatenate the resulting file onto the end of the queue file
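Here is a rough Node.js sketch of that queue approach, not a drop-in solution: the 1280x720 / 30 fps target and the norm_*.mp4 / list.txt names are made-up placeholders; pick values that suit your clips.
const { spawnSync } = require('child_process');
const fs = require('fs');

const inputs = ['./gifs_0.mp4', './gifs_1.mp4', './gifs_2.mp4', './gifs_3.mp4'];
const normalized = [];

// 1) "format $file correctly": re-encode every clip to the same size, frame
//    rate, SAR, codecs and audio layout so they can later be joined as-is.
inputs.forEach((file, i) => {
  const out = `./norm_${i}.mp4`;
  spawnSync('ffmpeg', [
    '-y', '-i', file,
    // note: a plain scale stretches to 1280x720; use pad/crop if aspect must be kept
    '-vf', 'scale=1280:720,setsar=1,fps=30',
    '-c:v', 'libx264', '-pix_fmt', 'yuv420p',
    '-c:a', 'aac', '-ar', '44100', '-ac', '2',
    out
  ], { stdio: 'inherit' });
  normalized.push(out);
});

// 2) concatenate the normalized clips with the concat demuxer, no re-encoding
fs.writeFileSync('./list.txt', normalized.map(f => `file '${f}'`).join('\n'));
spawnSync('ffmpeg', [
  '-y', '-f', 'concat', '-safe', '0', '-i', './list.txt',
  '-c', 'copy', './allMerged.mp4'
], { stdio: 'inherit' });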

Related

What would cause a for / foreach loop to break without explicitly calling a break?

I have a durable function that calls a method that simply adds a row to an EF Core DbSet. It doesn't call a DB save.
When I step through the code and get to the foreach loop, it immediately jumps to the line after the loop. If I step into the call that adds the EF Core object and come back to the loop, it continues and loops to the next item. If I press F5 and let it run freely, it immediately "breaks" out of the loop.
It jumps out of the loop at the place where I wrote //HERE!!!!!!
I'm pulling my hair out on this one.
obligatory code:
//foreach (stagingFileMap stagingFileMap in fileMaps)
foreach (stagingFileMap stagingFileMap in fileMaps)
{
    if (ActivitySyncHelper.IsSyncCancelled(aso, _configuration))
    {
        break;
    }
    if (!string.IsNullOrEmpty(stagingFileMap.URL))
    {
        // Ensure the url is valid
        try
        {
            string x = await GetTotalBytes(stagingFileMap.URL);
            double.TryParse(x, out double fileByteCount);
            if (fileByteCount > 0)
            {
                // Create or update the video in vimeo
                if (string.IsNullOrEmpty(stagingFileMap.VimeoId))
                {
                    // Azure won't be ready with its backups, so use confex host for video 'get'
                    string title = stagingFileMap.FileName;
                    if (stagingFileMap.FileName.Length > 127)
                    {
                        title = stagingFileMap.FileName.Substring(0, 127);
                    }
                    Video video = vimeoClient.UploadPullLinkAsync(stagingFileMap.URL, title, stagingFileMap.id, meetingIdFolder.Uri).Result;
                    stagingFileMap.VimeoId = video.Id.ToString();
                    stagingFileMap.VimeoId = video.Id.ToString();
                    //HERE!!!!!!
                    await syncLog.WriteInfoMsg($"Vimeo create {stagingFileMap.FileName}");
                    //HERE!!!!!!
                }
                else
                {
                    // Attempt to pull the existing video and update it
                    if (long.TryParse(stagingFileMap.VimeoId, out long videoId))
                    {
                        Video video = vimeoClient.GetVideoAsync(videoId).Result;
                        if (video.Id.HasValue)
                        {
                            Video res = await vimeoClient.UploadPullReplaceAsync(stagingFileMap.URL, video.Id.Value, fileByteCount);
                            await syncLog.WriteInfoMsg($"Vimeo replace {stagingFileMap.FileName} id {res.Id}");
                        }
                    }
                }
                break;
            }
        }
        catch (Exception ex)
        {
            // IDK what to do besides skip it and continue
            // log something once logging works
            await syncLog.WriteErrorMsg(aso, ex.Message);
            await syncLog.Save();
            continue;
        }
        // We need to save here frequently because if there is a big error, all the work syncing to vimeo will be desynced from the DB
        dbContext.Update(stagingFileMap);
        await dbContext.SaveChangesAsync();
        await syncLog.Save();
    }
}
await dbContext.DisposeAsync();
public async Task WriteInfoMsg(string msg)
{
    SyncAttemptDetail sad = new()
    {
        SyncAttemptId = _id,
        Message = msg,
        MsgLevel = SyncAttemptMessageLevel.Info,
        AddDate = DateTime.UtcNow,
        AddUser = "SYSTEM"
    };
    await _dbContext.SyncAttemptDetail.AddAsync(sad);
}
I'm dumb. There's LITERALLY a break command in there.
The await will create a task and immediately return it (the debugger will follow this path too); the loop then continues inside the task.
To attach the debugger to the task, add a breakpoint after the await.

FFMPEG - how to transcode input stream while cutting off first few seconds of video and audio

I am using ffmpeg to transcode a screen-record (x11) input stream to MP4. I would like to cut off the first ~10 seconds of the stream, which is just a blank screen (this is intentional).
I understand how to trim video with ffmpeg when converting from one mp4 to another, but I can't find any working solution for processing an input stream while accounting for delay and audio/video syncing.
Here is my current code:
const { spawn } = require('child_process');
const { S3Uploader } = require('./utils/upload');
const MEETING_URL = process.env.MEETING_URL || 'Not present in environment';
console.log(`[recording process] MEETING_URL: ${MEETING_URL}`);
const args = process.argv.slice(2);
const BUCKET_NAME = args[0];
console.log(`[recording process] BUCKET_NAME: ${BUCKET_NAME}`);
const BROWSER_SCREEN_WIDTH = args[1];
const BROWSER_SCREEN_HEIGHT = args[2];
const MEETING_ID = args[3];
console.log(`[recording process] BROWSER_SCREEN_WIDTH: ${BROWSER_SCREEN_WIDTH}, BROWSER_SCREEN_HEIGHT: ${BROWSER_SCREEN_HEIGHT}, TASK_NUMBER: 43`);
const VIDEO_BITRATE = 3000;
const VIDEO_FRAMERATE = 30;
const VIDEO_GOP = VIDEO_FRAMERATE * 2;
const AUDIO_BITRATE = '160k';
const AUDIO_SAMPLERATE = 44100;
const AUDIO_CHANNELS = 2
const DISPLAY = process.env.DISPLAY;
const transcodeStreamToOutput = spawn('ffmpeg',[
'-hide_banner',
'-loglevel', 'error',
// disable interaction via stdin
'-nostdin',
// screen image size
// '-s', `${BROWSER_SCREEN_WIDTH}x${BROWSER_SCREEN_HEIGHT}`,
'-s', '1140x720',
// video frame rate
'-r', `${VIDEO_FRAMERATE}`,
// hides the mouse cursor from the resulting video
'-draw_mouse', '0',
// grab the x11 display as video input
'-f', 'x11grab',
'-i', ':1.0+372,8',
// '-i', `${DISPLAY}`,
// grab pulse as audio input
'-f', 'pulse',
'-ac', '2',
'-i', 'default',
// codec video with libx264
'-c:v', 'libx264',
'-pix_fmt', 'yuv420p',
'-profile:v', 'main',
'-preset', 'veryfast',
'-x264opts', 'nal-hrd=cbr:no-scenecut',
'-minrate', `${VIDEO_BITRATE}`,
'-maxrate', `${VIDEO_BITRATE}`,
'-g', `${VIDEO_GOP}`,
// apply a fixed delay to the audio stream in order to synchronize it with the video stream
'-filter_complex', 'adelay=delays=1000|1000',
// codec audio with aac
'-c:a', 'aac',
'-b:a', `${AUDIO_BITRATE}`,
'-ac', `${AUDIO_CHANNELS}`,
'-ar', `${AUDIO_SAMPLERATE}`,
// adjust fragmentation to prevent seeking(resolve issue: muxer does not support non seekable output)
'-movflags', 'frag_keyframe+empty_moov+faststart',
// set output format to mp4 and output file to stdout
'-f', 'mp4', '-'
]
);
transcodeStreamToOutput.stderr.on('data', data => {
console.log(`[transcodeStreamToOutput process] stderr: ${(new Date()).toISOString()} ffmpeg: ${data}`);
});
const timestamp = new Date();
const year = timestamp.getFullYear();
const month = timestamp.getMonth() + 1;
const day = timestamp.getDate();
const hour = timestamp.getUTCHours();
console.log(MEETING_ID);
const fileName = `${year}/${month}/${day}/${hour}/${MEETING_ID}.mp4`;
new S3Uploader(BUCKET_NAME, fileName).uploadStream(transcodeStreamToOutput.stdout);
// event handler for docker stop, not exit until upload completes
process.on('SIGTERM', (code, signal) => {
console.log(`[recording process] exited with code ${code} and signal ${signal}(SIGTERM)`);
process.kill(transcodeStreamToOutput.pid, 'SIGTERM');
});
// debug use - event handler for ctrl + c
process.on('SIGINT', (code, signal) => {
console.log(`[recording process] exited with code ${code} and signal ${signal}(SIGINT)`)
process.kill('SIGTERM');
});
process.on('exit', function(code) {
console.log('[recording process] exit code', code);
});
Any help would be greatly appreciated!
Add -ss X after the last input (i.e. as an output option) to cut off the first X seconds of both video and audio.
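Applied to the argument list above it would look roughly like this; the 10 is just a placeholder for however long the blank lead-in actually is:
// ... keep everything up to and including '-i', 'default' as it is, then add:
// decode-and-discard the first 10 seconds of both video and audio
'-ss', '10',
// codec video with libx264
'-c:v', 'libx264',
// ... rest of the arguments unchanged ...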

Discord.NET 1.0.2 sending voice to voice channel not working

I did everything like the Discord.Net documentation guide on voice -
https://discord.foxbot.me/latest/guides/voice/sending-voice.html
but it didn't work: the bot joins the voice channel, but it doesn't make any sound.
I have ffmpeg installed in PATH and ffmpeg.exe in my bot directory along with opus.dll and libsodium.dll, so I don't know what the problem is...
public class gil : ModuleBase<SocketCommandContext>
{
    [Command("join")]
    public async Task JoinChannel(IVoiceChannel channel = null)
    {
        // Get the audio channel
        channel = channel ?? (Context.Message.Author as IGuildUser)?.VoiceChannel;
        if (channel == null) { await Context.Message.Channel.SendMessageAsync("User must be in a voice channel, or a voice channel must be passed as an argument."); return; }

        // For the next step with transmitting audio, you would want to pass this Audio Client in to a service.
        var audioClient = await channel.ConnectAsync();
        await SendAsync(audioClient, "audio/hello.mp3");
    }

    private Process CreateStream(string path)
    {
        return Process.Start(new ProcessStartInfo
        {
            FileName = "ffmpeg.exe",
            Arguments = $"-hide_banner -loglevel panic -i \"{path}\" -ac 2 -f s16le -ar 48000 pipe:1",
            UseShellExecute = false,
            RedirectStandardOutput = true,
        });
    }

    private async Task SendAsync(IAudioClient client, string path)
    {
        // Create FFmpeg using the previous example
        using (var ffmpeg = CreateStream(path))
        using (var output = ffmpeg.StandardOutput.BaseStream)
        using (var discord = client.CreatePCMStream(AudioApplication.Mixed))
        {
            try { await output.CopyToAsync(discord); }
            finally { await discord.FlushAsync(); }
        }
    }
}
please help

Core Audio : Recording in .wav format doesn't work properly

I use AudioFileWritePackets() to write the data while recording. This is the code I am using:
if (appDelegate.screenRecording) {
    cPacket = 0;
    inNumPackets = inCompleteAQBuffer->mPacketDescriptionCount;
    if (AudioFileWritePackets(nAudioFile, false, numBytes, mPacketDescs, mPacketIndex, &nPackets, inCompleteAQBuffer->mAudioData) == noErr)
    {
        mPacketIndex += nPackets;
        NSLog(@"sample result");
    }
    else {
        NSLog(@"ext err");
    }
}
Every time it ends up calling NSLog(@"ext err").
Please help me solve this.

In this node.js image module, which should I use? (readStream or from path?)

What is a "Stream"? Which one below should I use for fastest?
Is there a way to open it from Memory, like a buffer?
// can provide either a file path or a ReadableStream
// (from a local file or incoming network request)
var readStream = fs.createReadStream('/path/to/my/img.jpg');
gm(readStream, 'img.jpg')
  .write('/path/to/reformat.png', function (err) {
    if (!err) console.log('done');
  });

// can also stream output to a ReadableStream
// (can be piped to a local file or remote server)
gm('/path/to/my/img.jpg')
  .resize('200', '200')
  .stream(function (err, stdout, stderr) {
    var writeStream = fs.createWriteStream('/path/to/my/resized.jpg');
    stdout.pipe(writeStream);
  });

// pass a format or filename to stream() and
// gm will provide image data in that format
gm('/path/to/my/img.jpg')
  .stream('png', function (err, stdout, stderr) {
    var writeStream = fs.createWriteStream('/path/to/my/reformated.png');
    stdout.pipe(writeStream);
  });

// combine the two for true streaming image processing
var readStream = fs.createReadStream('/path/to/my/img.jpg');
gm(readStream, 'img.jpg')
  .resize('200', '200')
  .stream(function (err, stdout, stderr) {
    var writeStream = fs.createWriteStream('/path/to/my/resized.jpg');
    stdout.pipe(writeStream);
  });

// when working with input streams and any 'identify'
// operation (size, format, etc), you must pass "{bufferStream: true}" if
// you also need to convert (write() or stream()) the image afterwards
// NOTE: this temporarily buffers the image stream in Node memory
var readStream = fs.createReadStream('/path/to/my/img.jpg');
gm(readStream, 'img.jpg')
  .size({bufferStream: true}, function (err, size) {
    this.resize(size.width / 2, size.height / 2)
    this.write('/path/to/resized.jpg', function (err) {
      if (!err) console.log('done');
    });
  });
A stream reads data from a file one chunk at a time. It's useful for reading large files without having to store the entire contents in memory.
If you already have a stream opened and it hasn't started emitting data, pass the stream. Otherwise give it a path and it will have to open a new stream.
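On the buffer part of the question: gm can also be handed an in-memory Buffer instead of a path or stream (supported in reasonably recent gm versions), so if the image already lives in memory there is no need to open a file or stream at all. A minimal sketch:
var fs = require('fs');
var gm = require('gm');

// read (or receive over the network) the image into memory first
var buffer = fs.readFileSync('/path/to/my/img.jpg');

gm(buffer, 'img.jpg') // the second argument is an optional filename/format hint
  .resize(200, 200)
  .write('/path/to/my/resized.jpg', function (err) {
    if (!err) console.log('done');
  });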
