Node screen recording with ffmpeg does not have audio

I am building a screen recording application using fluent-ffmpeg.
const ffmpeg = require('fluent-ffmpeg')
let command = ffmpeg({ source: '1:0' })
command.inputFormat('avfoundation')
command.fps(60)
command.videoCodec('libx264')
command.audioBitrate('320k')
command.audioCodec('libmp3lame')
command.addOption('-pix_fmt', 'yuv420p')
command.save('output.mp4')
I am able to record the screen, but the audio of the screen is not being recorded. For example: when I play a YouTube clip and record my screen, the video is captured but the audio from the YouTube clip is not.
Am I missing an option in ffmpeg? I can't get the screen audio (I am not talking about a microphone).
Edit: I want to use ffmpeg instead of Electron's desktopCapturer, as ffmpeg has more options and is more powerful.
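As far as I can tell, this is a macOS limitation rather than a missing option: avfoundation only exposes capture devices, and macOS does not expose system audio ("what you hear") as a capture device. The usual workaround is to install a loopback driver such as BlackHole or Soundflower, route system output through it, and use its device index as the audio half of the avfoundation input. A minimal sketch, assuming the loopback device shows up as audio device 0 when you run ffmpeg -f avfoundation -list_devices true -i "" (device indices vary per machine):

const ffmpeg = require('fluent-ffmpeg')

let command = ffmpeg()
command.input('1:0')                     // avfoundation "video:audio" device indices
command.inputFormat('avfoundation')
command.fps(60)
command.videoCodec('libx264')
command.audioCodec('aac')                // AAC is the usual audio codec for .mp4
command.audioBitrate('320k')
command.addOption('-pix_fmt', 'yuv420p')
command.save('output.mp4')

Note the codec swap: .mp4 containers normally carry AAC audio, and MP3 in MP4 is poorly supported by some players.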

Related

How to get image frames from video in Flutter in Windows?

In my Flutter desktop application I am playing video using the dart_vlc package. I want to take image frames (like screenshots) from a video file at different positions. At first I tried the screenshot package, where I found two problems.
It takes a screenshot of the NativeVideo widget (a widget for showing video) itself, but does not include the video frame rendered inside the widget.
The video control buttons (play, pause, volume) appear in the screenshot, which I don't want.
There are other packages for exporting image frames from video, such as export_video_frame or video_thumbnail, but neither of them supports Windows. I want to do the same thing they do, on Windows.
So how do I get image frames from a video file playing in a video player on Windows?
The dart_vlc package itself provides a function to take snapshots from the video file currently playing in the video player. The general structure is
videoPlayer.takeSnapshot(file, width, height)
Example:
videoPlayer.takeSnapshot(File('C:/snapshots/snapshot1.png'), 600, 400)
This call captures the video player's current position as an image and saves it to the given file.
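If you need frames from several positions rather than just the current one, you can seek first and then snapshot. A rough sketch (untested; the fixed delay after seek() is a crude assumption to let the new frame render before it is captured):

import 'dart:io';
import 'package:dart_vlc/dart_vlc.dart';

Future<void> captureFrames(Player player, List<Duration> positions) async {
  for (var i = 0; i < positions.length; i++) {
    player.seek(positions[i]);
    // crude settle delay; in practice you might await a position update instead
    await Future.delayed(const Duration(milliseconds: 500));
    player.takeSnapshot(File('C:/snapshots/frame_$i.png'), 600, 400);
  }
}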

Video was encoded with a new width + height along with the old one. Can I re-encode with just the old dimensions using ffmpeg?

I've got a video out of OBS that plays normally on my system if I open it with VLC, for example, but when I import it into my editor (Adobe Premiere) it gets weirdly cropped down. Inspecting the video's metadata shows that, for some reason, it has been encoded with a new width and height on top of the old ones! Is there a way to use ffmpeg to re-encode/transcode the video to a new file with only the original width and height?
Bonus question: would there be a way for me to extract the audio channels from my video as separate .mp3 files? There are 4 audio channels on the video.
Every time you re-encode a video you will lose some quality. Scaling the video up will not reintroduce details that were lost when it was scaled down.
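That said, if the file merely carries wrong dimension/aspect metadata, a single re-encode that forces the original coded dimensions and resets the sample aspect ratio should fix the Premiere import. A sketch, with 1920:1080 standing in for your real original dimensions:

ffmpeg -i input.mp4 -vf "scale=1920:1080,setsar=1" -c:v libx264 -c:a copy fixed.mp4

For the bonus question: if the four "channels" are actually four separate audio tracks (as OBS multi-track recordings are), -map can pull each one into its own MP3:

ffmpeg -i input.mp4 -map 0:a:0 track1.mp3 -map 0:a:1 track2.mp3 -map 0:a:2 track3.mp3 -map 0:a:3 track4.mp3

If it is instead a single multi-channel stream, you would split it with the channelsplit filter first.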

Where is the default audio visualisation in the ffplay.c?

When you play audio with ffplay, or video with the -vn flag, ffplay displays a spectrogram. I'm trying to find which part of the ffplay.c code is responsible for that.
I want to enable/disable video with a press of a button, and also change the audio visualisation to something else.
I suppose the filter that does this is called showspectrum, but I don't see it used anywhere. And I can't find any interesting avfilter_graph_create_filter call either.
In the function video_display, there's
if (is->audio_st && is->show_mode != SHOW_MODE_VIDEO)
video_audio_display(is);
And video_audio_display generates the spectrogram. It computes the spectrum directly with an RDFT instead of going through the showspectrum filter, which is why there is no avfilter_graph_create_filter call to find.
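For toggling at runtime: the w key cycles is->show_mode (via toggle_audio_display(), called from event_loop). Its cycling logic is roughly the following paraphrase, not guaranteed to match your ffplay.c verbatim:

static void toggle_audio_display(VideoState *is)
{
    int next = is->show_mode;
    do {
        next = (next + 1) % SHOW_MODE_NB;
    } while (next != is->show_mode &&
             ((next == SHOW_MODE_VIDEO && !is->video_st) ||
              (next != SHOW_MODE_VIDEO && !is->audio_st)));
    if (is->show_mode != next) {
        is->force_refresh = 1;      /* make video_display() redraw */
        is->show_mode = next;
    }
}

So to add your own visualisation, you would add a mode to the ShowMode enum and branch on it inside video_audio_display.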

How do I crop a rectangle from a video, from the command-line?

I'm capturing a video of a test-run on an iOS simulator, and I have an automated script to tell QuickTime to record only a rectangle of video.
This works well on a desktop displaying to the physically connected screen, but for remote desktop users the resulting video is a garbled rendition of parts of the primary screen instead of the rectangle from the secondary user's view.
It's even worse on a Mac Pro running VMs: ALL users get a blank, black rectangle. This happened on Yosemite and still happens on El Capitan.
Oddly, capturing a full screen works properly for all sessions, so I could record the whole thing, and then crop out the window I want - it doesn't move.
Is there a good command-line tool that can crop a rectangle from a (full-screen) video stream? I looked at ffmpeg, but couldn't find such an option listed.
Thanks
You can use ffmpeg to do this as well:
ffmpeg -i in.mp4 -filter:v "crop=out_w:out_h:x:y" out.mp4
source: https://video.stackexchange.com/questions/4563/how-can-i-crop-a-video-with-ffmpeg
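In crop=out_w:out_h:x:y, out_w and out_h are the width and height of the rectangle to keep, and x:y is its top-left corner measured from the top-left of the frame. For example (coordinates hypothetical), to cut a 640x480 window whose top-left corner sits at (200, 100) while copying the audio untouched:

ffmpeg -i fullscreen.mp4 -filter:v "crop=640:480:200:100" -c:a copy cropped.mp4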
To preview a 100-pixel-square crop at position (x=100, y=100) with mplayer:
mplayer -vf "crop=100:100:100:100" foo.avi

ffmpeg picture slideshow with audio

I'm trying to make a batch of videos for uploading to YouTube. My emphasis is on the audio (mostly MP3, with some WMA). I have several image files that need to be picked at random to go with the audio, i.e. each image is displayed for 5 seconds before the next one is shown. I want the video to stop when the audio stream ends. How should I use ffmpeg to achieve this?
Ref:
http://trac.ffmpeg.org/wiki/Create%20a%20video%20slideshow%20from%20images
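One way to approach it (an untested sketch; the glob pattern, file names, and 5-second rate are assumptions): feed the images as a slideshow input with -framerate 1/5 so each image lasts five seconds, add the audio as a second input, and let -shortest end the output when the shorter stream runs out:

ffmpeg -framerate 1/5 -pattern_type glob -i 'slides/*.jpg' -i audio.mp3 -c:v libx264 -pix_fmt yuv420p -r 25 -shortest output.mp4

If you have fewer images than the audio needs, loop the image input by putting -stream_loop -1 before its -i. For the random order, shuffle the file list beforehand (for example, copy the images to randomly numbered names with a small script), since the image2 demuxer reads files in sorted order.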
