When you play an audio file with ffplay, or a video with the -vn flag, ffplay displays a spectrogram. I'm trying to find which part of the ffplay.c code is responsible for that.
I want to enable/disable video with a key press, and also change the audio visualisation to something else.
I assumed the filter responsible is showspectrum, but I don't see it referenced anywhere, and I can't find any relevant avfilter_graph_create_filter call either.
In the function video_display, there's

if (is->audio_st && is->show_mode != SHOW_MODE_VIDEO)
    video_audio_display(is);

and video_audio_display is what draws the visualisation (a waveform or an RDFT spectrum, depending on is->show_mode). At runtime, pressing the 'w' key cycles through the show modes.
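If the goal is just to switch the visualisation without recompiling, ffplay also exposes the show mode on the command line via its documented -showmode option (the input file name here is a placeholder):

```shell
# 0 = video, 1 = waves, 2 = RDFT spectrum; the 'w' key still cycles at runtime
ffplay -showmode 2 input.mp3
```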
I am building a screen recording application using fluent-ffmpeg.
let command = ffmpeg({ source: '1:0' })
command.inputFormat('avfoundation')
command.fps(60)
command.videoCodec('libx264')
command.audioBitrate('320k')
command.audioCodec('libmp3lame')
command.addOption('-pix_fmt', 'yuv420p')
command.save('output.mp4')
I am able to record the screen, but the audio of the screen is not being recorded. For example: when I play a YouTube clip and record my screen, the video will be there, but the audio from the YouTube clip will not.
Am I missing an ffmpeg option? I can't capture the screen audio (I am not talking about a microphone).
Edit: I want to use ffmpeg instead of Electron's desktopCapturer, as ffmpeg has more options and is more powerful.
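For reference, a raw ffmpeg command roughly equivalent to the fluent-ffmpeg calls above (the device indices and output name are taken from the question; note that on macOS, avfoundation only exposes capture devices, so desktop audio generally requires a virtual loopback device such as Soundflower or BlackHole to appear as an audio input):

```shell
# List avfoundation devices first; the audio index must point at a loopback
# device if you want system audio rather than a microphone.
ffmpeg -f avfoundation -list_devices true -i ""

# "1:0" = video device 1, audio device 0
ffmpeg -f avfoundation -framerate 60 -i "1:0" \
  -c:v libx264 -pix_fmt yuv420p \
  -c:a libmp3lame -b:a 320k \
  output.mp4
```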
We are developing a stop motion app for kids and schools.
So what we have is:
A sequence of images and audio files (no overlapping audio in v1, but there can be gaps between them)
What we need to do:
Combine the images into a video with a frame rate between 1 and 12 fps
Add multiple audio files at given start times
Encode with H.265 to MP4 format
I would really like to avoid maintaining a VM or Azure batch jobs running ffmpeg jobs if possible.
Are there any good frameworks or third-party APIs?
I have only found Transloadit as the closest match, but they don't have an option to add multiple audio files.
Any suggestions or experience in this area are much appreciated.
You've mentioned FFmpeg in your tags, and it is a tool that checks all the boxes.
For the first part of your project (making a video from images) you should check this link. To sum up, you'll use this kind of command:
ffmpeg -r 12 -f image2 -i PATH_TO_FOLDER/frame%d.png PATH_TO_FOLDER/output.mp4
-r 12 sets your frame rate, here 12 fps. You control the output container with the file extension. To control the video codec check out this link; you'll need to add the option -c:v libx265 before the name of your output file.
With FFmpeg you add audio the same way you add video: with the -i option followed by your filename. If you want to cut audio, seek within it using the -ss and -t options. If you want an audio stream to start at a certain point, check out -itsoffset; you can find a lot of examples.
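Putting the pieces together, a single command along these lines should cover the whole pipeline. The file names and the 2000/8000 ms offsets are made-up placeholders; this sketch uses adelay (per-channel delay in milliseconds) plus amix as a one-pass alternative to -itsoffset:

```shell
ffmpeg -framerate 12 -i frames/frame%d.png \
  -i narration1.wav -i narration2.wav \
  -filter_complex "[1:a]adelay=2000|2000[a1]; \
                   [2:a]adelay=8000|8000[a2]; \
                   [a1][a2]amix=inputs=2[aout]" \
  -map 0:v -map "[aout]" \
  -c:v libx265 -pix_fmt yuv420p -c:a aac \
  output.mp4
```

Each audio input is shifted to its start time with adelay, the shifted tracks are merged into one stream with amix, and the merged stream is mapped into the output next to the image-sequence video.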
I just want some confirmation, because I have a sneaking suspicion that I won't be able to do what I want, given that I already ran into errors about ffmpeg not being able to overwrite the input file. I still have some hope that what I want to do is some kind of exception, but I doubt it.
I have already used ffmpeg to extract a specific frame into its own image file, and I've set the thumbnail of a video with an existing image file, but I can't figure out how to set a specific frame from the video as its thumbnail. I want to do this without extracting the frame into a separate file, and I don't want to create an output file: I want to edit the video directly and change the thumbnail to a frame from the video itself. Is that possible?
You're probably better off asking in #ffmpeg-devel on Freenode IRC.
I'd look at "-ss 33.5", or a more precise select filter, "-vf 'select=gte(n\,1000)'"; both will give the same or a very similar result for 30 fps video.
You can also pipe the image out to your own process instead of saving it: "ffmpeg ... -f image2pipe -vcodec mjpeg - | ..."
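As a sketch of both suggestions (input/output names and my_consumer are placeholders): seek with -ss for speed, use the select filter for frame-exact addressing, and write to stdout with the image2pipe muxer to avoid an intermediate file:

```shell
# Fast: seek to ~33.5s and grab one frame
ffmpeg -ss 33.5 -i input.mp4 -frames:v 1 thumb.jpg

# Frame-exact: grab frame number 1000
ffmpeg -i input.mp4 -vf "select=gte(n\,1000)" -frames:v 1 thumb.jpg

# Pipe the JPEG to another process instead of saving it
ffmpeg -ss 33.5 -i input.mp4 -frames:v 1 -f image2pipe -vcodec mjpeg - | my_consumer
```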
We have a DirectShow application where we capture video input from USB, multiplex with audio from a WAV file (backing music), overlay audio and video effects, compress and write to an MP4 file.
Originally we were using an audio input source (microphone) and mixing our backing music and sound effects over the top but the decision was made to not capture live audio, and so I thought it would make more sense to use the backing music WAV file itself as the audio source.
Here is the filter graph we have:
backing.wav is a simple WAV file (stored locally), and was added to the graph using IFilterGraph::AddSourceFilter.
The problem is that when the graph is run, no audio samples are delivered from the WAV file. The video part of the graph runs as normal, but it's as if the audio part of the graph simply isn't running.
If I stop the graph in GraphEdit, add the Default DirectSound Device audio renderer and hook that up in place of the AAC Encoder filter and then run the graph again, the audio plays as you would expect.
Additionally, if backing.wav is replaced with an audio capture source like a microphone, audio data flows through as normal.
Does anyone have any ideas why the above graph, using a WAV file as the audio source, would fail to produce any audio samples?
I suppose the title incorrectly identifies/summarizes the problem. There is nothing to suggest that audio is not produced. It is likely produced equally well with the DirectSound Renderer and with the AAC Encoder; specifically, the data is reaching the output pin of the Mixing Transform Filter (is this your filter? You should be able to trace its flow and see media samples passing through).
With the information given, I would say it's likely that the custom AAC encoder somehow does not like the feed and either drops data or switches to an erroneous state. You should be able to debug this further by inserting a Sample Grabber (or similar) filter¹ before the AAC encoder and tracing the media samples, and by comparing them to data from another source. The encoder might be sensitive to small details like media sample duration or a discontinuity flag on the first sample streamed.
¹ With GraphStudioNext (compared to which GraphEdit no longer makes sense), you can use the internal Analyzer Filter and review the media sample flow interactively via the filter property page.
In what language can I write a quick program to take screenshots and also possibly emulate a keypress?
I have an animated/interactive Flash movie that is a presentation. I want to take a screenshot after I press a particular key.
The end effect is a bunch of screenshots that I can print: basically, it captures the key moments in the Flash presentation.
I've written this in C# without much hassle. Here's the bulk of the code:
// Requires: using System.Drawing; and using System.Drawing.Imaging;
// bitmapSize (a Size) and filename (a string) are defined elsewhere in the app.
using (Bitmap bitmap = new Bitmap(bitmapSize.Width, bitmapSize.Height, PixelFormat.Format24bppRgb))
using (Graphics graphics = Graphics.FromImage(bitmap))
{
    graphics.CopyFromScreen(
        new Point(0, 0),  // source: top-left corner of the screen
        new Point(0, 0),  // destination: top-left corner of the bitmap
        bitmapSize);      // size of the region to copy
    bitmap.Save(filename, ImageFormat.Png);
}
I would recommend writing an app that hosts a browser control. Then you could have the browser control show the SWF and your app would know the exact coordinates of the part of the screen you need to capture. That way you can avoid having to capture a whole screen or whole window that you may have to crop later.
I am sure there are other ways, but here's my idea: you can convert your movie frames to pictures using tools like ffmpeg. From the man page of ffmpeg:
ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
This will extract one video frame per second from the video and will output them in files named foo-001.jpeg, foo-002.jpeg, etc.
Images will be rescaled to fit the new WxH values.
If you want to extract just a limited number of frames, you can use the above command in combination with the -vframes or -t option,
or in combination with -ss to start extracting from a certain point in time.
The number in the file name "simulates" the key press: if you extracted one frame per second and you want to "press" the key at 30 seconds, use the file named foo-030.jpeg.
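Rather than extracting one frame per second and picking a file afterwards, the -ss and -vframes options mentioned above can be combined to grab just the moment you want (foo.avi and the 30-second mark are illustrative):

```shell
# One frame at the 30-second mark
ffmpeg -ss 30 -i foo.avi -vframes 1 foo-keymoment.jpeg
```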
There's a free tool that I found out about recently that does the screen capture part; it's apparently written in Java.
http://screenr.com/