Is it possible, without much effort (e.g. from the command line or PowerShell), to create a named pipe?
My goal is to write continuously to that pipe from an ffmpeg process.
Without opening a pipe first, the following command
ffmpeg -i "path\to\my\File\name of my File" -f webm \\.\pipe\from_ffmpeg
fails with
"\\.\pipe\from_ffmpeg: No such file or directory"
In the big picture, I want to read a live web video stream, analyze it, and take live actions based on it.
I am working with OpenCV in Java on a Windows machine. At the moment I run several ffmpeg processes, each recording a different sector of the stream, e.g. the pixel regions (45, 45, 100, 100) and (200, 200, 100, 100) (x, y, height, width). The results are saved as JPG files in the file system and then opened by a Java process. This works, but I think I would gain significant performance by skipping the detour through the file system and piping the input directly into the Java process.
I know there is an option to live-capture video streams via OpenCV, but the framework does not support as many formats as ffmpeg does.
Related
I am trying to find a way to send large video files through Firefox Send.
Because Firefox Send has a limit of 2.5 GB per file, I need to break a video file up into parts that are each smaller than 2.5 GB.
Is there a relatively simple way to reliably split a video based on a size limit using FFmpeg, rather than on duration? (Using duration would be unreliable, because equal-length portions of a video can differ in size.)
EDIT 1: I apologize for the lack of clarity; I was planning on using a Bash script with FFmpeg and ffsend. I was wondering if there is any way to do this through video processing rather than zip compression.
The standard utility split is intended for precisely this sort of thing.
# sender does (2000 MiB per piece stays safely under the limit;
# note that 2500m would be about 2.62 decimal GB and could exceed it):
split -b 2000m file.mpg file.mpg__split_
# recipient downloads all the pieces and does:
cat file.mpg__split_* > file.mpg
A disadvantage of this procedure is that the individual parts are not usable.
An advantage is that the final output is identical to the original.
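A quick round trip on a small throwaway file (the file names and sizes here are invented for the demo) shows that the pieces reassemble byte-for-byte:

```shell
# create a 5 MB dummy "video" as a stand-in for file.mpg
head -c 5000000 /dev/urandom > file.mpg
# split into 2 MB pieces (for Firefox Send you would use e.g. -b 2000m)
split -b 2000000 file.mpg file.mpg__split_
# reassemble and check that the result is byte-identical to the original
cat file.mpg__split_* > rejoined.mpg
cmp file.mpg rejoined.mpg && echo identical
```

`cmp` exits non-zero on the first differing byte, so `identical` is only printed when the reassembly really reproduced the original.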
I want to be able to read/stream a video with Ruby, on Mac, and be able to get access to the pixel data for each frame.
What I've tried
https://github.com/streamio/streamio-ffmpeg
It's good at splitting the video into frames, but I don't know how to access the pixel data without saving each frame as an image first (or whether that's possible at all).
require 'streamio-ffmpeg'
movie = FFMPEG::Movie.new("flakes.mp4")
movie.screenshot("screenshots/screenshot_%d.jpg", { custom: %w(-vf crop=60:60:10:10), vframes: (movie.duration).to_i, frame_rate: movie.frame_rate/24 }, { validate: false })
https://github.com/wedesoft/hornetseye-ffmpeg
This one seemed to have a lot of potential, but I don't think it's maintained anymore, and it's not really meant to be used on macOS, so I'm having trouble installing it there (headers not found and the like, and no way to configure it as far as I can tell).
Any idea what tool or method I could use for this use case?
If you have ffmpeg available (streamio-ffmpeg just wraps command line calls to ffmpeg), you can create a named pipe with File.mkfifo and have ffmpeg write its screenshots to that pipe.
Then open the pipe in Ruby like you would any normal file and you can read the screenshot images directly from ffmpeg without them being transferred to/from disk. Parse the pixel data with the Ruby gem of your choice.
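A minimal sketch of this in Ruby, assuming a Unix-like system (File.mkfifo is not available on Windows). A forked child stands in for ffmpeg so the sketch is self-contained; the commented-out spawn line shows where the real ffmpeg call, mirroring the question's 60x60 crop, would go:

```ruby
require "tmpdir"

pipe_path = File.join(Dir.tmpdir, "frames.pipe")
File.delete(pipe_path) if File.exist?(pipe_path)
File.mkfifo(pipe_path)

# In real use the writer would be ffmpeg, e.g.:
#   spawn("ffmpeg", "-i", "flakes.mp4", "-vf", "crop=60:60:10:10",
#         "-f", "rawvideo", "-pix_fmt", "rgb24", "-y", pipe_path)
# Here a forked child stands in for it, writing two fake 60x60 RGB "frames".
frame_bytes = 60 * 60 * 3              # width * height * 3 bytes per RGB pixel
writer = fork do
  File.open(pipe_path, "wb") { |w| w.write("\x80" * (frame_bytes * 2)) }
end

frames = []
File.open(pipe_path, "rb") do |pipe|   # blocks until the writer connects
  while (frame = pipe.read(frame_bytes))
    frames << frame                    # raw pixel bytes, ready to parse
  end
end
Process.wait(writer)
File.delete(pipe_path)
puts frames.length                     # => 2
```

With rawvideo output, each frame is a fixed number of bytes, so reading frame-sized chunks in a loop cleanly separates the frames without any image decoding.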
I'm trying to find a way to sort videos by their bytes-per-second (B/s) ratio. I don't mean the bitrate one can set when rendering a video, but the actual "how big is this file" divided by "how long is the video" ratio.
The videos are in different folders (all contained in one parent folder), and I don't want the sorting to change their location. I want a descending list with the file name, optionally the path to the file, and the B/s ratio; command-line output would be fine.
Is there any way to do this in Windows natively? I assume there isn't, so my question is rather: how would one do that? My best guess is to write a .bat script for it, but there might also be existing programs for something like this.
OK, this seems quite easy to do by getting the bitrate of the files via ffmpeg:
FFMPEG - batch extracting media duration and writing to a text file
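A sketch of the plumbing in a Unix-style shell (e.g. WSL or Git Bash on Windows). The paths and numbers below are invented sample data; with real files the byte count would come from `wc -c` and the duration from the ffprobe call shown in the comment, since the question's ratio is plain file size divided by duration:

```shell
# For each real file you would gather size and duration like this:
#   size=$(wc -c < "$f")
#   dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$f")
# Then emit "path<TAB>bytes<TAB>seconds", divide, and sort descending:
{
  printf '%s\t%s\t%s\n' "clips/a.mp4" 50000000 100
  printf '%s\t%s\t%s\n' "clips/b.mp4" 90000000 300
  printf '%s\t%s\t%s\n' "clips/c.mp4" 20000000 25
} | awk -F'\t' '{ printf "%.0f\t%s\n", $2 / $3, $1 }' | sort -rn
```

This prints one "bytes-per-second, path" line per file, largest ratio first (here clips/c.mp4 at 800000 B/s tops the list).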
I used C and FFmpeg to write a program that muxes real-time audio and video into MP4 files, and everything works fine. However, if the power suddenly fails while the program is recording, the MP4 file being written is damaged and VLC cannot play it.
I think the reason is that av_write_trailer is never called, so the index and timestamp information are lost. Using the Araxis Merge comparison tool, I compared a file written with a successful av_write_trailer call against a damaged file without one, and found two differences:
1. In the damaged file, the box count/size values in the file header are not right.
2. The damaged file is missing its end-of-file data.
Now I want my program to repair the damaged files automatically after power is restored, but I have not found an effective method via Google.
My idea is this: during normal recording, save once per second the two pieces of information a damaged file is missing (the box values and the end-of-file data) to a local file, and delete that file once the MP4 has been written completely. If the power fails, then on the next startup, read that file and write the saved information to the corresponding positions in the damaged file. The problem is that I don't know how to obtain the box values and the end-of-file data. Is this approach feasible? If so, how should I do it? Looking forward to your reply!
The main cause of MP4 file damage is the header or trailer not being written properly; the whole file then becomes junk data, and no media player is able to play the broken MP4 file.
So,
First, the broken file has to be repaired before it can be played.
There are some applications and tricks available to repair it and get the data back;
links are given below:
http://grauonline.de/cms2/?page_id=5 (Windows/Mac, paid)
https://github.com/ponchio/untrunc (Linux-based OS, free!)
Second, manually repairing the corrupt file with a hex editor.
The logic behind this hack:
It requires the broken MP4 file and a good video file, where both videos were captured by the same camera; the good file should also be larger than the broken one.
Open both video files in any hex editor, copy the trailer part from the good file into the broken file, and save it. Done!
Note: always keep a backup of the video file.
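The byte surgery behind this hack can be sketched with ordinary shell tools on two throwaway files (the 12-byte "trailer" and the file contents here are invented; with real MP4s you would first locate the actual trailer bytes in the hex editor):

```shell
# stand-ins for a good recording and a broken one (real ones would be MP4s)
printf 'MDAT-PAYLOAD-GOOD.....GOOD-TRAILER' > good.bin
printf 'MDAT-PAYLOAD-BROKEN...'             > broken.bin

# copy the last 12 bytes (the "trailer") from the good file...
tail -c 12 good.bin > trailer.bin

# ...and append them to the broken file, as one would do in a hex editor
cat trailer.bin >> broken.bin
tail -c 12 broken.bin      # prints: GOOD-TRAILER
```

In a hex editor the steps are the same: select the trailer region in the good file, paste it at the end of the broken file, and save.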
Follow these links for detailed information:
http://janit.iki.fi/repair-corrupted-mp4-video/
https://www.lfs.net/forum/thread/45156-Repair-a-corrupt-mp4-file%3F
http://hackaday.com/2015/04/02/manual-data-recovery-with-a-hex-editor/
http://www.hexview.org/hex-repair-corrupt-file.html
Third, even though MP4 has many advantages, this kind of error is unpredictable and difficult to handle.
Thus, using a format such as MPG with AV_CODEC_ID_MPEG1VIDEO/AV_CODEC_ID_MPEG2VIDEO (in FFmpeg) may help avoid this kind of error. The MPG format does not require any header/trailer, so after a sudden power failure the file still plays whatever frames were stored up to that point.
Note: other formats and codecs with this kind of property are also available.
I have a C# app. I have 100 JPEGs (for an example).
I can easily encode this into a video file.
When it has finished encoding, the client app uploads the file to my server via FTP.
It would be 'neater' if I could avoid writing the video file to disk and instead write it to a memory stream (or byte array) and upload it via a web service, perhaps.
I have checked the ffmpeg documentation, but as a C# developer I do not find it natural to understand the C examples.
I guess my first question is: is this possible? And if so, can anyone post example code in C#? At the moment I am using the Process class in C# to execute ffmpeg.
Your process approach is already good; it just needs a few adjustments:
Set StartInfo.RedirectStandardOutput = true and StartInfo.UseShellExecute = false.
Instead of an output file name, call ffmpeg with pipe:, which makes it write to standard output. Also, since the format can no longer be determined from the file name, make sure you use the -f <format> switch as well.
Start the process.
While the process is running, read from Process.StandardOutput.BaseStream (.BaseStream, so the StreamReader that is .StandardOutput doesn't mess anything up) into your memory stream.
Read anything still remaining buffered in Process.StandardOutput.BaseStream.
Profit.
I coded a thumbnailer a while back (BSD 2-clause licensed) that has actual code demonstrating this. It doesn't matter whether it is an image or a video coming out of ffmpeg in the end.
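The steps above, sketched in Java rather than C# (the process APIs map closely; printf stands in for ffmpeg here so the sketch runs on its own — with ffmpeg you would pass the pipe: output and the -f switch as described):

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class PipeCapture {
    // Reads a child process's standard output fully into memory.
    static byte[] capture(String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).start();
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (InputStream out = p.getInputStream()) {
            byte[] chunk = new byte[8192];
            int n;
            while ((n = out.read(chunk)) != -1) {
                buf.write(chunk, 0, n);   // accumulate instead of writing a file
            }
        }
        p.waitFor();
        return buf.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // With ffmpeg this would be something like:
        //   capture("ffmpeg", "-i", "in.avi", "-f", "webm", "pipe:")
        // "printf" stands in so the sketch runs without ffmpeg installed.
        byte[] video = capture("printf", "fake-video-bytes");
        System.out.println(video.length);   // 16
    }
}
```

The resulting byte array can then be handed straight to an upload call, with the file system never involved.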