Kodak Playsport camera HDMI output to a Canopus ADVC HD50 (works okay with iMovie).
I need source code for a simple program, similar to Apple's "MyRecorder" tutorial, that demonstrates video capture and then reports the capture resolution.
Currently my program reports: "The operation couldn’t be completed. (OSStatus error -8961.) Make sure that the formats of all video outputs are properly configured."
The answer for me was to purchase Calibrated {Q}'s XD Decode codec. (OSStatus -8961 appears to be QuickTime's noCodecErr, i.e. no installed codec could decode the stream, which is consistent with a missing decoder being the problem.)
My goal is to work out why a given video file does not play on macOS in Safari or QuickTime.
The background to this question is that it is possible to play HEVC videos with a transparent background/alpha channel in Safari on macOS. To be playable, a video must meet the specific requirements set out by Apple in this document:
https://developer.apple.com/av-foundation/HEVC-Video-with-Alpha-Interoperability-Profile.pdf
The video that does not play in Safari/QuickTime is an HEVC video with an alpha transparency channel. Note that VLC for macOS DOES play this file. Here it is:
https://drive.google.com/file/d/1ZnXjcDbk-_YxTgRuH_D7RSR9SXdY_XTv/view?usp=share_link
I have two example HEVC video files with a transparent background/alpha channel, and they both play fine in either QuickTime Player or Safari:
Working video #1:
https://drive.google.com/file/d/1PJAyg_sVKVvb-Py8PAu42c1qm8l2qCbh/view?usp=share_link
Working video #2:
https://drive.google.com/file/d/1kk8ssUyT7qAaK15afp8VPR6mIWPFX8vQ/view?usp=sharing
The first step is to work out in what way my non-working video ( https://drive.google.com/file/d/1ZnXjcDbk-_YxTgRuH_D7RSR9SXdY_XTv/view?usp=share_link ) fails to comply with the specification.
Once it is clear which requirements the non-working video does not meet, I can move on to the next phase, which is to formulate an ffmpeg command that outputs a video meeting the requirements.
I have read Apple's requirements document, and I am out of my depth trying to analyse the non-working video against it - I don't know how to do it.
Can anyone suggest a way to identify what is wrong with the video?
Additional context: I am trying to find a way to create Apple/macOS-compatible alpha-channel/transparent videos using ffmpeg with hevc_nvenc on an Intel machine. I am aware that Apple hardware can create such videos, but for a wide variety of reasons it is not practical for me to use Apple hardware for the job. I have spent many hours trying all sorts of ffmpeg and ffprobe commands to work out what is wrong and to modify the video to fix it, but to be honest most of my attempts have been guesswork.
The Apple specification for an alpha layer in HEVC requires that the encoder process and store the alpha channel in a particular manner. It also requires that the stream's configuration syntax be formed in a specific manner. At the time of writing, I am aware of only Apple's VideoToolbox HEVC encoder being capable of emitting such a stream.
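As a first diagnostic step, it may help to compare what ffprobe reports for a working file against the non-working one; differences in the codec tag (Safari expects hvc1 rather than hev1), profile, and pixel format are the usual suspects. A minimal probe (the file name is a placeholder):
ffprobe -v error -show_entries stream=codec_name,profile,pix_fmt,codec_tag_string -of json video.mov
And, on Apple hardware, a sketch of an encode that should satisfy the profile, since it goes through VideoToolbox (input/output names are placeholders, and the input is assumed to carry an alpha channel):
ffmpeg -i input.mov -c:v hevc_videotoolbox -allow_sw 1 -alpha_quality 0.75 -tag:v hvc1 output.mov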
I'm trying to obtain playback video streams from some Axis and Hikvision cameras using ONVIF.
I'm doing this in a C# application, and the resulting stream is played in VLC.
Using the FindRecordings/GetRecordingSearchResults calls and then GetReplayUri, I can obtain the playback stream (RTSP/H.264), but here I have a problem: it behaves like a live stream - I can only play and pause. I cannot use the time cursor to seek, and I cannot play in reverse.
I find this unusable for a playback application - you would have to watch the entire recording (days or hours of footage!) to see a specific event in time. And once you have played past it, you cannot go back one minute to see it again.
This seems quite stupid to me, so I believe I'm doing something wrong in my code. Maybe I'm missing some configuration needed to obtain a 'true' playback stream.
My question is: is this playback behaviour the 'standard' one, so I cannot expect more? Or do some of you have this working (seek, reverse, frame-by-frame stepping), so I know it can be done?
Thank you.
Reverse playback is possible, but it is not easy. First, the reverse replay is initiated using the Scale header field with a negative value. As an example:
PLAY rtsp://192.168.0.1/path/to/recording RTSP/1.0
CSeq: 123
Session: 12345678
Require: onvif-replay
Range: clock=20090615T114900.440Z-
Rate-Control: no
Scale: -1.0
After the stream is initialized, you will get GOPs in reverse order, not just reversed frames. I don't know whether VLC supports this mode of operation.
Be aware that only devices with the ReversePlayback capability support reverse playback.
Please refer to the streaming specification for further details.
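Seeking works the same way in the forward direction: issuing a new PLAY request with an absolute Range value repositions the stream. A sketch of a seek to a specific instant (the address, session and timestamp are placeholder values in the same style as the example above):
PLAY rtsp://192.168.0.1/path/to/recording RTSP/1.0
CSeq: 124
Session: 12345678
Require: onvif-replay
Range: clock=20090615T120000Z-
Rate-Control: no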
This is not a real solution to the problem above, but maybe it will help others dealing with this situation.
Some cameras I worked with were continuously recording to the same video file (so the end of the time range was not known), and they reported (via RTSP) the available time interval like this:
range:npt=0-
Because of this, VLC did not display any time interval in the time slider, so it did not allow seeking. In my case using VLC was a requirement, so I had to find a workaround.
The workaround was a module acting as a proxy between VLC and the RTSP source (the camera). All RTSP traffic between VLC and the camera went through this module, which I controlled, so I could easily rewrite the camera's responses into a form that was acceptable to VLC. That gave me the seek capability in VLC.
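To illustrate the kind of rewrite involved (the end value below is a made-up example, not something the camera reported), the open-ended interval
range:npt=0-
was replaced in the proxied response with a bounded one, for example
range:npt=0-86400
Once VLC sees a bounded range, it displays the time slider and issues seek requests.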
The original video from the 7160 capture card displays fine in the Honestech HD DVR software that comes with it.
However, when the card is captured using ffmpeg and published out, this error occurs after ffmpeg has been running for a while:
real-time buffer [7160 HD Capture] video input too full or near too full ...
I have already set -rtbufsize 2000M, which is nearly the maximum allowed and cannot be increased further.
Please tell me how to resolve this bug, or give me an example that can be used without producing it. Thank you very much. You do not need the code I used, because almost any code, even the simplest, produced this error after running for a while. The published video also lags and drops frames.
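For reference, a minimal command of the kind that triggers the error might look like the following (illustrative only; the encoder settings and output URL are assumptions, and the device name is taken from the error message above):
ffmpeg -f dshow -rtbufsize 2000M -i video="7160 HD Capture" -c:v libx264 -preset ultrafast -f flv rtmp://example.com/live/stream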
I am processing a video file.
I use ffmpeg to read each packet.
If it is an audio packet, I write it into the output video file using av_interleaved_write_frame.
If it is a video packet, I decode it, get the video frame data, process the image, and compress it back into a packet. Then I write the processed video packet into the output file using av_interleaved_write_frame.
Through debugging, I confirmed that it reads audio packets and video packets correctly.
However, when it reaches av_write_trailer, it crashes. The output video file exists, though.
The error information is:
*** glibc detected *** /OpenCV_videoFlatten_20130507/Debug/OpenCV_videoFlatten_20130507: corrupted double-linked list: 0x000000000348dfa0 ***
Using Movie Player (on Ubuntu), the output video file plays the audio correctly but shows no video.
Using VLC, it shows the first video frame (the picture never changes) and plays the audio correctly.
I tried to debug into av_write_trailer, but since it is inside the ffmpeg library, I could not get detailed information about what is wrong.
One more piece of information: the previous version of the project only processed the video frames, without adding the audio stream, and it worked well.
Any hint or clue?
I found the solution: I was not rescaling the packet timestamps (pts/dts) to the output stream's time_base. The relevant code is in the ffmpeg example muxing.c.
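In case it helps anyone else, here is a minimal sketch of the fix, modelled on ffmpeg's muxing/remuxing examples (the ofmt_ctx, in_stream and out_stream names are assumptions, and av_packet_rescale_ts requires a reasonably recent FFmpeg):
#include <libavformat/avformat.h>

/* Write a packet read from in_stream to out_stream, converting its
 * pts/dts/duration from the input time_base to the output time_base.
 * Skipping this conversion is what produced the broken output above. */
static int write_packet(AVFormatContext *ofmt_ctx,
                        AVStream *in_stream, AVStream *out_stream,
                        AVPacket *pkt)
{
    av_packet_rescale_ts(pkt, in_stream->time_base, out_stream->time_base);
    pkt->stream_index = out_stream->index;
    return av_interleaved_write_frame(ofmt_ctx, pkt);
}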
We are developing an audio recording application for OS X 10.7.5 and above.
We are using AVCaptureAudioDataOutput to capture audio, and the data is written using AVAssetWriterInput.
The properties set on the AVAssetWriterInput are as follows:
AVFormatIDKey : kAudioFormatMPEG4AAC
AVSampleRateKey : This value is taken from the device using the code below
// Query the input device's current stream format; the sample rate is
// read from the returned AudioStreamBasicDescription.
AudioStreamBasicDescription streamDesc;
UInt32 propSize = sizeof(AudioStreamBasicDescription);
AudioDeviceGetProperty(mInputDevice, 0, YES, kAudioDevicePropertyStreamFormat, &propSize, &streamDesc);
AVNumberOfChannelsKey : This is taken from the AudioStreamBasicDescription.
AVEncoderBitRateKey : This is hardcoded to 96000.
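For reference, an equivalent, non-deprecated way to read the device's sample rate uses the AudioObject API; a sketch, with mInputDevice passed as the device argument (whether feeding this value into AVSampleRateKey changes anything for the failing devices is untested):
#include <CoreAudio/CoreAudio.h>

// Read the input device's nominal sample rate via the AudioObject API
// (AudioDeviceGetProperty is deprecated). Returns 0 on failure.
static Float64 DeviceInputSampleRate(AudioDeviceID device)
{
    Float64 sampleRate = 0;
    UInt32 size = sizeof(sampleRate);
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyNominalSampleRate,
        kAudioObjectPropertyScopeInput,
        kAudioObjectPropertyElementMaster
    };
    if (AudioObjectGetPropertyData(device, &addr, 0, NULL, &size, &sampleRate) != noErr)
        return 0;
    return sampleRate;
}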
This works fine for most audio devices, except for a USB mic with a 32 kHz sample rate and an iSight device with a 48 kHz sample rate.
When we use either of these two devices as the input audio device, we get the following error while writing the audio data:
Error Domain=AVFoundationErrorDomain Code=-11821 "Cannot Decode" UserInfo=0x279470 {NSLocalizedFailureReason=The media data could not be decoded. It may be damaged., NSUnderlyingError=0x292370 "The operation couldn’t be completed. (OSStatus error 560226676.)", NSLocalizedDescription=Cannot Decode}
However, in the case of the USB mic, if we hardcode AVSampleRateKey to 44100 it works perfectly, but that does not work for the iSight device.
What is the correct value to provide for AVSampleRateKey? Can anyone help me resolve this issue?