NReco video cut - ffmpeg

I have written a function to cut a video using the NReco library.
public void SplitVideo(string SourceFile, string DestinationFile, int StartTime, int EndTime)
{
    var ffMpegConverter = new FFMpegConverter();
    ffMpegConverter.ConvertMedia(SourceFile, null, DestinationFile, null,
        new ConvertSettings()
        {
            Seek = StartTime,
            MaxDuration = (EndTime - StartTime), // chunk duration
            VideoCodec = "copy",
            AudioCodec = "copy"
        });
}
This works, but it gives me a video that starts from the beginning of the source and runs for the max duration I have assigned. It does not start from the seek position. Can someone help me with this?

I have found the answer to this issue; maybe it will help someone.
I was using the wrong codecs. You have to use the correct codec type for the file type you are converting. Here I am using an mp4 file, so I had to use
libx264 and mp3. Below is the sample code:
public void SplitVideo(string SourceFile, string DestinationFile, int StartTime, int EndTime)
{
    var ffMpegConverter = new FFMpegConverter();
    ffMpegConverter.ConvertMedia(SourceFile, null, DestinationFile, null,
        new ConvertSettings()
        {
            Seek = StartTime,
            MaxDuration = (EndTime - StartTime), // chunk duration
            VideoCodec = "libx264",
            AudioCodec = "mp3"
        });
}
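For reference, here is a minimal, self-contained sketch of how the corrected settings might be used. The file names and time offsets are placeholders, and the NReco.VideoConverter namespace is assumed from the package name:
using NReco.VideoConverter;

class Program
{
    // Sketch only: cut a 60-second clip starting at the 30-second mark,
    // re-encoding with libx264/mp3 so the Seek value is honored.
    static void Main()
    {
        var ffMpegConverter = new FFMpegConverter();
        ffMpegConverter.ConvertMedia("input.mp4", null, "clip.mp4", null,
            new ConvertSettings()
            {
                Seek = 30,
                MaxDuration = 60, // EndTime - StartTime
                VideoCodec = "libx264",
                AudioCodec = "mp3"
            });
    }
}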

Related

Resize and Upload Images with Blazor WebAssembly

I am using the following sample to resize the uploaded images with Blazor WebAssembly:
https://www.prowaretech.com/Computer/Blazor/Examples/WebApi/UploadImages
I still need the original file to be converted to base64 as well, and I don't know how I can access it...
I tried to find the file's original width and height to pass them to the RequestImageFileAsync function, but no success...
I need to store both files: the original one and the resized one.
Can you help me, please?
Thank you very much!
The InputFile control emits an IBrowserFile type. RequestImageFileAsync is a convenience method on IBrowserFile to resize the image and convert the type. The result is still an IBrowserFile.
One way to do what you are asking is with SixLabors.ImageSharp. Based on the ProWareTech example, something like this...
async Task OnChange(InputFileChangeEventArgs e)
{
    var files = e.GetMultipleFiles(); // get the files selected by the user
    foreach (var file in files)
    {
        // Original-sized file
        var buf1 = new byte[file.Size];
        using (var stream = file.OpenReadStream())
        {
            await stream.ReadAsync(buf1); // copy the stream to the buffer
        }
        origFilesBase64.Add(new ImageFile { base64data = Convert.ToBase64String(buf1), contentType = file.ContentType, fileName = file.Name }); // convert to a base64 string
        // Resized file
        var resizedFile = await file.RequestImageFileAsync(file.ContentType, 640, 480); // resize the image file
        var buf = new byte[resizedFile.Size]; // allocate a buffer to fill with the file's data
        using (var stream = resizedFile.OpenReadStream())
        {
            await stream.ReadAsync(buf); // copy the stream to the buffer
        }
        filesBase64.Add(new ImageFile { base64data = Convert.ToBase64String(buf), contentType = file.ContentType, fileName = file.Name }); // convert to a base64 string
    }
    // To get the image sizes for the first image, decode the stored base64 data
    // back to bytes and load it with ImageSharp (requires using SixLabors.ImageSharp;)
    using var origImage = Image.Load(Convert.FromBase64String(origFilesBase64[0].base64data));
    int origImgHeight = origImage.Height;
    int origImgWidth = origImage.Width;
    using var resizedImage = Image.Load(Convert.FromBase64String(filesBase64[0].base64data));
    int resizedImgHeight = resizedImage.Height;
    int resizedImgWidth = resizedImage.Width;
}
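The handler above also relies on an ImageFile type and two backing lists from the ProWareTech example; a rough sketch of those supporting declarations (member names taken from the code above, everything else assumed) might be:
// Hypothetical supporting declarations for the handler above,
// placed in the component's @code block.
public class ImageFile
{
    public string base64data { get; set; }  // base64-encoded image bytes
    public string contentType { get; set; } // e.g. "image/jpeg"
    public string fileName { get; set; }
}

// Backing lists the handler adds to; these can be bound to <img src="data:..."> tags in the markup.
List<ImageFile> origFilesBase64 = new List<ImageFile>();
List<ImageFile> filesBase64 = new List<ImageFile>();
Note also that IBrowserFile.OpenReadStream enforces a maximum file size (512,000 bytes by default), so larger uploads will need the maxAllowedSize argument raised.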

How to add "dmb1" four char code in ffmpeg?

I'm trying to stream video from a webcam to the local computer. The stream has a resolution of 3840x2160 at 30 fps. The computer I'm using is a Mac Pro. However, when I run it with the following command:
ffmpeg -f avfoundation -framerate 30 -video_size 3840x2160 -pix_fmt nv12 -probesize "50M" -i "0" -pix_fmt nv12 -preset ultrafast -vcodec libx264 -tune zerolatency -f mpegts udp://192.168.1.5:5100/mystream
it has a latency of 3-4 seconds. This problem is not present in Chromium: when using the MediaStream API, the stream is displayed in real time.
I believe that's because Chromium supports the "dmb1" four-char code:
+ (media::VideoPixelFormat)FourCCToChromiumPixelFormat:(FourCharCode)code {
  switch (code) {
    case kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange:
      return media::PIXEL_FORMAT_NV12; // Mac fourcc: "420v".
    case kCVPixelFormatType_422YpCbCr8:
      return media::PIXEL_FORMAT_UYVY; // Mac fourcc: "2vuy".
    case kCMPixelFormat_422YpCbCr8_yuvs:
      return media::PIXEL_FORMAT_YUY2;
    case kCMVideoCodecType_JPEG_OpenDML:
      return media::PIXEL_FORMAT_MJPEG; // Mac fourcc: "dmb1".
    default:
      return media::PIXEL_FORMAT_UNKNOWN;
  }
}
To set the pixel format, Chromium uses the following piece of code:
NSDictionary* videoSettingsDictionary = @{
  (id)kCVPixelBufferWidthKey : @(width),
  (id)kCVPixelBufferHeightKey : @(height),
  (id)kCVPixelBufferPixelFormatTypeKey : @(best_fourcc),
  AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill
};
[_captureVideoDataOutput setVideoSettings:videoSettingsDictionary];
I tried doing the same thing in ffmpeg by changing the avfoundation.m file. First I added a new pixel format, AV_PIX_FMT_MJPEG:
static const struct AVFPixelFormatSpec avf_pixel_formats[] = {
    { AV_PIX_FMT_MONOBLACK, kCVPixelFormatType_1Monochrome },
    { AV_PIX_FMT_RGB555BE, kCVPixelFormatType_16BE555 },
    { AV_PIX_FMT_RGB555LE, kCVPixelFormatType_16LE555 },
    { AV_PIX_FMT_RGB565BE, kCVPixelFormatType_16BE565 },
    { AV_PIX_FMT_RGB565LE, kCVPixelFormatType_16LE565 },
    { AV_PIX_FMT_RGB24, kCVPixelFormatType_24RGB },
    { AV_PIX_FMT_BGR24, kCVPixelFormatType_24BGR },
    { AV_PIX_FMT_0RGB, kCVPixelFormatType_32ARGB },
    { AV_PIX_FMT_BGR0, kCVPixelFormatType_32BGRA },
    { AV_PIX_FMT_0BGR, kCVPixelFormatType_32ABGR },
    { AV_PIX_FMT_RGB0, kCVPixelFormatType_32RGBA },
    { AV_PIX_FMT_BGR48BE, kCVPixelFormatType_48RGB },
    { AV_PIX_FMT_UYVY422, kCVPixelFormatType_422YpCbCr8 },
    { AV_PIX_FMT_YUVA444P, kCVPixelFormatType_4444YpCbCrA8R },
    { AV_PIX_FMT_YUVA444P16LE, kCVPixelFormatType_4444AYpCbCr16 },
    { AV_PIX_FMT_YUV444P, kCVPixelFormatType_444YpCbCr8 },
    { AV_PIX_FMT_YUV422P16, kCVPixelFormatType_422YpCbCr16 },
    { AV_PIX_FMT_YUV422P10, kCVPixelFormatType_422YpCbCr10 },
    { AV_PIX_FMT_YUV444P10, kCVPixelFormatType_444YpCbCr10 },
    { AV_PIX_FMT_YUV420P, kCVPixelFormatType_420YpCbCr8Planar },
    { AV_PIX_FMT_NV12, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange },
    { AV_PIX_FMT_YUYV422, kCVPixelFormatType_422YpCbCr8_yuvs },
    { AV_PIX_FMT_MJPEG, kCMVideoCodecType_JPEG_OpenDML }, // dmb1
#if !TARGET_OS_IPHONE && __MAC_OS_X_VERSION_MIN_REQUIRED >= 1080
    { AV_PIX_FMT_GRAY8, kCVPixelFormatType_OneComponent8 },
#endif
    { AV_PIX_FMT_NONE, 0 }
};
After that I tried to hardcode it:
pxl_fmt_spec = avf_pixel_formats[22];
ctx->pixel_format = pxl_fmt_spec.ff_id;
pixel_format = [NSNumber numberWithUnsignedInt:pxl_fmt_spec.avf_id];
capture_dict = [NSDictionary dictionaryWithObject:pixel_format
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[ctx->video_output setVideoSettings:capture_dict];
The code compiles and builds successfully, but when I run it with the above command, without -pix_fmt specified, the program enters an infinite loop in the get_video_config function:
while (ctx->frames_captured < 1) {
    CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.1, YES);
}
It looks obvious that ffmpeg is not able to load the first frame. My camera is more than capable of supporting this pixel format and stream format. I proved it with this piece of code, which comes after ffmpeg selects which format to use for the specified width, height and fps:
FourCharCode fcc = CMFormatDescriptionGetMediaSubType([selected_format formatDescription]);
char fcc_string[5] = { 0, 0, 0, 0, '\0'};
fcc_string[0] = (char) (fcc >> 24);
fcc_string[1] = (char) (fcc >> 16);
fcc_string[2] = (char) (fcc >> 8);
fcc_string[3] = (char) fcc;
av_log(s, AV_LOG_ERROR, "Selected format: %s\n", fcc_string);
The above code prints "Selected format: dmb1".
Can someone tell me why ffmpeg can't load the first frame, and how to add a new pixel format to this library?
Also, any suggestion on how to resolve the 3-second input latency in some other way is more than welcome.
EDIT:
If you try setting any pixel format in Chromium other than MJPEG, there is a latency of 2 seconds. When I say "setting" I mean changing the Chromium source code and recompiling it. I am pretty sure that the problem is the pixel format, because the camera is sending dmb1 and ffmpeg doesn't know about that format.
Also, the latency is only present on macOS.

Audio Timeout Error: .NET Core Google Speech to Text Code Causing Timeout

Problem Description
I am a .NET Core developer and I have recently been asked to transcribe mp3 audio files that are approximately 20 minutes long into text, so the file is about 30.5 MB. The issue is that speech is sparse in this file, with anywhere from 2 to 4 minutes between spoken sentences.
I've written a small service based on the Google Speech documentation that streams 32 KB of data from the file at a time to be processed. All was progressing well until I hit the error I share below:
I have searched via google-fu, Google forums, and other sources, and I have not encountered documentation on this error. Suffice it to say, I think this is due to the sparsity of spoken words in my file. I am wondering if there is a programmatic workaround?
Code
I have used some code that is a slight modification of the Google .NET sample for 32 KB streaming. You can find it here.
public async void Run()
{
    var speech = SpeechClient.Create();
    var streamingCall = speech.StreamingRecognize();
    // Write the initial request with the config.
    await streamingCall.WriteAsync(
        new StreamingRecognizeRequest()
        {
            StreamingConfig = new StreamingRecognitionConfig()
            {
                Config = new RecognitionConfig()
                {
                    Encoding =
                        RecognitionConfig.Types.AudioEncoding.Flac,
                    SampleRateHertz = 22050,
                    LanguageCode = "en",
                },
                InterimResults = true,
            }
        });
    // Helper Function: Print responses as they arrive.
    Task printResponses = Task.Run(async () =>
    {
        while (await streamingCall.ResponseStream.MoveNext(
            default(CancellationToken)))
        {
            foreach (var result in streamingCall.ResponseStream.Current.Results)
            {
                //foreach (var alternative in result.Alternatives)
                //{
                //    Console.WriteLine(alternative.Transcript);
                //}
                if (result.IsFinal)
                {
                    Console.WriteLine(result.Alternatives.ToString());
                }
            }
        }
    });
    string filePath = "mono_1.flac";
    using (FileStream fileStream = new FileStream(filePath, FileMode.Open))
    {
        //var buffer = new byte[32 * 1024];
        var buffer = new byte[64 * 1024]; // Trying 64 KB buffer
        int bytesRead;
        while ((bytesRead = await fileStream.ReadAsync(
            buffer, 0, buffer.Length)) > 0)
        {
            await streamingCall.WriteAsync(
                new StreamingRecognizeRequest()
                {
                    AudioContent = Google.Protobuf.ByteString
                        .CopyFrom(buffer, 0, bytesRead),
                });
            await Task.Delay(500);
        }
    }
    await streamingCall.WriteCompleteAsync();
    await printResponses;
} // End of Run
Attempts
I've increased the stream to 64 KB of data to be processed at a time, and then I received the following error, as can be seen below:
Which, I believe, means the actual API timed out, which is decidedly a step in the wrong direction. Has anybody encountered a problem such as mine with the Google Speech API when dealing with an audio file with sparse speech? Is there a method by which I can filter the audio down to only spoken words programmatically and then process that? I'm open to suggestions, but my research and attempts have only led me to further breaking my code.
There are two ways to recognize audio in the Google Speech API:
normal recognize
long running recognize
Your sample uses the normal recognize, which has a limit of 15 minutes.
Try using the long running recognize method:
{
    var speech = SpeechClient.Create();
    var longOperation = speech.LongRunningRecognize(new RecognitionConfig()
    {
        Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
        SampleRateHertz = 16000,
        LanguageCode = "hu",
    }, RecognitionAudio.FromFile(filePath));
    longOperation = longOperation.PollUntilCompleted();
    var response = longOperation.Result;
    foreach (var result in response.Results)
    {
        foreach (var alternative in result.Alternatives)
        {
            Console.WriteLine(alternative.Transcript);
        }
    }
    return 0;
}
I hope this helps.
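If you prefer to stay async end to end, the same long-running approach has an awaitable form. Below is a sketch under the assumption that the questioner's FLAC / 22050 Hz / "en" settings and file name are reused:
using System;
using System.Threading.Tasks;
using Google.Cloud.Speech.V1;

class Program
{
    // Hypothetical async sketch of the long-running approach above,
    // reusing the FLAC / 22050 Hz / "en" settings from the question.
    static async Task Main()
    {
        var speech = SpeechClient.Create();
        var operation = await speech.LongRunningRecognizeAsync(
            new RecognitionConfig
            {
                Encoding = RecognitionConfig.Types.AudioEncoding.Flac,
                SampleRateHertz = 22050,
                LanguageCode = "en",
            },
            RecognitionAudio.FromFile("mono_1.flac"));

        operation = await operation.PollUntilCompletedAsync();

        foreach (var result in operation.Result.Results)
            foreach (var alternative in result.Alternatives)
                Console.WriteLine(alternative.Transcript);
    }
}
As far as I know, long-running recognition with inline file content is limited in size and duration, so a 20-minute, ~30 MB recording may need to be uploaded to Cloud Storage and referenced with RecognitionAudio.FromStorageUri instead.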

How to convert SoftwareBitmap from Bgra8 to JPEG in Windows UWP

How can I convert a SoftwareBitmap from Bgra8 to JPEG in Windows UWP? The GetPreviewFrameAsync function is used to get the videoFrame data in Bgra8. What is going wrong in the following code? I am getting a jpeg size of 0.
auto previewProperties = static_cast<MediaProperties::VideoEncodingProperties^>
    (mediaCapture->VideoDeviceController->GetMediaStreamProperties(Capture::MediaStreamType::VideoPreview));
unsigned int videoFrameWidth = previewProperties->Width;
unsigned int videoFrameHeight = previewProperties->Height;
FN_TRACE("%s videoFrameWidth %d videoFrameHeight %d\n",
    __func__, videoFrameWidth, videoFrameHeight);
// Create the video frame to request a SoftwareBitmap preview frame
auto videoFrame = ref new VideoFrame(BitmapPixelFormat::Bgra8, videoFrameWidth, videoFrameHeight);
// Capture the preview frames
return create_task(mediaCapture->GetPreviewFrameAsync(videoFrame))
    .then([this](VideoFrame^ currentFrame)
{
    // Collect the resulting frame
    auto previewFrame = currentFrame->SoftwareBitmap;
    auto inputStream = ref new Streams::InMemoryRandomAccessStream();
    create_task(BitmapEncoder::CreateAsync(BitmapEncoder::JpegEncoderId, inputStream))
        .then([this, previewFrame, inputStream](BitmapEncoder^ encoder)
    {
        encoder->SetSoftwareBitmap(previewFrame);
        encoder->FlushAsync();
        FN_TRACE("jpeg size %d\n", inputStream->Size);
        Streams::Buffer^ data = ref new Streams::Buffer(inputStream->Size);
        create_task(inputStream->ReadAsync(data, (unsigned int)inputStream->Size, InputStreamOptions::None));
    });
});
The BitmapEncoder.FlushAsync() method is an asynchronous method. We should consume it like the following:
// Capture the preview frames
return create_task(mediaCapture->GetPreviewFrameAsync(videoFrame))
    .then([this](VideoFrame^ currentFrame)
{
    // Collect the resulting frame
    auto previewFrame = currentFrame->SoftwareBitmap;
    auto inputStream = ref new Streams::InMemoryRandomAccessStream();
    return create_task(BitmapEncoder::CreateAsync(BitmapEncoder::JpegEncoderId, inputStream))
        .then([this, previewFrame](BitmapEncoder^ encoder)
    {
        encoder->SetSoftwareBitmap(previewFrame);
        return encoder->FlushAsync();
    }).then([this, inputStream]()
    {
        FN_TRACE("jpeg size %d\n", inputStream->Size);
        //TODO
    });
});
Then you should be able to get the right size. For more info, please see Asynchronous programming in C++.
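For readers using C# rather than C++/CX, the same flow is easier to see with async/await. This is an illustrative sketch of the same UWP APIs, with EncodeToJpegAsync as a hypothetical helper name:
using System.Threading.Tasks;
using Windows.Graphics.Imaging;
using Windows.Storage.Streams;

// Illustrative sketch: encode a SoftwareBitmap to JPEG in an in-memory
// stream, then read the encoded bytes back out once the flush has completed.
async Task<byte[]> EncodeToJpegAsync(SoftwareBitmap previewFrame)
{
    using var stream = new InMemoryRandomAccessStream();
    var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, stream);
    encoder.SetSoftwareBitmap(previewFrame);
    await encoder.FlushAsync(); // stream.Size is only meaningful after the flush completes

    var bytes = new byte[stream.Size];
    using var reader = new DataReader(stream.GetInputStreamAt(0));
    await reader.LoadAsync((uint)stream.Size);
    reader.ReadBytes(bytes);
    return bytes;
}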

Xamarin.Android Record Video - Quality Poor

I'm using the following Xamarin tutorial https://developer.xamarin.com/recipes/android/media/video/record_video/
I can successfully record video and audio; however, the quality is not very good. Can anyone suggest/explain how I can increase the quality, please?
I know the device can record in higher quality because the native camera app records in much higher quality.
EDIT: here is my code so far.
protected override void OnCreate(Bundle savedInstanceState)
{
    base.OnCreate(savedInstanceState);
    // Set our view from the "main" layout resource
    SetContentView(Resource.Layout.RecordVideo);
    var record = FindViewById<Button>(Resource.Id.Record);
    var stop = FindViewById<Button>(Resource.Id.Stop);
    var play = FindViewById<Button>(Resource.Id.Play);
    var video = FindViewById<VideoView>(Resource.Id.SampleVideoView);
    var videoPlayback = FindViewById<VideoView>(Resource.Id.PlaybackVideoView);
    string path = Android.OS.Environment.ExternalStorageDirectory.AbsolutePath + "/test.mp4";
    if (Camera.NumberOfCameras < 2)
    {
        Toast.MakeText(this, "Front camera missing", ToastLength.Long).Show();
        return;
    }
    video.Visibility = ViewStates.Visible;
    videoPlayback.Visibility = ViewStates.Gone;
    _camera = Camera.Open(1);
    _camera.SetDisplayOrientation(90);
    _camera.Unlock();
    recorder = new MediaRecorder();
    recorder.SetCamera(_camera);
    recorder.SetAudioSource(AudioSource.Mic);
    recorder.SetVideoSource(VideoSource.Camera);
    recorder.SetOutputFormat(OutputFormat.Default);
    recorder.SetAudioEncoder(AudioEncoder.Default);
    recorder.SetVideoEncoder(VideoEncoder.Default);
    //var cameraProfile = CamcorderProfile.Get(CamcorderQuality.HighSpeed1080p);
    //recorder.SetProfile(cameraProfile);
    recorder.SetOutputFile(path);
    recorder.SetOrientationHint(270);
    recorder.SetPreviewDisplay(video.Holder.Surface);
    record.Click += delegate
    {
        recorder.Prepare();
        recorder.Start();
    };
    stop.Click += delegate
    {
        if (recorder != null)
        {
            video.Visibility = ViewStates.Gone;
            videoPlayback.Visibility = ViewStates.Visible;
            recorder.Stop();
            recorder.Release();
        }
    };
    play.Click += delegate
    {
        video.Visibility = ViewStates.Gone;
        videoPlayback.Visibility = ViewStates.Visible;
        var uri = Android.Net.Uri.Parse(path);
        videoPlayback.SetVideoURI(uri);
        videoPlayback.Start();
    };
}
I don't see the example specifying a CamcorderProfile anywhere, so you might want to start with that. It's possible that the default framerate, bitrate and video frame size are lower than you'd expect. I'm not at a computer right now, but try setting the profile to, for example, QUALITY_1080P using the SetProfile method on MediaRecorder.
You need to set the profile after setting the video and audio sources, but before calling the SetOutputFile method.
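Based on that ordering, the recorder setup from the question might be adjusted roughly as follows. This is an untested sketch that reuses the question's recorder, _camera, path and video fields; CamcorderQuality.Q1080p is just one possible profile, and SetProfile takes the place of the separate SetOutputFormat/SetAudioEncoder/SetVideoEncoder calls:
// Sketch (untested): same setup as the question, but with a CamcorderProfile
// applied after the sources and before SetOutputFile. The profile already
// configures the output format, encoders, bitrate and frame size.
recorder = new MediaRecorder();
recorder.SetCamera(_camera);
recorder.SetAudioSource(AudioSource.Mic);
recorder.SetVideoSource(VideoSource.Camera);
recorder.SetProfile(CamcorderProfile.Get(1, CamcorderQuality.Q1080p)); // camera id 1 = front camera; CamcorderQuality.High is a safe fallback
recorder.SetOutputFile(path);
recorder.SetOrientationHint(270);
recorder.SetPreviewDisplay(video.Holder.Surface);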
