I am working with Xamarin.iOS and use UIImagePickerController to record video. I can get the file path of the video in the sandbox, but I want to calculate the size of the video before uploading (over 20 MB will be rejected). I'm not familiar with native iOS (Objective-C and Swift) and can only get the duration. So, how do I get the size (in MB) of the video?
You can use the NSFileManager class. Try the following code:
public double GetFileSize(NSString filepath)
{
    NSFileManager fileManager = NSFileManager.DefaultManager;
    if (fileManager.FileExists(filepath))
    {
        // Size is reported in bytes (and is nullable in Xamarin.iOS)
        double filesize = (double)(fileManager.GetAttributes(filepath).Size ?? 0);
        return filesize / (1024 * 1024); // convert bytes to MB
    }
    else
    {
        Console.Write("file cannot be found");
        return 0;
    }
}
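As a usage sketch, assuming the video comes from the picker's FinishedPickingMedia event (the wiring is illustrative; the 20 MB check mirrors the question's limit):

picker.FinishedPickingMedia += (sender, e) =>
{
    // e.MediaUrl points at the recorded video inside the sandbox
    double sizeInMb = GetFileSize(new NSString(e.MediaUrl.Path));
    if (sizeInMb > 20)
    {
        // over the 20 MB limit: refuse the upload
    }
};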
I am using the following sample to resize uploaded images with Blazor WebAssembly:
https://www.prowaretech.com/Computer/Blazor/Examples/WebApi/UploadImages
I also need the original file converted to base64, but I don't know how to access it. I tried to find the file's original width and height to pass to the RequestImageFileAsync function, but with no success. I need to store both files: the original one and the resized one.
Can you help me, please? Thank you very much!
The InputFile control emits an IBrowserFile type. RequestImageFileAsync is a convenience method on IBrowserFile to resize the image and convert the type. The result is still an IBrowserFile.
One way to do what you are asking is with SixLabors.ImageSharp. Based on the ProWareTech example, something like this...
async Task OnChange(InputFileChangeEventArgs e)
{
    var files = e.GetMultipleFiles(); // get the files selected by the user
    foreach (var file in files)
    {
        // Original-sized file
        var buf1 = new byte[file.Size];
        using (var stream = file.OpenReadStream())
        {
            await stream.ReadAsync(buf1); // copy the stream to the buffer
        }
        origFilesBase64.Add(new ImageFile { base64data = Convert.ToBase64String(buf1), contentType = file.ContentType, fileName = file.Name }); // store as a base64 string

        // Resized file
        var resizedFile = await file.RequestImageFileAsync(file.ContentType, 640, 480); // resize the image file
        var buf = new byte[resizedFile.Size]; // allocate a buffer to fill with the file's data
        using (var stream = resizedFile.OpenReadStream())
        {
            await stream.ReadAsync(buf); // copy the stream to the buffer
        }
        filesBase64.Add(new ImageFile { base64data = Convert.ToBase64String(buf), contentType = file.ContentType, fileName = file.Name }); // store as a base64 string
    }

    // To get the image sizes for the first image, decode the base64 data
    // back to bytes and load it with SixLabors.ImageSharp
    byte[] origBytes = Convert.FromBase64String(origFilesBase64[0].base64data);
    using (Image origImage = Image.Load(origBytes))
    {
        int origImgHeight = origImage.Height;
        int origImgWidth = origImage.Width;
    }

    byte[] resizedBytes = Convert.FromBase64String(filesBase64[0].base64data);
    using (Image resizedImage = Image.Load(resizedBytes))
    {
        int resizedImgHeight = resizedImage.Height;
        int resizedImgWidth = resizedImage.Width;
    }
}
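For reference, the snippet assumes an ImageFile class and the two lists from the ProWareTech sample; a minimal sketch of those assumed declarations (names mirror the code above):

public class ImageFile
{
    public string base64data { get; set; }
    public string contentType { get; set; }
    public string fileName { get; set; }
}

List<ImageFile> origFilesBase64 = new List<ImageFile>();
List<ImageFile> filesBase64 = new List<ImageFile>();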
How do I convert a SoftwareBitmap from Bgra8 to JPEG in Windows UWP? The GetPreviewFrameAsync function is used to get videoFrame data in Bgra8. What is going wrong in the following code? I am getting a JPEG size of 0.
auto previewProperties = static_cast<MediaProperties::VideoEncodingProperties^>(
    mediaCapture->VideoDeviceController->GetMediaStreamProperties(Capture::MediaStreamType::VideoPreview));
unsigned int videoFrameWidth = previewProperties->Width;
unsigned int videoFrameHeight = previewProperties->Height;
FN_TRACE("%s videoFrameWidth %d videoFrameHeight %d\n",
         __func__, videoFrameWidth, videoFrameHeight);

// Create the video frame to request a SoftwareBitmap preview frame
auto videoFrame = ref new VideoFrame(BitmapPixelFormat::Bgra8, videoFrameWidth, videoFrameHeight);

// Capture the preview frames
return create_task(mediaCapture->GetPreviewFrameAsync(videoFrame))
    .then([this](VideoFrame^ currentFrame)
    {
        // Collect the resulting frame
        auto previewFrame = currentFrame->SoftwareBitmap;
        auto inputStream = ref new Streams::InMemoryRandomAccessStream();
        create_task(BitmapEncoder::CreateAsync(BitmapEncoder::JpegEncoderId, inputStream))
            .then([this, previewFrame, inputStream](BitmapEncoder^ encoder)
            {
                encoder->SetSoftwareBitmap(previewFrame);
                encoder->FlushAsync();
                FN_TRACE("jpeg size %d\n", inputStream->Size);
                Streams::Buffer^ data = ref new Streams::Buffer(inputStream->Size);
                create_task(inputStream->ReadAsync(data, (unsigned int)inputStream->Size, InputStreamOptions::None));
            });
    });
The BitmapEncoder.FlushAsync() method is asynchronous, so your code reads inputStream->Size before the encoder has finished flushing. We should consume it like the following:
// Capture the preview frames
return create_task(mediaCapture->GetPreviewFrameAsync(videoFrame))
    .then([this](VideoFrame^ currentFrame)
    {
        // Collect the resulting frame
        auto previewFrame = currentFrame->SoftwareBitmap;
        auto inputStream = ref new Streams::InMemoryRandomAccessStream();
        return create_task(BitmapEncoder::CreateAsync(BitmapEncoder::JpegEncoderId, inputStream))
            .then([this, previewFrame](BitmapEncoder^ encoder)
            {
                encoder->SetSoftwareBitmap(previewFrame);
                return encoder->FlushAsync(); // return the task so the next continuation waits for the flush
            }).then([this, inputStream]()
            {
                FN_TRACE("jpeg size %d\n", inputStream->Size);
                //TODO
            });
    });
Then you should be able to get the right size. For more info, please see Asynchronous programming in C++.
I am reading the color frame from a Kinect V2 sensor using the Microsoft Kinect SDK v2. I copy the frame data into a byte array, which is later converted into an EmguCV Image. Below is a snippet from the code:
// A pixel buffer to hold image data from the incoming color frame
private byte[] pixels = null;
private KinectSensor kinectSensor = null;
private ColorFrameReader colorFrameReader = null;

public KinectForm()
{
    this.kinectSensor = KinectSensor.GetDefault();
    this.colorFrameReader = this.kinectSensor.ColorFrameSource.OpenReader();
    this.colorFrameReader.FrameArrived += this.Reader_ColorFrameArrived;

    // Create the colorFrameDescription from the ColorFrameSource using the Bgra format
    FrameDescription colorFrameDescription = this.kinectSensor.ColorFrameSource.CreateFrameDescription(ColorImageFormat.Bgra);

    // Create a pixel buffer to hold the frame's image data as a byte array
    this.pixels = new byte[colorFrameDescription.Width * colorFrameDescription.Height * colorFrameDescription.BytesPerPixel];

    // Open the sensor
    this.kinectSensor.Open();

    InitializeComponent();
}

private void Reader_ColorFrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
    using (ColorFrame colorFrame = e.FrameReference.AcquireFrame())
    {
        if (colorFrame != null)
        {
            FrameDescription colorFrameDescription = colorFrame.FrameDescription;
            if (colorFrame.RawColorImageFormat == ColorImageFormat.Bgra)
                colorFrame.CopyRawFrameDataToArray(pixels);
            else
                colorFrame.CopyConvertedFrameDataToArray(this.pixels, ColorImageFormat.Bgra);

            // Initialize an Emgu CV image, then assign the byte array of pixels to it
            Image<Bgr, byte> img = new Image<Bgr, byte>(colorFrameDescription.Width, colorFrameDescription.Height);
            img.Bytes = pixels;
            imgBox.Image = img; // show image in Emgu.CV.UI.ImageBox
        }
    }
}
The converted image is corrupted when zooming beyond 25%. (Screenshots at 50%, 25%, and 12.5% zoom are not reproduced here.)
The <TColor> should be Bgra; you can confirm this from the "pixels" byte array, which holds four bytes per pixel, so Image<Bgr, byte> misreads the layout.
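A minimal sketch of the corrected construction, reusing the names from your handler:

// Bgra matches the four-bytes-per-pixel layout produced by CopyConvertedFrameDataToArray
Image<Bgra, byte> img = new Image<Bgra, byte>(colorFrameDescription.Width, colorFrameDescription.Height);
img.Bytes = pixels;
imgBox.Image = img;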
Edit:
Since the alpha byte of each Bgra pixel arrives as 0, the image won't be visible, so I added this code to fix it. I hope there is a better (faster) way to solve it.
private void FixIntensity(byte[] p)
{
    int i = 0;
    while (i < p.Length)
    {
        p[i + 3] = 255; // set the alpha byte of each Bgra pixel to fully opaque
        i += 4;
    }
}
In my app I have a list of pictures. I would like to determine the size of the individual images, but there is no such property.
using (MediaLibrary library = new MediaLibrary())
{
    CameraRollAlbum = library.RootPictureAlbum.Albums.First((album) => album.Name == "Camera Roll");
    List<Picture> pictures = CameraRollAlbum.Pictures.ToList();
    foreach (Picture pic in pictures)
    {
        // pic.Size? pic.Length? (no such property exists)
    }
}
Get the image stream and evaluate its size:
foreach (Picture pic in pictures)
{
    var sizeInKb = pic.GetImage().Length / 1024;
}
http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.media.picture_members.aspx
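Note that GetImage() returns a Stream, which should be disposed; a slightly safer variant of the same idea:

foreach (Picture pic in pictures)
{
    using (var stream = pic.GetImage())
    {
        var sizeInKb = stream.Length / 1024; // Stream.Length is in bytes
    }
}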
I'm trying to ascertain whether there is a limitation on camera access in the J2ME implementation on the HTC Touch2. The native camera is 3 MP, but the quality is notably reduced when accessed via J2ME; in fact, it seems the only size and format the getSnapshot() method is able to return is a 240x320 pixel JPEG. I'm trying to confirm that this is a limitation of the J2ME implementation and not my coding. Here's an example of some of the things I have tried:
private void showCamera() {
    try {
        mPlayer = Manager.createPlayer("capture://video");
        // mPlayer = Manager.createPlayer("capture://video&encoding=rgb565&width=640&height=480");
        mPlayer.realize();
        mVideoControl = (VideoControl) mPlayer.getControl("VideoControl");
        canvas = new CameraCanvas(this, mVideoControl);
        canvas.addCommand(mBackCommand);
        canvas.addCommand(mCaptureCommand);
        canvas.setCommandListener(this);
        mDisplay.setCurrent(canvas);
        mPlayer.start();
    } catch (Exception ex) {}
}

public void capture() {
    try {
        // Get the image.
        byte[] raw = mVideoControl.getSnapshot("encoding=jpeg&quality=100&width=640&height=480");
        // byte[] raw = mVideoControl.getSnapshot("encoding=png&quality=100&width=640&height=480");
        // byte[] raw = mVideoControl.getSnapshot(null);
        Image image = Image.createImage(raw, 0, raw.length);
        // Image thumb = createThumbnail(image);

        // Place it in the main form.
        if (mMainForm.size() > 0 && mMainForm.get(0) instanceof StringItem)
            mMainForm.delete(0);
        mMainForm.append(image);
    } catch (Exception ex) {}
}
If anyone could help, it would be much appreciated.
I have received word from a number of sources that there is indeed a limitation on the camera access available to the JVM, which is put in place by the operating system.