I am using ASP.NET MVC and I have an action that uploads a file. The file is being uploaded properly, but I want the width and height of the image. I think I need to convert the HttpPostedFileBase to an Image first and then proceed. How do I do that?
And please let me know if there is a better way to get the width and height of the image.
I use Image.FromStream as follows:
Image.FromStream(httpPostedFileBase.InputStream, true, true)
Note that the returned Image is IDisposable.
You'll need a reference to System.Drawing.dll for this to work, and Image is in the System.Drawing namespace.
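For the original question (getting the width and height), a minimal sketch could look like this, assuming the action's parameter is named file:
// Assumes "file" is the HttpPostedFileBase parameter of your action.
using (var image = System.Drawing.Image.FromStream(file.InputStream, true, true))
{
    int width = image.Width;
    int height = image.Height;
    // validate or store the dimensions here
}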
Resizing the Image
I'm not sure what you're trying to do, but if you happen to be making thumbnails or something similar, you may be interested in doing something like...
Bitmap bitmap = null;
try {
    bitmap = new Bitmap(newWidth, newHeight);
    using (Graphics g = Graphics.FromImage(bitmap)) {
        g.SmoothingMode = SmoothingMode.HighQuality;
        g.PixelOffsetMode = PixelOffsetMode.HighQuality;
        g.CompositingQuality = CompositingQuality.HighQuality;
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(oldImage,
            new Rectangle(0, 0, newWidth, newHeight),
            clipRectangle, GraphicsUnit.Pixel);
    } // done with drawing on "g"
    return bitmap; // transfer IDisposable ownership
} catch { // error before IDisposable ownership transfer
    if (bitmap != null) bitmap.Dispose();
    throw;
}
where clipRectangle is the rectangle of the original image you wish to scale into the new bitmap (you'll need to deal with aspect ratio yourself). The catch block is the typical IDisposable ownership pattern: you retain ownership of the new IDisposable object until it is returned (you may want to document that with code comments).
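For example, one way to compute the target size while preserving aspect ratio might look like the sketch below (maxWidth/maxHeight are hypothetical limits you choose; oldImage is the source Image):
// Fit oldImage inside maxWidth x maxHeight without distorting it.
double scale = Math.Min((double)maxWidth / oldImage.Width,
                        (double)maxHeight / oldImage.Height);
int newWidth = (int)(oldImage.Width * scale);
int newHeight = (int)(oldImage.Height * scale);
// scale the whole source image, so the clip rectangle covers it entirely
var clipRectangle = new Rectangle(0, 0, oldImage.Width, oldImage.Height);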
Saving as Jpeg
Unfortunately, the default "save as jpeg" encoder doesn't expose any quality controls, and chooses a terribly low default quality.
You can manually select the encoder as well, however, and then you can pass arbitrary parameters:
ImageCodecInfo jpgInfo = ImageCodecInfo.GetImageEncoders()
    .Where(codecInfo => codecInfo.MimeType == "image/jpeg").First();

using (EncoderParameters encParams = new EncoderParameters(1))
{
    // quality should be in the range [0..100]
    encParams.Param[0] = new EncoderParameter(Encoder.Quality, (long)quality);
    image.Save(outputStream, jpgInfo, encParams);
}
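Putting the pieces together, a hedged sketch of resizing an upload and saving it as JPEG could look like this (ResizeImage stands in for the resizing code above and is a hypothetical helper, not a framework method):
using (var source = Image.FromStream(httpPostedFileBase.InputStream, true, true))
using (var resized = ResizeImage(source, 320, 240)) // hypothetical helper, see above
using (var output = new MemoryStream())
{
    var jpgInfo = ImageCodecInfo.GetImageEncoders()
        .First(c => c.MimeType == "image/jpeg");
    using (var encParams = new EncoderParameters(1))
    {
        encParams.Param[0] = new EncoderParameter(Encoder.Quality, 90L); // 0..100
        resized.Save(output, jpgInfo, encParams);
    }
    byte[] jpegBytes = output.ToArray(); // ready to store or return
}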
If you are sure that the source is an image and doesn't need editing, you can do it easily as described here:
[HttpPost]
public void Index(HttpPostedFileBase file)
{
    if (file.ContentLength > 0)
    {
        var filename = Path.GetFileName(file.FileName);
        System.Drawing.Image sourceimage =
            System.Drawing.Image.FromStream(file.InputStream);
        // sourceimage.Width and sourceimage.Height now hold the dimensions
    }
}
To ensure the file is an image, add client-side validation to the view by adding the accept attribute with a MIME type to the input tag
<input type="file" accept="image/*">
and a jQuery validation script
$.validator.addMethod('accept', function () { return true; });
The whole solution can be found here
Related
I have a form to fill in with data, including an image, which then goes into a ListView. When I click a button to pick an image it works and the image appears in the form, but when I click another button to add it to the ListView, this error appears: System.ObjectDisposedException: 'Cannot access a disposed object.
Object name: 'Stream has been closed'.
Thanks for your help.
When I press the add image button:
var ActionPhoto = await DisplayActionSheet("Ajouter une pièce-jointe depuis:", "Annuler", null, "Galerie", "Caméra");
switch (ActionPhoto)
{
    case "Galerie":
        var Galerie = await MediaPicker.PickPhotoAsync(new MediaPickerOptions { Title = "Choisir une image" });
        if (Galerie != null)
        {
            var voirImageGalerie = await Galerie.OpenReadAsync();
            Image_Photos.Source = ImageSource.FromStream(() => voirImageGalerie);
        }
        break;
    case "Caméra":
        var camera = await MediaPicker.CapturePhotoAsync();
        if (camera != null)
        {
            var voirImageCamera = await camera.OpenReadAsync();
            Image_Photos.Source = ImageSource.FromStream(() => voirImageCamera);
        }
        break;
}
When I press the add button of the listView:
App.listePosteNoteFrais.Add(new Data{PostePJ = Image_Photos.Source});
In my Data Class:
public ImageSource PostePJ { get; set; }
What I'm adding to my listview:
<Image x:Name="Image_PostePJ" Source="{Binding PostePJ}" HeightRequest="150" WidthRequest="150" Grid.Row="0" Grid.Column="12"/>
Given code:
ImageSource.FromStream(() => voirImageCamera)
The parameter to FromStream:
() => voirImageCamera
is executed every time the image is needed.
The exception message:
System.ObjectDisposedException: 'Cannot access a disposed object. Object name: 'Stream has been closed.'
is telling you that the stream (voirImageCamera) is no longer usable.
I'm not sure what internal code is disposing that stream. Maybe MediaPicker thinks it's no longer needed. Or maybe it's due to copying from one image source to another. Or something about how/when ListView accesses the image source.
As shown in the doc Xamarin.Essentials: Media Picker / General Usage, a safe way to use a result from MediaPicker's OpenReadAsync is to save the stream to a local file, then use that file as the image source:
// save the file into local storage
var newFile = Path.Combine(FileSystem.CacheDirectory, photo.FileName);

using (var stream = await photo.OpenReadAsync())
using (var newStream = File.OpenWrite(newFile))
    await stream.CopyToAsync(newStream);
Then set the image source to that file:
Image_Photos.Source = ImageSource.FromFile(newFile);
The advantage of FromFile is that it should be able to open the file any time it needs to; there is no stream being held open.
NOTE: The doc example uses CacheDirectory. Depending on the situation, FileSystem.AppDataDirectory might be more appropriate (files that should be kept indefinitely).
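Applied to the question's "Galerie" branch, the approach might look roughly like this (a sketch reusing the question's names):
case "Galerie":
    var photo = await MediaPicker.PickPhotoAsync(new MediaPickerOptions { Title = "Choisir une image" });
    if (photo != null)
    {
        // copy the picked photo into local storage so no stream has to stay open
        var newFile = Path.Combine(FileSystem.CacheDirectory, photo.FileName);
        using (var stream = await photo.OpenReadAsync())
        using (var newStream = File.OpenWrite(newFile))
            await stream.CopyToAsync(newStream);

        // the file can be re-opened whenever the ListView needs to render it
        Image_Photos.Source = ImageSource.FromFile(newFile);
    }
    break;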
I am pretty new to Unity, so this might be an easy one.
I am trying to load an image from a URL into an Image in my application. In my application I have many different images, but for some reason all of them change to the image loaded from my URL.
I have made a component called LoadImage and added it only to the one image I want to change. My code for loading the image looks like this:
public class LoadImage : MonoBehaviour
{
    public Image img;

    // Use this for initialization
    void Start()
    {
        DownloadViaURL();
    }

    void DownloadViaURL()
    {
        Debug.Log("Called DownloadViaURL");
        FirebaseDatabase.DefaultInstance
            .GetReference("Child1").Child("Child2").Child("ImageURL")
            .GetValueAsync().ContinueWith(task =>
            {
                Debug.Log("Default Instance entered");
                if (task.IsFaulted)
                {
                    Debug.Log("Error retrieving data from server");
                }
                else if (task.IsCompleted)
                {
                    DataSnapshot snapshot = task.Result;
                    string data_URL = snapshot.GetValue(true).ToString();
                    //Start coroutine to download image
                    StartCoroutine(AccessURL(data_URL));
                }
            });
    }

    IEnumerator AccessURL(string url)
    {
        using (WWW www = new WWW(url))
        {
            yield return www;
            www.LoadImageIntoTexture(img.mainTexture as Texture2D);
            Debug.Log("Texture URL: " + www.url);
        }
    }
}
And I have then assigned the image to the public Image img field.
Can anyone tell me why Unity loads the image into all image views in my application instead of just the one?
You say
I have many different images,
but I guess you mean different Image components, where you very probably referenced the same texture asset multiple times.
So what your code actually does is overwrite whatever texture asset is referenced by your image => it is also changed in all other images/materials etc. that reference the same texture asset.
You should rather create a new texture, load the data into it and change the image's texture reference:
// Create a new texture
// (size values don't matter because the texture will be overwritten)
var newTexture = new Texture2D(2, 2);

// Load the downloaded image into the new texture
www.LoadImageIntoTexture(newTexture);

// Use the new texture; a UI Image takes it via a Sprite (Image.mainTexture is read-only)
img.sprite = Sprite.Create(newTexture,
    new Rect(0, 0, newTexture.width, newTexture.height),
    new Vector2(0.5f, 0.5f));
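In the question's coroutine, this replaces the LoadImageIntoTexture(img.mainTexture as Texture2D) call, roughly like so (a sketch assuming the same field names as the question):
IEnumerator AccessURL(string url)
{
    using (WWW www = new WWW(url))
    {
        yield return www;

        // create a texture of our own instead of writing into the shared asset
        var newTexture = new Texture2D(2, 2);
        www.LoadImageIntoTexture(newTexture);

        // assign it to this Image only (via a Sprite, since mainTexture is read-only)
        img.sprite = Sprite.Create(newTexture,
            new Rect(0, 0, newTexture.width, newTexture.height),
            new Vector2(0.5f, 0.5f));
    }
}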
The library takes a maxSize parameter for scaling, which applies to the longer of the two dimensions. It seems the workaround to scale by one dimension is to manually run scaleImage() in an onSubmitted callback, calculating what maxSize should be from the original image size to get a result with the desired height or width, but this has its own hurdles:
It makes sense that using addFiles() inside an onSubmitted callback triggers another onSubmitted event; but if I use addFiles() to add the thumbnail, the thumbnail shows up in the UI list, and this triggers another onSubmitted, which generates another thumbnail, and so on in a loop.
I need to generate a thumbnail constrained to a maxHeight of 240 pixels and a maxWidth of 320 pixels and upload it to a separate S3 bucket when uploadStoredFiles() is called, without triggering another onSubmitted event and without showing the thumbnail as a "duplicate" entry in the UI file list. What is the best way to do this in Fine Uploader?
Some sample code:
function makeThumbnail() {
    // FIXME to avoid duplicate, put this in the compression success
    var thumbnailPromise = uploader.scaleImage(id, {
        maxSize: 123,
        quality: 45,
        customResizer: !qq.ios() && function (resizeInfo) {
            return new Promise(function (resolve, reject) {
                pica.resizeCanvas(resizeInfo.sourceCanvas, resizeInfo.targetCanvas, {}, resolve);
            });
        }
    });

    thumbnailPromise.then(
        function (blob) {
            console.log(URL.createObjectURL(blob));
            uploader.addFiles(blob);
        },
        function (err) {
        }
    );
}
Before you pass a scaled Blob into addFiles, simply add a custom property to the Blob object, something like blob.myScaledImage = true. Then, when handling an onSubmitted callback, retrieve the associated Blob using the getFile API method. If the Blob contains your custom property, don't re-scale it.
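A rough sketch of that flow, reusing the names from the question's sample (uploader, makeThumbnail) and the marker property suggested above:
// Mark the scaled Blob before handing it back to the uploader...
function addThumbnail(blob) {
    blob.myScaledImage = true;   // custom marker property
    uploader.addFiles(blob);
}

// ...and skip marked blobs when onSubmitted fires again.
function handleSubmitted(id, name) {
    var file = uploader.getFile(id);
    if (file && file.myScaledImage) {
        return;            // this is our own thumbnail; don't re-scale it
    }
    makeThumbnail(id);     // the scaling function from the sample above
}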
The problem is that I am opening the workflow designer dynamically from a shell application and I don't have a reference to the Canvas. I am able to save the WF4 workflow as an image, but the image is not saved properly and contains left & top margins. I followed many articles to get it working but with no success. I referred to the following article as well.
Saving a canvas to png C# wpf
I am using the function below. I don't have any reference to the canvas.
private BitmapFrame CreateWorkflowImage()
{
    const double DPI = 96.0;
    Visual areaToSave = ((DesignerView)VisualTreeHelper.GetChild(this.wd.View, 0)).RootDesigner;
    Rect bounds = VisualTreeHelper.GetDescendantBounds(areaToSave);
    RenderTargetBitmap bitmap = new RenderTargetBitmap((int)bounds.Width,
        (int)bounds.Height, DPI, DPI, PixelFormats.Default);
    bitmap.Render(areaToSave);
    return BitmapFrame.Create(bitmap);
}
Please help on this.
I was able to resolve the issue by referring again to the following link
Saving a canvas to png C# wpf
I got the reference to the canvas by using the following code:
Visual canvas= ((DesignerView)VisualTreeHelper.GetChild(this.WorkflowDesigner1.View, 0)).RootDesigner;
This has resolved the border/margin issue.
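If a left/top offset still shows up, a common WPF workaround (not specific to the workflow designer) is to paint the visual through a VisualBrush so its content is rendered from the origin; a sketch using the names from the question:
private BitmapFrame CreateWorkflowImage()
{
    const double DPI = 96.0;

    Visual areaToSave = ((DesignerView)VisualTreeHelper.GetChild(this.wd.View, 0)).RootDesigner;
    Rect bounds = VisualTreeHelper.GetDescendantBounds(areaToSave);

    // draw the visual into a DrawingVisual so it starts at (0,0), regardless of margins
    var drawingVisual = new DrawingVisual();
    using (DrawingContext ctx = drawingVisual.RenderOpen())
    {
        ctx.DrawRectangle(new VisualBrush(areaToSave), null, new Rect(new Point(), bounds.Size));
    }

    var bitmap = new RenderTargetBitmap((int)bounds.Width, (int)bounds.Height,
        DPI, DPI, PixelFormats.Pbgra32);
    bitmap.Render(drawingVisual);
    return BitmapFrame.Create(bitmap);
}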
Please have a look here: http://blogs.msdn.com/b/flow/archive/2011/08/16/how-to-save-wf4-workflow-definition-to-image-using-code.aspx
Let's look at how to generate an image of the workflow definition using the standard mechanism of WPF. After all, the workflow designer canvas is a WPF control.
BitmapFrame CreateWorkflowDefinitionImage()
{
    const double DPI = 96.0;

    // this is the designer area we want to save
    Visual areaToSave = ((DesignerView)VisualTreeHelper.GetChild(
        this.workflowDesigner.View, 0)).RootDesigner;

    // get the size of the targeted area
    Rect size = VisualTreeHelper.GetDescendantBounds(areaToSave);

    RenderTargetBitmap bitmap = new RenderTargetBitmap((int)size.Width, (int)size.Height,
        DPI, DPI, PixelFormats.Pbgra32);
    bitmap.Render(areaToSave);
    return BitmapFrame.Create(bitmap);
}
The above C# method is very straightforward. Just get the workflow diagram part of the workflow designer and create an in-memory image of it using some WPF API. The next thing is simple: create a file and save the image.
void SaveImageToFile(string fileName, BitmapFrame image)
{
    using (FileStream fs = new FileStream(fileName, FileMode.Create))
    {
        BitmapEncoder encoder = new JpegBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(image));
        encoder.Save(fs);
        fs.Close();
    }
}
Finally, let's invoke the above two methods in the OnInitialized() method to hook it up, and then close the application.
protected override void OnInitialized(EventArgs e)
{
    // ...
    this.SaveImageToFile("test.jpg", this.CreateWorkflowDefinitionImage());
    Application.Current.Shutdown();
}
I would like to validate by file dimensions (resolution).
On the documentation page there is only information regarding file name and size; there is nothing at all in the docs about dimensions, and I also had no luck on Google.
The purpose of this is that I don't want users to upload low-res photos to my server. Thanks.
As Ray Nicholus suggested, use the getFile method to get the File object and then use it with the internal qq.ImageValidation object to run Fine Uploader's validation on the file. A promise must be returned because this process is async.
function onSubmit(e, id, filename) {
    var promise = validateByDimensions(id, [1024, 600]);
    return promise;
}

function validateByDimensions(id, dimensionsArr) {
    var deferred = new $.Deferred(),
        file = uploaderElm.fineUploader('getFile', id),
        imageValidator = new qq.ImageValidation(file, function () {}),
        result = imageValidator.validate({
            minWidth: dimensionsArr[0],
            minHeight: dimensionsArr[1]
        });

    result.done(function (status) {
        if (status)
            deferred.reject();
        else
            deferred.resolve();
    });

    return deferred.promise();
}
Remaining question:
Now I wonder how to show the thumbnail of the image that was rejected, while not uploading it to the server. The UI could mark it in a different color as an "invalid image", yet the user could see which images were valid and which weren't...
- Update - (regarding the question above)
While I do not see how I could keep the default behavior of a thumbnail being added to the uploader without it being uploaded, there is a way to generate a thumbnail manually, like so:
var img = new Image();
uploaderElm.fineUploader("drawThumbnail", id, img, 200, false);
but then I'll have to create an item to insert into the qq-upload-list myself, and handle it all myself... still, it's not so hard.
Update (get even more control over dimensions validation)
You will (currently) have to edit the qq.ImageValidation function to expose the private getWidthHeight function. Just change that function declaration to:
this.getWidthHeight = function(){
Also, it would be even better to change the this.validate function to:
this.validate = function (limits) {
    var validationEffort = new qq.Promise();

    log("Attempting to validate image.");

    if (hasNonZeroLimits(limits)) {
        this.getWidthHeight().done(function (dimensions) {
            var failingLimit = getFailingLimit(limits, dimensions);

            if (failingLimit) {
                validationEffort.failure({ fail: failingLimit, dimensions: dimensions });
            }
            else {
                validationEffort.success({ dimensions: dimensions });
            }
        }, validationEffort.success);
    }
    else {
        validationEffort.success();
    }

    return validationEffort;
};
So you would get the failure reason as well as the dimensions; it's always nice to have more control.
Now, we could write the custom validation like this:
function validateFileDimensions(dimensionsLimits) {
    var deferred = new $.Deferred(),
        file = this.holderElm.fineUploader('getFile', id),
        imageValidator = new qq.ImageValidation(file, function () {});

    imageValidator.getWidthHeight().done(function (dimensions) {
        var minWidth = dimensions.width > dimensionsLimits.width,
            minHeight = dimensions.height > dimensionsLimits.height;

        // if min-width or min-height satisfies the limits, approve the image
        if (minWidth || minHeight)
            deferred.resolve();
        else
            deferred.reject();
    });

    return deferred.promise();
}
This approach gives much more flexibility. For example, if you wanted different validation for portrait images than for landscape ones, you could easily identify the image orientation and run your own custom code to do whatever you need.
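For instance, a rough sketch of orientation-aware validation using the exposed getWidthHeight() (the limits here are made-up examples):
function validateByOrientation(id) {
    var deferred = new $.Deferred(),
        file = uploaderElm.fineUploader('getFile', id),
        imageValidator = new qq.ImageValidation(file, function () {});

    imageValidator.getWidthHeight().done(function (dimensions) {
        var isLandscape = dimensions.width >= dimensions.height,
            // example limits only; use whatever fits your case
            ok = isLandscape ? dimensions.width >= 1024
                             : dimensions.height >= 768;
        if (ok)
            deferred.resolve();
        else
            deferred.reject();
    });

    return deferred.promise();
}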