How to upload an image on the Windows Azure platform: best approach - asp.net-mvc-3

I have a registration form with an image upload, and it doesn't work when I deploy my application package to my Windows Azure server.
The image address on the server looks like this:
F:\sitesroot\0\Uploads\Users\9259826_2121813246965_1294840438_2490950_6619588_n.jpg
If I could get the image URL in its relative form instead, like this:
http://dealma.cloudapp.net/Uploads/Users/9259826_2121813246965_1294840438_2490950_6619588_n.jpg
the problem would be solved.
The current code I'm using to upload is this:
if (userImg != null && userImg.ContentLength > 0)
{
    try
    {
        var fileName = Url.Encode(userImg.FileName);
        // don't overwrite existing files
        var pathToCheck = Server.MapPath("~/Uploads/Users/" + fileName);
        var savePath = Server.MapPath("~/Uploads/Users/");
        var tempfileName = fileName;
        int counter = 2;
        while (System.IO.File.Exists(pathToCheck))
        {
            tempfileName = counter.ToString() + fileName;
            pathToCheck = savePath + tempfileName;
            counter++;
        }
        fileName = tempfileName;
        var finalImg = Path.Combine(savePath, fileName);
        userImg.SaveAs(finalImg);
        // image name
        userSet.Picture = finalImg;
        userSet.Thumbnail = finalImg;
    }
    catch (Exception ex)
    {
        Response.Write("Could not upload the file: " + ex.Message);
    }
}
Does anyone know how to solve this problem?

As corvus stated, you are writing to "local storage" which is volatile and not shared across multiple instances of your virtual machine.
Blob storage lets you store arbitrary files, images, etc. Each item gets stored in its own blob. You also have the notion of a "container" - think of it as a top-level directory folder. There are no nested containers, but you can emulate them with path characters in the name (skip this for now, as you need a quick solution).
If you download the Windows Azure Platform Training Kit and look at the lab "Introduction to Cloud Services", it shows a Guestbook application where photos are uploaded to blob storage. You will see how to set up a storage account, as well as how to write the code to push your file to a blob instead of the local file system. Here's a snippet from the sample:
Initialize the blob client, and set up a container to store your files:
var storageAccount =
CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
// create blob container for images
blobStorage = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobStorage.GetContainerReference("uploads");
container.CreateIfNotExist();
Now, in your upload handler, you'd write to a blob instead of local file system:
string uniqueBlobName = string.Format("uploads/image_{0}{1}",
Guid.NewGuid(), Path.GetExtension(UserImg.FileName));
CloudBlockBlob blob = blobStorage.GetBlockBlobReference(uniqueBlobName);
blob.Properties.ContentType = UserImg.PostedFile.ContentType;
// note: there are several blob upload methods -
// choose the best one that fits your app
blob.UploadFromStream(UserImg.FileContent);
You'll see the full working sample once you download the Platform Training Kit.
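One thing the sample doesn't show is what to store back on your entity instead of the local path. Here's a small sketch continuing the snippet above; the SetPermissions call is an assumption on my part (only needed if you want the blob URL to be publicly browsable), and userSet is the object from your question:
// sketch: make the container publicly readable (one-time setup), then store the blob's
// public address instead of a local file path
container.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });
string imageUrl = blob.Uri.ToString();   // e.g. http://<youraccount>.blob.core.windows.net/uploads/image_....jpg
userSet.Picture = imageUrl;
userSet.Thumbnail = imageUrl;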

You are trying to save the image to the virtual machine where the web role instance handling your request resides.
There is probably more than one web role instance in your application, so the file gets saved on one machine, but the next request is served by another instance, on a virtual machine that doesn't have this file.
So it's a good idea to save all data that needs to be accessible from any web role instance to blobs. If you have some static data, you can put it into the package with your web role; all other data should reside in blobs.
If you don't want to modify the code of your application, you can map a part of blob storage as another hard drive on every instance of your web role. In that case, you just write the received data to this mapped disk, and the results will be accessible from any web role.
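That drive-mapping option is the Windows Azure Drive feature from the 1.x SDK era. A rough, heavily hedged sketch of mounting one follows; the resource name "AzureDriveCache", the container/VHD names and the sizes are placeholders, and exact method signatures vary between SDK versions:
// Sketch only - Windows Azure Drive (SDK 1.x). A page blob is mounted as an NTFS drive letter.
LocalResource cache = RoleEnvironment.GetLocalResource("AzureDriveCache");   // local resource declared in ServiceDefinition.csdef
CloudDrive.InitializeCache(cache.RootPath, cache.MaximumSizeInMegabytes);

var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
var blobClient = account.CreateCloudBlobClient();
CloudDrive drive = account.CreateCloudDrive(
    blobClient.GetContainerReference("drives").GetPageBlobReference("uploads.vhd").Uri.ToString());
drive.CreateIfNotExist(64);                                   // VHD size in MB
string drivePath = drive.Mount(25, DriveMountOptions.None);   // e.g. "a:\" - write your uploads under this path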

Related

How to create a shared access signature URL for my blob file in Azure Data Lake Gen2 in C#

I'm using the Azure.Storage.Files.DataLake NuGet package to write and append files on my Azure Storage account that is enabled for Data Lake Gen2 (including the hierarchical namespace).
However, I can't seem to find how to generate a SAS URL to access a specific blob without authenticating a user. Is it possible to do this with the package, or should I fall back to REST operations for this?
Thanks for any insights
It seems I just had to click through to the GitHub page; there's a good example of how to do this in the unit tests.
I copied the permalink to the specific part here:
https://github.com/Azure/azure-sdk-for-net/blob/89955a90641742a2cdb0acd924f90d02b1be34ec/sdk/storage/Azure.Storage.Files.DataLake/samples/Sample02_Auth.cs#L126
AccountSasBuilder sas = new AccountSasBuilder
{
    Protocol = SasProtocol.None,
    Services = AccountSasServices.Blobs,
    ResourceTypes = AccountSasResourceTypes.All,
    StartsOn = DateTimeOffset.UtcNow.AddHours(-1),
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1),
    IPRange = new SasIPRange(IPAddress.None, IPAddress.None)
};
// Allow list access
sas.SetPermissions(AccountSasPermissions.List);
// Create a SharedKeyCredential that we can use to sign the SAS token
StorageSharedKeyCredential credential = new StorageSharedKeyCredential(StorageAccountName, StorageAccountKey);
// Build a SAS URI
UriBuilder sasUri = new UriBuilder(StorageAccountBlobUri);
sasUri.Query = sas.ToSasQueryParameters(credential).ToString();
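The snippet above builds an account-level SAS. For a SAS scoped to a single file, as the question asks, something along these lines should work - a sketch using BlobSasBuilder from the sibling Azure.Storage.Blobs package (the filesystem, path and account names below are placeholders; the DataLake package also exposes a DataLakeSasBuilder you could use analogously):
// Sketch: a read-only SAS for one specific blob/file (names are placeholders)
var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = "my-filesystem",                 // the Gen2 filesystem is exposed as a container
    BlobName = "folder/myfile.txt",
    Resource = "b",                                      // "b" = blob-level SAS
    StartsOn = DateTimeOffset.UtcNow.AddMinutes(-5),
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
};
sasBuilder.SetPermissions(BlobSasPermissions.Read);

var credential = new StorageSharedKeyCredential(StorageAccountName, StorageAccountKey);
var sasUri = new UriBuilder(
    $"https://{StorageAccountName}.blob.core.windows.net/my-filesystem/folder/myfile.txt")
{
    Query = sasBuilder.ToSasQueryParameters(credential).ToString()
};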

Setting Google Photos date

I uploaded many photos with no EXIF data, but with their date in the file name. Google Photos used the upload date to sort them; I'd like to use the date in their names instead.
So far I've tried using the Drive API to change the modification date: I can change it, but it isn't used. I also tried to modify imageMediaMetadata.date, but it seems to be read-only.
Code:
function myFunction() {
  var files = DriveApp.getFilesByName("IMG-20150402-WA0002_1.jpg");
  while (files.hasNext()) {
    var file = files.next();
    var name = file.getName().toUpperCase();
    if (name.indexOf("-WA") > -1) {
      if (name.indexOf("IMG-20") == 0 || name.indexOf("VID-20") == 0) {
        var y = name.substr(4, 4);
        var m = name.substr(8, 2);
        var d = name.substr(10, 2);
        var file2 = Drive.Files.get(file.getId());
        file2.imageMediaMetadata.date = y + "-" + m + "-" + d + "T12:00:00.000Z";
        var file3 = Drive.Files.patch(file2, file.getId());
        Logger.log(name + " no ok " + file3.imageMediaMetadata.date); // same as file2
      }
    }
  }
}
I could delete them, modify the original files and re-upload, but before that I'd like to be sure there is no other way.
Thank you.
Perhaps you could programmatically write an EXIF header to the files?
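One hedged sketch of that idea, done offline in C# with System.Drawing before re-uploading the files; the tag written is EXIF DateTimeOriginal (id 0x9003), and the class name and paths are purely illustrative:
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.Serialization;
using System.Text;

static class ExifDateWriter
{
    // Writes the "date taken" EXIF tag and saves a copy of the JPEG.
    public static void SetDateTaken(string sourcePath, string targetPath, DateTime taken)
    {
        using (var image = Image.FromFile(sourcePath))
        {
            // PropertyItem has no public constructor, so create an uninitialized instance.
            var item = (PropertyItem)FormatterServices.GetUninitializedObject(typeof(PropertyItem));
            item.Id = 0x9003;                          // EXIF DateTimeOriginal
            item.Type = 2;                             // ASCII
            item.Value = Encoding.ASCII.GetBytes(taken.ToString("yyyy:MM:dd HH:mm:ss") + "\0");
            item.Len = item.Value.Length;
            image.SetPropertyItem(item);
            image.Save(targetPath, ImageFormat.Jpeg);  // note: this re-encodes the JPEG
        }
    }
}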
I am also looking for a convenient way to supply the photo date when uploading old photos using the new-ish Google Photos API. My photos do not necessarily have EXIF data. I tried setting the creation/last-modified date on one of my JPEGs on my macOS machine's disk, then manually uploading it via the Google Photos web interface, and the date of the JPEG file on local disk becomes the photo's date in Google Photos, as expected. The file has no EXIF data, and if it did, it would not contain that same date, so apparently the Google Photos web uploader respects the local filesystem date for the photo.
I then tried to sniff the traffic using Charles Proxy, but apparently the web interface does not use the Google Photos API the same way that external developers would -- it doesn't POST to https://photoslibrary.googleapis.com/v1/uploads, or so it seems. So I couldn't reverse-engineer that process, and I couldn't see where the file creation date was passed in.
What would be great is an HTTP header in the upload POST request to set this date. I don't see the batchCreate method (https://developers.google.com/photos/library/reference/rest/v1/mediaItems/batchCreate) having any means of setting this.

Direct (and simple!) AJAX upload to AWS S3 from (AngularJS) Single Page App

I know there's been a lot of coverage on upload to AWS S3. However, I've been struggling with this for about 24 hours now and I have not found any answer that fits my situation.
What I'm trying to do
Upload a file to AWS S3 directly from my client to my S3 bucket. The situation is:
It's a Single Page App, so upload request must be in AJAX
My server and my client are not on the same domain
The S3 bucket is of the newest sort (Frankfurt), for which some signature-generating libraries don't work (see below)
Client is in AngularJS
Server is in ExpressJS
What I've tried
Heroku's article on direct upload to S3. Doesn't fit my client/server configuration (plus it really does not fit harmoniously with Angular)
ready-made directives like ng-s3upload. Does not work because their signature-generating algorithm is not accepted by recent s3 buckets.
Manually creating a file upload directive and logic on the client like in this article (using FormData and Angular's $http). It consisted of getting a signed URL from AWS on the server (and that part worked), then AJAX-uploading to that URL. It failed with some mysterious CORS-related message (although I did set a CORS config on Heroku)
It seems I'm facing 2 difficulties: having a file input that works in my Single Page App, and getting AWS's workflow right.
The kind of solution I'm looking for
If possible, I'd like to avoid 'all included' solutions that manage the whole process while hiding all of the complexity, making them hard to adapt to special cases. I'd much rather have a simple explanation breaking down the flow of data between the various components involved, even if it requires some more plumbing from me.
I finally managed. The key points were:
Let go of Angular's $http, and use native XMLHttpRequest instead.
Use the getSignedUrl feature of AWS's SDK, instead of implementing my own signature-generating workflow like many libraries do.
Set the AWS configuration to use the proper signature version (v4 at the time of writing) and region ('eu-central-1' in the case of Frankfurt).
Below is a step-by-step guide of what I did; it uses AngularJS and NodeJS on the server, but should be rather easy to adapt to other stacks, especially because it deals with the most pathological cases (SPA on a different domain than the server, with a bucket in a recent - at the time of writing - region).
Workflow summary
The user selects a file in the browser; your JavaScript keeps a reference to it.
The client sends a request to your server to obtain a signed upload URL.
Your server chooses a name for the object to put in the bucket (make sure to avoid name collisions!).
The server obtains a signed URL for your object using the AWS SDK, and sends it back to the client. This involves the object's name and the AWS credentials.
Given the file and the signed URL, the client sends a PUT request directly to your S3 Bucket.
Before you start
Make sure that:
Your server has the AWS SDK
Your server has AWS credentials with proper access rights to your bucket
Your S3 bucket has a proper CORS configuration for your client (a sample policy is sketched just after this list).
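To illustrate the CORS point above, a minimal bucket CORS policy for this flow might look like the following (classic S3 CORS XML; the origin is a placeholder for your client's domain, and you may want to restrict AllowedHeader further):
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://your-client-domain.example</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>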
Step 1: set up a SPA-friendly file upload form / widget.
All that matters is to have a workflow that eventually gives you programmatic access to a File object - without uploading it.
In my case, I used the ng-file-select and ng-file-drop directives of the excellent angular-file-upload library. But there are other ways of doing it (see this post, for example).
Note that you can access useful information in your file object such as file.name, file.type etc.
Step 2: Get a signed URL for the file on your server
On your server, you can use the AWS SDK to obtain a secure, temporary URL to PUT your file from someplace else (like your frontend).
In NodeJS, I did it this way:
// ---------------------------------
// some initial configuration
var aws = require('aws-sdk');
aws.config.update({
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
  signatureVersion: 'v4',
  region: 'eu-central-1'
});
// ---------------------------------
// now say you want to fetch a URL for an object named `objectName`
var s3 = new aws.S3();
var s3_params = {
  Bucket: MY_BUCKET_NAME,
  Key: objectName,
  Expires: 60,
  ACL: 'public-read'
};
s3.getSignedUrl('putObject', s3_params, function (err, signedUrl) {
  // send signedUrl back to client
  // [...]
});
You'll probably want to know the URL from which your object can later be fetched with a GET (typically if it's an image). To get it, I simply removed the query string from the signed URL:
var url = require('url');
// ...
var parsedUrl = url.parse(signedUrl);
parsedUrl.search = null;
var objectUrl = url.format(parsedUrl);
Step 3: send the PUT request from the client
Now that your client has your File object and the signed URL, it can send the PUT request to S3. My advice in Angular's case is to just use XMLHttpRequest instead of the $http service:
var signedUrl, file;
// ...
var d_completed = $q.defer(); // since I'm working with Angular, I use $q for asynchronous control flow, but it's not mandatory
var xhr = new XMLHttpRequest();
xhr.file = file; // not necessary if you create scopes like this
xhr.onreadystatechange = function(e) {
  if (4 == this.readyState) {
    // done uploading! HURRAY!
    d_completed.resolve(true);
  }
};
xhr.open('PUT', signedUrl, true);
xhr.setRequestHeader("Content-Type", "application/octet-stream");
xhr.send(file);
Acknowledgements
I would like to thank emil10001 and Will Webberley, whose publications were very valuable to me for this issue.
You can use the ng-file-upload $upload.http method in conjunction with the aws-sdk getSignedUrl to accomplish this. After you get the signedUrl back from your server, this is the client code:
var fileReader = new FileReader();
fileReader.readAsArrayBuffer(file);
fileReader.onload = function(e) {
  $upload.http({
    method: 'PUT',
    headers: {'Content-Type': file.type != '' ? file.type : 'application/octet-stream'},
    url: signedUrl,
    data: e.target.result
  }).progress(function (evt) {
    var progressPercentage = parseInt(100.0 * evt.loaded / evt.total);
    console.log('progress: ' + progressPercentage + '% ' + file.name);
  }).success(function (data, status, headers, config) {
    console.log('file ' + file.name + ' uploaded. Response: ' + data);
  });
};
To do multipart uploads, or those larger than 5 GB, this process gets a bit more complicated, as each part needs its own signature. Conveniently, there is a JS library for that:
https://github.com/TTLabs/EvaporateJS
via https://github.com/aws/aws-sdk-js/issues/468
Use the s3FileUpload open-source directive, which has dynamic data binding and auto-callback functions - https://github.com/vinayvnvv/s3FileUpload

Path from isolated storage - Windows Phone

Hi, I have a simple question: how can I find the path of a file which has already been saved in isolated storage?
using (IsolatedStorageFileStream stream = new IsolatedStorageFileStream(App.filePath, FileMode.Create, store))
{
    byte[] buffer = new byte[1024];
    int bytesRead;
    while ((bytesRead = e.Result.Read(buffer, 0, buffer.Length)) > 0)
    {
        // write only the bytes actually read, not the whole buffer
        stream.Write(buffer, 0, bytesRead);
    }
    stream.Close();
}
Now I would like to read this file.
I need this path to use it as a parameter of a method:
Epub epub = new Epub([file path])
Any help will be greatly appreciated.
If a file is in IsolatedStorage, you either put it there yourself or it's the one created by the system to store settings.
If you put it there you must have had the path at some point previously. You just need to track the file names (and paths) you're using.
You should not try and access the settings file directly.
Try this:
using (var AppStorage = IsolatedStorageFile.GetUserStoreForApplication())
{
    string[] filenames = AppStorage.GetFileNames();
    // choose the file name you want, or
    // enumerate directories and read the file names in each directory
    string[] directories = AppStorage.GetDirectoryNames();
}
For each directory, you have to prepend the file path up to that directory, just like in any Windows file browsing.
Hope it helps. Post your further queries.
There is no need for you to get the path to the file if you are the one who put the file in isolated storage. The entire guide on how to properly read and write files to the app isostore is available here, and this should be your starting point.
The entire reading routine is limited to this:
using (IsolatedStorageFileStream fileStream = myIsolatedStorage.OpenFile("myFile.txt", FileMode.Open, FileAccess.Read))
{
    using (StreamReader reader = new StreamReader(fileStream))
    {
        Console.WriteLine("Reading contents:");
        Console.WriteLine(reader.ReadToEnd());
    }
}
Where myIsolatedStorage is equal to IsolatedStorageFile.GetUserStoreForApplication() (aka your local app storage box).
No need for Reflection, as you showed in the comments. The path can be relative to a folder when you're attempting to read, so something like /MyFolder/myFile.txt will work as well, given that the folder exists.
Your problem is this - pushing the relative path in the isostore to the Epub class, which probably does not read directly from the isostore and uses a full path instead. The nature of the Windows Phone OS is such that it won't let a third-party application without proper permissions access content directly through a full path reference. So you need to figure out a way to pass binary content to the class instead of a path.
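If the Epub class (or whichever ePub library you end up with) happens to accept a Stream - an assumption on my part, since its actual API isn't shown here - a sketch of that approach would be:
// Sketch: hand the library a Stream opened from isolated storage instead of a filesystem path
// (new Epub(Stream) is a hypothetical overload - adapt to the library's real API).
using (var store = IsolatedStorageFile.GetUserStoreForApplication())
using (IsolatedStorageFileStream stream = store.OpenFile(App.filePath, FileMode.Open, FileAccess.Read))
{
    Epub epub = new Epub(stream);
    // ... work with the epub here, while the stream is still open ...
}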

How to download a .jpg from the web to a project folder in MVC3?

Hello everyone, I would like to ask: how can I download a .jpg file from the web to the "uploads" folder I have created in my project?
I'm trying to download a YouTube thumbnail image to my "uploads" folder.
My controller:
var fileName = Path.GetFileName("http://img.youtube.com/vi/RUgd_GDPhYk/1.jpg");
var path = Path.Combine(Server.MapPath("~/uploads/"), fileName);
file.SaveAs(path);
Take a look at System.Net.WebClient, a .NET class which allows you to make requests for resources via HTTP.
http://msdn.microsoft.com/en-us/library/system.net.webclient(v=vs.100).aspx
A checked example is provided below.
var client = new System.Net.WebClient();
var uri = "http://img.youtube.com/vi/RUgd_GDPhYk/1.jpg";
// Here, we're just using the same filename as the resource we're after.
// You may wish to change this to include extra stuff because you'll
// inevitably run into a naming clash - especially with stuff like 1.jpg
var targetFilename = Path.GetFileName(uri);
client.DownloadFile(uri,
Path.Combine(Server.MapPath("~/uploads"), targetFilename));
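As the comment in the snippet hints, names like 1.jpg will clash quickly; one simple way around that (a sketch, not part of the original answer) is to generate the local file name yourself:
// sketch: avoid name clashes by generating a unique local file name, keeping the original extension
var uniqueFilename = Guid.NewGuid().ToString("N") + Path.GetExtension(uri);
client.DownloadFile(uri, Path.Combine(Server.MapPath("~/uploads"), uniqueFilename));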
