How to give RW permissions on a folder in Windows Azure? - asp.net-mvc-3

I am deploying an MVC 3.0 web app to Windows Azure. I have an action method that takes a file uploaded by the user and stores it in a folder within my web app.
How could I give RW permissions on that folder to the running process? I read about startup tasks and have a basic understanding, but I wouldn't know:
How to give the permission itself, and
Which running process (user) should I give the permission to.
Many thanks for the help.
EDIT
In addition to @David's answer below, I found this link extremely useful:
https://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/

For local storage, I wouldn't get caught up with granting access permissions to various directories. Instead, take advantage of the storage resources available specifically to your running VMs. With a given instance size, you have local storage available to you ranging from 20GB to almost 2TB (full sizing details here). To take advantage of this space, you'd create local storage resources within your project:
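In ServiceDefinition.csdef, that declaration looks roughly like this (a sketch: the resource name matches the code below, while the role name, size, and recycle behavior are illustrative):

<WebRole name="MyWebRole">
  <LocalResources>
    <LocalStorage name="moreStorage" sizeInMB="10240" cleanOnRoleRecycle="false" />
  </LocalResources>
</WebRole>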
Then, in code, grab a drive letter to that storage:
var storageRoot = RoleEnvironment.GetLocalResource("moreStorage").RootPath;
Now you're free to use that storage. And... none of that requires any startup tasks or granting of permissions.
Now for the caveat: This is storage that's local to each running instance, and isn't shared between instances. Further, it's non-durable - if the disk crashes, the data is gone.
For persistent, durable file storage, Blob Storage is a much better choice, as it's durable (triple-replicated within the datacenter, and geo-replicated to another datacenter) and it's external to your role instances, accessible from any instance (or any app, including your on-premises apps).
Since blob storage is organized by container, with blobs within each container, it's fairly straightforward to organize your blobs (and you can store pretty much anything in a given blob, up to 200GB each). Also, it's trivial to upload/download files to/from blobs, either to file streams or local files (in the storage resources you allocated, as illustrated above).
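By way of illustration, here's a minimal upload/download sketch against the 1.x StorageClient library that shipped with the SDK of that era; the connection string, container name, and file paths are placeholders, and storageRoot is the local-resource path from the snippet above:

using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Placeholder credentials; in a role you'd normally read these from the service configuration.
var account = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");
var client = account.CreateCloudBlobClient();

// Containers hold blobs; create this one if it doesn't exist yet.
var container = client.GetContainerReference("uploads");
container.CreateIfNotExist();

// Upload a local file to a blob, then pull it back down elsewhere (e.g. into local storage).
var blob = container.GetBlobReference("user-file.bin");
blob.UploadFile(@"C:\temp\user-file.bin");
blob.DownloadToFile(Path.Combine(storageRoot, "user-file.bin"));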

Related

Can't create multiple striped disk volume in Azure VM

I created an Azure Medium instance Windows 2012 Server and I'm having a problem striping together multiple Azure data disks into a single volume using the Server Manager tool.
In Azure I provisioned the medium instance and then created 4 data disks of 60GB each. I then rdp'ed into the server and inside Server Manager under File and Storage Services\Volumes I saw in the Disks section my 4 data disks listed under the C:\ and D:\ drives that come with this instance. I initialized my 4 data disks (later I also tried NOT initializing them) but when I clicked on "Storage Pools" in the nav bar under the Virtual Disk section I only saw 1 of my data disks.
I saw no way to add any of the other 3 data disks into my Storage Pool and then of course into the subsequent Virtual Disk. This problem limits me to just one data disk in my Virtual Disk. I have tried this many different times and the result is always the same.
Does anyone know what can be causing this or have steps to do the same thing I'm trying to do?
Thanks
If you're wondering why I'm trying to stripe these instead of using just 1 large data disk, this article explains the performance benefits of doing so:
http://blog.aditi.com/cloud/windows-azure-virtual-machines-lessons-learned/
In my blog post I explain how to do this, although perhaps the level of detail you are looking for isn't there. Still, everyone who followed this post (it was a lab) was able to create the striped volume. The blog post is a complete lab; go down halfway to see the section about the striped volume. Let me know if you have any questions.
http://geekswithblogs.net/hroggero/archive/2013/03/20/windows-azure-it-roadshow-lab-i.aspx
Thanks
I hit the same problem and some Googling revealed that this is a bug in Server Manager (sorry, can't find the link). The workaround is to use PowerShell to create the pool. These commands will create a new Storage Pool called "Storage" and assign all the available disks to it:
# Grab the unique ID of the Storage Spaces subsystem (the [0] indexes the pipeline's output)
$spaces = (Get-StorageSubSystem | Where-Object {$_.Model -eq "Storage Spaces"})[0].UniqueID
# Create the pool from every disk that is eligible for pooling
New-StoragePool -FriendlyName "Storage" -StorageSubSystemUniqueId $spaces -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

Is it possible to trash an Azure role host and get it started on the same host without cleanup?

Suppose my Azure role creates a lot of temporary files in Windows temporary folder and forgets to delete them. At some point it will receive "can't create temporary file" error. Suppose that once that happens my role code throws an exception out of RoleEntryPoint.Run() and the role is restarted.
I'm not talking about perfect Azure-aware code here. My role might use third-party black-box code that would know nothing about Azure and "local storage" and would just call System.IO.Path.GetTempPath() and thus create files right in some not-Azure-friendly location.
The problem is that if the role is started on the very same host and the temporary folder is not cleaned up by some third party, the folder is still full of files and the role will be unable to function. According to this answer, it might happen that local changes are preserved for my role, which is a huge problem in the above scenario.
Are local changes like created temporary files guaranteed to be reset when a role is restarted? How do I ensure that the started role is in reasonably clean state?
The role gets reset on new deployments, upgrades, and newly scaled instances from the golden image (base guest OS vhd). Generally for reboots and crashes, you get the same VHD and machine.
The code you write will not have permission to write to the OS drive (D:) - without elevation, that is (or logging in via RDP to do this). Further, there is a quota on the user's role root drive (E:) that will prevent you from accidentally filling the drive with files. It used to be that 10% of the package size was all you were allowed to write. There is also a quota on the resource drive (C:), but that one is much more generous and depends on VM size.
Nothing will be cleaned up on non-local-resource drives, but you will eventually get errors if you try to exceed quotas. You can turn off sticky storage on local resources and they will be cleaned up on reboot. Of course, like other changes to the disk, these non-local-resource temp files will occasionally be lost when the guest OS (or the underlying root OS) is upgraded. If you are running elevated and really screw up your installation (which you can do), you will need to hit the "Reimage" button on the portal and it will all go back to the golden image.
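As an aside, for the third-party-code scenario in the question, a common pattern is to repoint the process's temp directory at a declared local resource during role startup, so calls to Path.GetTempPath() land on the quota-managed resource drive. A minimal sketch, assuming a hypothetical local resource named "TempStorage" declared in ServiceDefinition.csdef:

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // "TempStorage" is a hypothetical <LocalStorage> resource; declared with
        // cleanOnRoleRecycle="true", its contents are wiped when the role recycles.
        var temp = RoleEnvironment.GetLocalResource("TempStorage").RootPath;

        // Path.GetTempPath() honors these process-level environment variables,
        // so even black-box code now writes its temp files to the local resource.
        Environment.SetEnvironmentVariable("TMP", temp);
        Environment.SetEnvironmentVariable("TEMP", temp);
        return base.OnStart();
    }
}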

WindowsAzure: Is it possible to set directory permissions within the web.config?

A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I set the write permissions for the WebRole user RD001... manually, it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get it done?
Please note that I'm very new to IIS and the stuff around, I would appreciate precise answers, thx.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, though not likely) in that script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure this data survived a reboot.
Recommendation: Don't write local, read below ...
EDIT: Got to thinking about this, and while I still recommend against this, there is a 3rd option: You can allocate local storage in the service config, then access it from PHP using a dll reference, then you will have access to that folder. Please remember local storage is not persisted, so it's gone during a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing config from php:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean there is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud "idea". Depending on your usage, you could (and should) convert this script to output this data to some cloud-based or external storage, vs just placing it on the disk.
I would suggest modifying this script to leverage the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
The main reason for that (besides pushing the cloud idea) is that in Azure, you cannot assume the host machine ("role instance") will maintain an OS state, so while you can set some things such as folder permissions, you can't rely on them sticking that way. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role and react to some lower level problem. For example, a hard-drive cage on the rack where your current instance lives could fail. If the failure were bad enough, the Fabric controller would need to rebuild your instance. When that happens, your code is moved to an entirely different server, so the need would arise to re-set those permissions. Also, depending on the changes, the E:\ could all of a sudden need to be the F:\ or X:\ drive and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so you make no assumptions about the hosting environment. Anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you can control (Azure Storage, an external call-out, etc.).

What is a good way to access external data from aws

I would like to access external data from my AWS EC2 instance.
In more detail: I would like to specify inside my user-data the name of a folder containing about 2M of binary data. When my AWS instance starts up, I would like it to download the files in that folder and copy them to a specific location on the local disk. I only need to access the data once, at startup.
I don't want to store the data in S3 because, as I understand it, this would require storing my AWS credentials on the instance itself, or passing them as user-data, which is also a security risk. Please correct me if I am wrong here.
I am looking for a solution that is both secure and highly reliable.
Which operating system do you run?
You can use Elastic Block Store (EBS) storage. It's like a device you can mount at boot (without credentials), and you get permanent storage there.
You can also sync up instances using something like Gluster filesystem. See this thread on it.

What are some options to store images securely other than storing on database?

Typically, file servers are used to store images for a web application. For more security and control you'd go for storing images in a database, but this proves to be complex and slow.
Are there other mainstream options available, other than a DB or file server, to store images securely with user permissions, etc.?
Edit: I'm deployed on amazon cloud and use postgresql as db.
SQL Server 2008 offers FILESTREAM storage, which allows you to store data in the filesystem yet access it through the database. It can be useful if your only concern with using a database is performance.
If images are stored in a folder that has no direct web access permissions you can then do
<img src="getimage.aspx?id=1234">
and have your GetImage "page" do any appropriate permissions test (e.g. using the session ID) and then "deliver" the image from the secure folder.
The downside is that the image is not cached, I think, but I expect that is true of the database route too.
Storing images in the physical database bloats the database, increasing backup and restore times; but it provides a single container, which is great if you want to move everything to a new server (or multiple servers), or if ensuring referential integrity between the image and other data in the DB is important.
Are you concerned about people guessing a URL and directly accessing an image?
If so, you can still place the images on the filesystem, just outside your www directory. You create a page called ImageServer.php/.aspx/.jsp that grabs the image off the filesystem and then serves it in response to a URL like:
ImageServer.php?image=BlueWhale.png
If you do this, be careful to correctly set the MIME type and expiry headers because Apache/IIS won't do it for you.
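To make that concrete, here's a minimal sketch of such a handler in C#/ASP.NET; the folder path, query parameter, and session check are all illustrative, and the same shape applies to the PHP or JSP variants:

using System;
using System.IO;
using System.Web;
using System.Web.SessionState;

public class ImageServer : IHttpHandler, IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        // Illustrative permission test against the session, as described above.
        if (context.Session["userId"] == null)
        {
            context.Response.StatusCode = 403;
            return;
        }

        // Strip directory components so ?image=..\..\web.config can't escape the folder.
        var name = Path.GetFileName(context.Request.QueryString["image"]);
        var file = Path.Combine(@"D:\SecureImages", name); // folder outside the www root

        // Set the MIME type and expiry headers yourself; a real handler would
        // map the file extension to the right content type.
        context.Response.ContentType = "image/png";
        context.Response.Cache.SetCacheability(HttpCacheability.Private);
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddHours(1));
        context.Response.TransmitFile(file);
    }

    public bool IsReusable { get { return true; } }
}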
