In Windows Azure, what file should I put storage information in for ASP.NET MVC?

I'm toying with Windows Azure, creating an ASP.NET MVC project. I've checked the WAPTK (Windows Azure Platform Training Kit), Google, and here, but I couldn't find an answer to my question. In an ASP.NET MVC project, in what file do I create containers for cloud storage? (In the WAPTK, there's a hands-on lab that uses webforms and puts storage containers in the _Default partial class.)

Generally I'd recommend you set up the container access (including the creation) in some nicely encapsulated class, preferably hiding behind an interface for easy testability.
I'd recommend:
- Put this class and its interface in a class library (not in your ASP.NET project).
- Put the config details in the csdef/cscfg files.
- If you only plan to use a fixed list of containers, then either:
  - create these ahead of installing your app, e.g. from a simple command-line app,
  - or create these from a call in the init of Global.asax.
- If you plan to dynamically create containers (e.g. for different users or actions), then create these from Controller/Service code as required, e.g. when a user signs up or when an action is first performed.
- If actions might occur several times and you really don't know whether the container will be there or not, then find some way (e.g. an in-memory hashtable or a persistent SQL table) to help ensure that you don't need to continually call CreateIfNotExist; remember that each call to CreateIfNotExist will slow your app down and cost you money.
- For "normal" access operations like read/write/delete: these will typically be called from Controller code, or from Service code sitting behind a Controller.
If in doubt, think of it a bit like "how would I partition up my logic if I was creating folders on a local disk - or on a shared network drive"
Hope that helps a bit.
Stuart

I am not sure if I understand you correctly, but generally speaking, files and Azure don't fit together well. All changes stored on the local file system are volatile and only guaranteed to live as long as the current Azure instance. You can, however, create a blob and mount it as a local drive, which makes the data persistent. This approach has some limitations, since it allows one Azure instance to write and a maximum of 8 readers.
So instead you should probably use blobs rather than files. The problem of knowing which blob to access can then be solved by using Azure table storage to index the blobs.
I recently went to a presentation where the presenter had investigated Azure table storage quite a bit, and his finding was that limiting partition keys to groups of 100-1000 elements gave the best performance. (Partition keys are used internally by Azure to determine which data to group.)
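As a rough sketch of that indexing idea, a table entity pointing at a blob might look like the following (this assumes the v1.x StorageClient library; the entity shape and key scheme are made up for illustration):

using Microsoft.WindowsAzure.StorageClient;

public class BlobIndexEntry : TableServiceEntity
{
    public BlobIndexEntry() { } // required for serialization

    // Partition on something like a user or group id, so each partition
    // stays in the 100-1000 element range mentioned above.
    public BlobIndexEntry(string userId, string fileName)
        : base(userId, fileName) { }

    // Full URI of the blob this entry points at.
    public string BlobUri { get; set; }
}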

You should definitely use blob storage for your files. It's not particularly difficult and as this is a new project there is no need to use Azure Drives.
If you are just using blob storage to serve up images for your site, then you can reference them with a normal <img> tag in your HTML, e.g.
<img src="http://myaccountname.blob.core.windows.net/containername/filename">
This will only work if the container or blob is public and not secured. This is fine if you are just serving up static content on HTML pages.
If you want secured access to blobs for a secure site, then you have to do a couple of things. Firstly, your website will need to know how to access your blobs:
In your ServiceDefinition.csdef file you will need to include
<Setting name="StorageAccount" />
and then add
<Setting name="StorageAccount" value="DefaultEndpointsProtocol=https;
AccountName=[youraccountname];AccountKey=[youraccountkey]" />
to your ServiceConfiguration.cscfg file.
Then you can use the Windows Azure SDK to access that account from within your web role, starting with:
Dim Account As CloudStorageAccount = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue("StorageAccount"))
Dim blobClient As CloudBlobClient = Account.CreateCloudBlobClient()
From there you can:
- read/write to blobs
- delete blobs
- list blobs
- create time-limited URLs using shared access signatures
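As a sketch of that last point, in C# rather than VB but against the same v1.x StorageClient API (the container and blob names are placeholders):

using System;
using Microsoft.WindowsAzure.StorageClient;

public static class SasExample
{
    public static string GetTemporaryUrl(CloudBlobClient blobClient)
    {
        CloudBlob blob = blobClient.GetContainerReference("containername")
                                   .GetBlobReference("filename");

        // The signature grants read access and expires after 15 minutes.
        string sas = blob.GetSharedAccessSignature(new SharedAccessPolicy
        {
            Permissions = SharedAccessPermissions.Read,
            SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(15)
        });

        // Append the signature to the blob URI to get a URL you can hand out.
        return blob.Uri.AbsoluteUri + sas;
    }
}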
There is a great resource here from Steve Marx which, although it is about accessing blob storage from Silverlight (which you are not using), does give you lots of information in one place.
Your question wasn't very specific, but this should give you some idea of where to start.
@Faester is correct: you will probably need some resource, either table storage or SQL Azure, to store references to these files.

Related

When using ASP.NET Core MVC + EF Core with the ability to encrypt files, what are the differences between Blob, FILESTREAM & File System for managing files?

I am working on an ASP.NET Core MVC web application. The web application is a document management workflow, where inside each of the workflow steps users can upload documents, as follows:
Users can upload documents with the following restrictions: a file can not exceed 5 MB, and all the documents inside a workflow can not exceed 50 MB, unless an admin approves it. They can upload as many documents as they want.
We will have a lot of views which will show the step and all the documents attached to it, and users can choose to download the documents.
We can have an unlimited number of workflows; the more users register with our application, the more workflows will be created.
Certain files can be marked as confidential, so they should be encrypted when stored, either inside the database or inside the file system.
We are planning to use EF Core as the data access layer for our web application + SQL Server 2016 or 2017.
Now my question is how we should manage our files. I found these 3 approaches:
1. Blob
2. FILESTREAM
3. File system
Now the first approach will allow us to encrypt the files inside the database and will work with EF, but it has a huge performance drawback, since opening a file or querying files from the database means they will be loaded into the hosting server's memory. Since we are seeking an extensible approach, I think this approach will not work for us, as it is less scalable.
The second approach will have better performance compared to the first (Blob), but FILESTREAM is not supported by EF and does not allow encryption, so we have to exclude this as well.
The third approach, storing the files inside a folder named after the workflow ID and storing the link to the file/folder inside the DB, will allow us to encrypt the files, will work with EF, and has better performance compared to Blob (not sure if the same holds against FILESTREAM). The only drawback is that we can not achieve atomicity between the files and their related records inside the database, but with some extra code we can handle this ourselves: for example, deleting a database record will delete all its documents inside the folder, and we can add some background jobs to make sure all the documents have database records, and otherwise delete the documents.
So based on the above, I found that the third approach is the best fit for our needs. Can anyone advise on this, please? Are my assumptions correct? And is there a fourth approach, or a hybrid approach, that would be a better fit for us?
Although modern RDBMSs have been optimised for data storage with the perks of integrity and atomicity, databases should be considered the last alternative for file storage (StackOverflow posts like this and this corroborate the above), and therefore the third option mentioned, or an improvement thereof, gets the vote.
For instance, a potential improvement would be to store the files renamed to a hash of their content and store the hash in the database, which eliminates all OS restrictions on subdirectories/files, filenames, and paths. Moreover, with a well-structured directory layout, duplicates could be filtered out.
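A sketch of that improvement in C# (the class name, sharding scheme, and parameters are illustrative assumptions, not from the answer):

using System;
using System.IO;
using System.Security.Cryptography;

public static class HashedFileStore
{
    public static string Save(string storageRoot, byte[] content)
    {
        // Name the stored file after a SHA-256 hash of its bytes.
        string hash;
        using (var sha = SHA256.Create())
            hash = BitConverter.ToString(sha.ComputeHash(content))
                               .Replace("-", "").ToLowerInvariant();

        // e.g. <root>/ab/cd/abcd1234... - sharding by hash prefix keeps
        // any one directory small.
        string dir = Path.Combine(storageRoot, hash.Substring(0, 2), hash.Substring(2, 2));
        Directory.CreateDirectory(dir);

        string path = Path.Combine(dir, hash);
        if (!File.Exists(path))                // identical content is stored once,
            File.WriteAllBytes(path, content); // which filters out duplicates

        return hash; // store this in the database instead of a path
    }
}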
User-defined database functions can aid in achieving atomicity, which would remove the need for background jobs. An excellent guide on UDFs, particularly for accessing the filesystem and invoking an executable, can be found here.

Storing files on a web server

I have a project using the MEAN stack that uploads image files to a server and the names of the images to the DB. Then the images are shown to users of the application, kind of like an image gallery.
I have been trying to figure out an efficient way of storing the image files. At the moment I'm storing them under the Angular application in a folder /var/www/app/files.
What are the usual ways of storing them on a cloud server like DigitalOcean, Heroku and many others?
I'm a bit thrown off by the fact they offer many options for data storage.
Let's say that hundreds of thousands of images were uploaded by the application to the server.
Saving all of them inside your front-end app in a subfolder might not be the best solution? Or am I wrong about this.
I am very new to these web server cloud services and how they actually operate.
Can someone clarify what would be the optimal solution?
Thanks!
Saving all of them inside your front-end app in a subfolder might not be the best solution?
You're very right about this. Over time this will get cluttered, and unless you use some very convoluted logic, it will slow down your server.
If you're using Angular and this is in the public folder sent to every user, this is even worse.
The best solution to this is using something like an AWS S3 Bucket (DigitalOcean has Block Storage and I believe Heroku has something a bit different). These services offer storage of files, and essentially act as logic-less servers. You can set some privacy policies and other settings, but there's no runtime like NodeJS that can do logic for you.
Your Node server (or any other server setup) interfaces with this storage server, and handles most of the fetching and storing of files. You can optionally limit these storage services so they can only communicate with your Node server, so any file traffic would be done through your Node server.

Handling large objects in a stateless environment

We have various Windows services that load a large amount of data, mostly settings, from a database into an object which is used whenever calls are made to our various .NET Remoting functions (I know it's old!!). Having this object containing all these settings in memory saves us having to query the database constantly, or load the data from a cache, whenever queries are executed.
Settings in this "large" object are collections of data: ids, paths, text, etc.
We want to move away from .NET Remoting to WCF and potentially get rid of our Windows services and run the lot under IIS (and eventually Azure), but this being stateless, I'm wondering how we should handle it.
1) What's the best method you can think of? From experience, preferably.
One suggestion that was made to me was to return all of this to the client, cache it, and use only the relevant settings when making a WCF call.
2) Numerous services we have are polling services, constantly monitoring databases, file locations, FTP locations, etc. How would you recommend handling this in a stateless environment? I can't see how this can be handled.
We use SQL Server, but I don't want to rely too heavily on the built-in features, as we could potentially have to support the likes of MySQL & Oracle.
Thanks.
Thierry
You could store these settings in the appSettings section of the config file (Web.config for IIS). Using the ConfigurationManager class, you can retrieve the relevant values as needed.
If you prefer to store a static instance of your settings object, I suggest implementing the Singleton pattern. Jon Skeet's article is a great starting point.
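A minimal sketch of that combination, a Lazy<T>-based singleton that loads once from appSettings (the SettingsCache name and DefaultPath key are illustrative):

using System;
using System.Configuration;

public sealed class SettingsCache
{
    private static readonly Lazy<SettingsCache> instance =
        new Lazy<SettingsCache>(() => new SettingsCache());

    public static SettingsCache Instance
    {
        get { return instance.Value; }
    }

    private SettingsCache()
    {
        // Loaded once, lazily and thread-safely. This reads Web.config here,
        // but it could just as easily be a single up-front database query.
        DefaultPath = ConfigurationManager.AppSettings["DefaultPath"];
    }

    public string DefaultPath { get; private set; }
}

Any service or WCF operation can then read SettingsCache.Instance.DefaultPath without touching the database on each call.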
Hope this helps.

Best way to store uploaded files in a Spring MVC environment

The question is quite simple: what is the best way to store uploaded files in a clustered Spring MVC environment?
Example: let's say I'm coding a social network and I have the possibility to upload my personal profile picture. The user will then have at most one image associated with his profile.
A solution can then be to add a blob column to the users table in the DB; I read this is good in a clustered environment (it's easier to scale the DB than a folder containing lots of images). Is it true? How can I achieve this in a JPA (Hibernate and PostgreSQL) environment? I saw there is the @Lob annotation, but what type of variable should I use? byte[]?
Another solution is to store them on the hard drive. What is the best path to store these images? In the webapp folder? On the classpath? In another folder outside the Spring context?
Thank you very much for your help.
Update: an important detail that I forgot to mention. The administration dashboard (CMS/back end) of this website will be coded in PHP (I know, I know...), while the front end will be coded in Java Spring (MVC). The database is managed entirely by the Java part (JPA). Whatever the final choice is, it has to be compatible with this requirement.
I'd rather not store them in the DB. The best place is some server for static files (or a CDN).
If you really need to, you can store them as a Lob, but I think it's a bad idea for performance and scalability reasons.
What is more important, databases seem to be more expensive than simple Content Delivery Networks.

What open source cloud storage system offers an append-only mode for buckets, directories, etc.?

I'm curious whether any cloud storage system could be configured to provide the following workflow:
Anonymous users may upload messages/files into identifiable locations which we'll call buckets.
All users should have read access to all messages/files, but no anonymous user should have permissions to modify or delete them.
Buckets have associated public keys which a moderator uses to authenticate approvals or deletions of uploads.
Unapproved messages/files are eventually culled by the system to save space.
I suspect the answer might be "Tahoe-LAFS would love for someone to implement append-only mutable files, but nobody has done so yet."
I've surveyed a number of OSS projects in the storage space and not encountered anything that would provide this workflow purely by configuration and without writing code.
While not OSS, the lowest level of Windows Azure storage is actually implemented via an append-only mechanism. A video, presentation, and whitepaper can all be found here, and the details in the whitepaper would be useful to anyone looking to implement something like this for Tahoe-LAFS or any other OSS cloud storage system.
