I am designing a storage solution using Azure Blob Storage. Within a container, how can I control access to different blobs?
For example, under the container "images" there are two blobs: design1/logo.png and design2/logo.png. How can I make access to design1/ and design2/ mutually exclusive?
Have you tried configuring the access permissions with RBAC?
https://learn.microsoft.com/en-us/azure/storage/blobs/assign-azure-role-data-access?tabs=portal
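Note that the built-in data-plane roles (for example, Storage Blob Data Reader) are assigned at storage-account or container scope, not per virtual folder, so by themselves they cannot separate design1/ from design2/ inside one container; you would need separate containers, ABAC role-assignment conditions on the blob path, or per-blob SAS tokens for that. As a minimal sketch (assuming the Azure.Storage.Blobs and Azure.Identity packages, with placeholder names), this is roughly how a client would read a blob with an Azure AD identity once a role assignment is in place:

using System;
using Azure.Identity;
using Azure.Storage.Blobs;

class RbacReadExample
{
    static void Main()
    {
        // Placeholder account/container/blob names. The caller's Azure AD identity must hold a
        // data-plane role (e.g. Storage Blob Data Reader) on the container or account for this to work.
        var blobUri = new Uri("https://<storage_account_name>.blob.core.windows.net/images/design1/logo.png");
        var blob = new BlobClient(blobUri, new DefaultAzureCredential());

        // Fails with 403 if the identity has no role assignment covering this scope.
        blob.DownloadTo("logo.png");
    }
}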
I need to mount Azure Blob Storage (with hierarchical namespace disabled) from Databricks. The mount command returns true, but when I run the fs.ls command it returns an UnknownHostException error. Please advise.
I got a similar kind of error. I unmounted my blob storage and then remounted it, and now it's working fine.
Unmounting Storage account:
dbutils.fs.unmount("<mount_point>")
Mount Blob Storage:
dbutils.fs.mount(
  source = "wasbs://<container>@<Storage_account_name>.blob.core.windows.net/",
  mount_point = "<mount_point>",
  extra_configs = {"fs.azure.account.key.<Storage_account_name>.blob.core.windows.net": "<Access_key>"})
display(dbutils.fs.ls('<mount_point>'))
This command, display(dbutils.fs.ls('<mount_point>')), returns all the files available at the mount point. You can perform any required operations there and then write to this DBFS path; the changes will also be reflected in your blob storage container.
For more information, refer to this MS document.
Azure storage allows for a default container called $root
As explained in the documentation
I am using the Azure Portal. When I try to upload a scripts folder to my $root container, I get the error:
upload error for validate-form.js
Upload block blob to blob store failed:
Make sure blob store SAS uri is valid and permission has not expired.
Make sure CORS policy on blob store is set correctly.
StatusCode = 0, StatusText = error
How do I fix this?
I can upload to containers that are not called $root
[Update]
I guess SAS means Shared Access Signature.
I set up the container with Blob (anonymous read access for blobs only).
I will try Container (anonymous read access for containers and blobs).
[Update]
Changing the access policy made no difference.
The access policy is not displayed for $root
I am aware that one must put a file in a new folder in order for the folder to be created. This is not that issue.
[Update]
Here is what my website blob looks like. I can do this for my website container but not my $root container.
As Create a container states:
Container names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character.
Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not permitted in container names.
All letters in a container name must be lowercase.
Container names must be from 3 through 63 characters long.
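Purely as an illustration of those rules (not an official API), a quick validation sketch:

using System.Text.RegularExpressions;

static class ContainerNames
{
    // 3-63 characters, lowercase letters/digits/dashes, starting and ending with a letter or digit,
    // and no consecutive dashes. "$root" is the documented special case and is allowed explicitly.
    public static bool IsValid(string name) =>
        name == "$root" ||
        (name.Length >= 3 && name.Length <= 63 &&
         Regex.IsMatch(name, "^[a-z0-9](-?[a-z0-9])+$"));
}

// Examples: ContainerNames.IsValid("images") == true, ContainerNames.IsValid("my--container") == false.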
AFAIK, when managing blob storage via the Azure Storage Client Library, we cannot create a container with the name $root. You could also leverage Azure Storage Explorer to manage your storage resources, and I assume the same limitation on container names applies there.
But I tested it in the Azure Portal and found that I could encounter the same issue you mentioned. I could create a container named $root, and I could upload files to the root virtual directory via the Azure Portal and Azure Storage Explorer. I assumed that container names starting with $ are reserved by Azure and we could not create them. You need to follow the container-name limitations above, and then you can upload your files as usual. The Azure Portal behavior of allowing a container name that starts with $ is unusual. You could send your feedback here.
UPDATE:
As you mentioned, Working with the Root Container states:
A blob in the root container cannot include a forward slash (/) in its name.
So you cannot create virtual folder(s) under the root container the way you can in a normal blob container.
For example, for https://myaccount.blob.core.windows.net/$root/virtual-directory/myblob the blob name would be virtual-directory/myblob, which is invalid, so you cannot create it.
UPDATE2:
Have you been able to create a folder within the $root container? If so, can you tell me how, please?
We cannot create a folder within the $root container, because of the root container's restriction on blob names. You cannot treat Azure Blob Storage as a file system; the folder information is part of the blob name.
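A short sketch of what that means in practice (assuming the current Azure.Storage.Blobs .NET package and placeholder names): in a normal container the "folder" only exists as a prefix of the blob name, and the root container rejects names containing a slash:

using Azure.Storage.Blobs;

class RootContainerExample
{
    static void Main()
    {
        var service = new BlobServiceClient("<connection_string>");

        // Normal container: "scripts/" is not a real folder, just part of the blob's name.
        var website = service.GetBlobContainerClient("website");
        website.GetBlobClient("scripts/validate-form.js").Upload("validate-form.js", overwrite: true);

        // Root container: a flat blob name works and is addressable as
        // https://<account>.blob.core.windows.net/validate-form.js
        var root = service.GetBlobContainerClient("$root");
        root.GetBlobClient("validate-form.js").Upload("validate-form.js", overwrite: true);

        // Root container with a slash in the name: expected to be rejected by the service,
        // which is why uploading a folder to $root fails.
        root.GetBlobClient("scripts/validate-form.js").Upload("validate-form.js", overwrite: true);
    }
}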
I am deploying an MVC 3.0 web app to Windows Azure. I have an action method that takes a file uploaded by the user and stores it in a folder within my web app.
How can I give read/write permissions on that folder to the running process? I read about startup tasks and have a basic understanding of them, but I wouldn't know:
how to grant the permission itself, and
which running process (user) I should grant the permission to.
Many thanks for the help.
EDIT
In addition to @David's answer below, I found this link extremely useful:
https://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/
For local storage, I wouldn't get caught up with granting access permissions to various directories. Instead, take advantage of the storage resources available specifically to your running VMs. With a given instance size, you have local storage available ranging from 20GB to almost 2TB (full sizing details here). To take advantage of this space, you'd create local storage resources within your project:
Then, in code, grab a drive letter to that storage:
var storageRoot = RoleEnvironment.GetLocalResource("moreStorage").RootPath;
Now you're free to use that storage. And... none of that requires any startup tasks or granting of permissions.
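For example (a rough sketch; "moreStorage" must match whatever you named the LocalStorage resource in your ServiceDefinition.csdef), saving an uploaded file into that local resource could look like this:

using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

static class LocalStorageHelper
{
    // Writes the uploaded bytes under the role's local storage resource and returns the full path.
    public static string Save(byte[] uploadedBytes, string fileName)
    {
        string storageRoot = RoleEnvironment.GetLocalResource("moreStorage").RootPath;
        string uploadDir = Path.Combine(storageRoot, "uploads");
        Directory.CreateDirectory(uploadDir);

        string path = Path.Combine(uploadDir, fileName);
        File.WriteAllBytes(path, uploadedBytes);
        return path;
    }
}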
Now for the caveat: This is storage that's local to each running instance, and isn't shared between instances. Further, it's non-durable - if the disk crashes, the data is gone.
For persistent, durable file storage, Blob Storage is a much better choice, as it's durable (triple-replicated within the datacenter, and geo-replicated to another datacenter) and it's external to your role instances, accessible from any instance (or any app, including your on-premises apps).
Since blob storage is organized by container, and blobs within containers, it's fairly straightforward to organize your blobs (and you can store pretty much anything in a given blob, up to 200GB each). Also, it's trivial to upload/download files to/from blobs, either to file streams or local files (in the storage resources you allocated, as illustrated above).
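To make that concrete, here is a sketch using the current Azure.Storage.Blobs package (the classic StorageClient library from that era has equivalent calls; the names here are placeholders):

using Azure.Storage.Blobs;

class BlobTransferExample
{
    static void Main()
    {
        // Placeholder connection string and container name; create the container if it doesn't exist yet.
        var container = new BlobContainerClient("<connection_string>", "uploads");
        container.CreateIfNotExists();

        // Upload a local file, e.g. one the user just posted to your action method.
        var blob = container.GetBlobClient("user-123/report.pdf");
        blob.Upload(@"C:\temp\report.pdf", overwrite: true);

        // Later, from any instance (or any app), download it back to local disk.
        blob.DownloadTo(@"C:\temp\report-copy.pdf");
    }
}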
I would like to access external data from my aws ec2 instance.
In more detail: I would like to specify, inside my user-data, the name of a folder containing about 2M of binary data. When my AWS instance starts up, I would like it to download the files in that folder and copy them to a specific location on the local disk. I only need to access the data once, at startup.
I don't want to store the data in S3 because, as I understand it, this would require storing my AWS credentials on the instance itself, or passing them as user data, which is also a security risk. Please correct me if I am wrong here.
I am looking for a solution that is both secure and highly reliable.
Which operating system do you run?
You can use an Elastic Block Store (EBS) volume. It's like a device you can mount at boot (without credentials), and you have permanent storage there.
You can also sync up instances using something like Gluster filesystem. See this thread on it.
Typically, file servers are used to store images for a web application. For more security and control you'd go for storing images in a database, but this proves to be complex and slow.
Are there other mainstream options available, other than a DB or file server, to store images securely with user permissions, etc.?
Edit: I'm deployed on the Amazon cloud and use PostgreSQL as my DB.
SQL Server 2008 offers Filestream storage, which allows you to store data in the filesystem yet access it through the database. It can be useful if your only concern with using a database is performance.
If images are stored in a folder that has no direct web access permissions you can then do
<img src="getimage.aspx?id=1234">
and have your GetImage "page" perform any appropriate permission tests (e.g. using the session ID) and then "deliver" the image from the secure folder.
The downside is that the image is not cached, I think? But I expect that is true of the database route too.
Storing images in the physical database bloats the database, increasing backup and restore times; but it provides a single container, which is great if you want to move everything to a new server (or multiple servers), or if ensuring referential integrity between the image and other data in the DB is important.
Are you concerned about people guessing a URL and directly accessing an image?
If so, you can still place the images on the filesystem, just outside your www directory. Create a page called ImageServer.php/.aspx/.jsp that grabs the image off the filesystem and then serves it in response to a URL like:
ImageServer.php?image=BlueWhale.png
If you do this, be careful to correctly set the MIME type and expiry headers because Apache/IIS won't do it for you.
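For instance, an ASP.NET MVC version of such a handler might look roughly like this (the folder path, the session-based permission test, and the fixed PNG type are placeholders to adapt):

using System;
using System.IO;
using System.Web;
using System.Web.Mvc;

public class ImageServerController : Controller
{
    // Serves /ImageServer/GetImage?image=BlueWhale.png from a folder outside the web root.
    public ActionResult GetImage(string image)
    {
        // Placeholder permission test: require a logged-in session.
        if (Session["UserId"] == null)
            return new HttpStatusCodeResult(403);

        // Strip any directory parts to prevent path traversal, then resolve against the secure folder.
        var fileName = Path.GetFileName(image);
        var path = Path.Combine(@"D:\SecureImages", fileName);
        if (!System.IO.File.Exists(path))
            return HttpNotFound();

        // Set expiry/cache headers explicitly; the web server won't do it for a dynamic response.
        Response.Cache.SetCacheability(HttpCacheability.Private);
        Response.Cache.SetMaxAge(TimeSpan.FromMinutes(10));

        // Return the file with an explicit MIME type.
        return File(path, "image/png");
    }
}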