Error: cannot create metastore: Only account admin can create metastores - azure-databricks

I'm trying to create a databricks metastore:
resource "databricks_metastore" "this" {
name = "primary"
storage_root = format("abfss://%s#%s.dfs.core.windows.net/",
azurerm_storage_container.unity_catalog.name,
azurerm_storage_account.unity_catalog.name
)
force_destroy = true
}
However, I'm getting the following error:
Error: cannot create metastore: Only account admin can create metastores.
Looking through the Azure docs, I see the following:
Account admins can manage your Databricks account-level configurations, including creation of workspaces, Unity Catalog metastores, billing, and cloud resources. Account admins can add users to the account and assign them admin roles. They can also give users access to workspaces, as long as those workspaces use identity federation.
In my terraform project, I've logged in using az login and my provider declaration is like this:
provider "databricks" {
azure_workspace_resource_id = data.azurerm_databricks_workspace.this.id
host = local.databricks_workspace_host
}
According to these docs I have verified that my account (i.e. the one I used with az login) has both roles.
Any idea why I'm still getting this error?

You need to be an account admin in the Databricks Account Console (https://accounts.azuredatabricks.net) to be able to create a metastore. Azure RBAC roles on the subscription and workspace admin rights are not enough: the identity Terraform authenticates with (the one you used with az login) must be added as an account admin at the account level.

Related

how to override deny assignment so that I can access the databricks managed storage container?

I have a Databricks workspace provisioned in my own azure subscription for my own learning purposes.
I would like to access the containers in the Databricks managed storage account via the Azure Portal UI; however, when I attempt to do so I get:
The client 'my#email' with object id 'myobjectid' has permission to perform action 'Microsoft.Storage/storageAccounts/listKeys/action' on scope '/my/storage/account'; however, the access is denied because of the deny assignment with name 'System deny assignment created by Azure Databricks /my/workspace' and Id 'myid' at scope '/my/workspace'.
How can I grant all permissions to my azure account owner (me)?
You can't do this on the managed resource group created by Azure Databricks even if you're the owner - it's a resource managed by Databricks, and it prevents direct access to the data because it stores some system information inside the storage account. If you attempt to do this, you will get an error like this:
Failed to add User as Storage Blob Data Contributor for dbstorageveur7e23e27e4c : The client '....' with object id '...' has permission to perform action 'Microsoft.Authorization/roleAssignments/write' on scope '/subscriptions/..../resourceGroups/databricks-rg-...-jm5c8b2za1oks/providers/Microsoft.Storage/storageAccounts/dbstorageveur7e23e27e4c/providers/Microsoft.Authorization/roleAssignments/f2bc46d3-4aee-4d8f-803d-3d6324b5c094'; however, the access is denied because of the deny assignment with name 'System deny assignment created by Azure Databricks /subscriptions/.../resourceGroups/.../providers/Microsoft.Databricks/workspaces/...' and Id '99598a6270644ecdacfb23af7b0df9a0' at scope '/subscriptions/....resourceGroups/databricks-rg-...-jm5c8b2za1oks'..
From the Microsoft documentation:
When you attempt to access blob data in the Azure portal, the portal first checks whether you have been assigned a role with Microsoft.Storage/storageAccounts/listkeys/action. If you have been assigned a role with this action, then the portal uses the account key for accessing blob data. If you have not been assigned a role with this action, then the portal attempts to access data using your Azure AD account.
Also from the Microsoft documentation:
You need to have Microsoft.Authorization/roleAssignments/write access to assign Azure roles.
To give a user Owner permission, go to:
Subscriptions >> Access control (IAM) >> Add >> Add role assignment >> Owner >> Click on Next >> Select members >> select the user >> Save >> Next >> Review + assign

InvalidParameterException: Please ensure that the CreateExportTask caller has been granted s3:PutObject access to the bucket

I am trying to create a scheduled Lambda which calls the CloudWatch Logs CreateExportTask API to export logs from Account A (which contains the logs, i.e. the source) to Account B (which contains the S3 bucket, i.e. the destination).
However, I am getting the following error:
InvalidParameterException: Please ensure that the CreateExportTask caller has been granted s3:PutObject access to the bucket.
I am not sure whether the issue is with the parameters passed to CreateExportTask or with the S3 access.
See the Errors section here: https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateExportTask.html
Running the export task manually works correctly.
Running the export task via a Lambda in the same account works correctly.
I have created an IAM role in account B (the destination account) with S3 write permission and a trust relationship with account A (the source account).
I have given the Lambda execution role in account A (the source account) permission to assume the IAM role created in account B (the destination account).
Let me know, if you need IAM roles and trust relationship details.
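For reference, a minimal boto3 sketch of the CreateExportTask call as it would run in the Lambda in account A (the region, log group, bucket name, and prefix are placeholders, not the actual values). Independent of the caller's IAM permissions, the destination bucket in account B must also carry a bucket policy that grants the CloudWatch Logs service principal (logs.<region>.amazonaws.com) s3:GetBucketAcl and s3:PutObject, and the bucket must be in the same region as the log group; this error message frequently points at that bucket policy rather than at the caller's role.

import time
import boto3

# Sketch only: region, log group, bucket, and prefix are placeholders.
logs = boto3.client("logs", region_name="eu-west-1")

now_ms = int(time.time() * 1000)
response = logs.create_export_task(
    taskName="export-logs-to-account-b",
    logGroupName="/aws/lambda/my-function",        # log group in account A
    fromTime=now_ms - 24 * 60 * 60 * 1000,         # last 24 hours
    to=now_ms,
    destination="account-b-destination-bucket",    # bucket owned by account B
    destinationPrefix="exported-logs",
)
print(response["taskId"])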

Restrict permissions when a client creates a SAS token for a blob storage

I have a blob storage account where I drop files for an external partner to list and read. I thought a SAS token would be a perfect way for the external partner to access the container and read the file(s).
So I created a SAS token, and realized that if I don't want to create new SAS tokens every 10 minutes and send them to the partner, I need to set the token's expiry date far into the future. That is not good if the SAS token is leaked, and on the day the token expires the solution will stop working.
To fix that, I could let the client create a SAS token themselves by giving them the access key and account name and using the StorageSharedKeyCredential class. That works great - maybe too well, since it is now the client that decides what permissions the SAS token should have, so the client could upload files, create containers, and so on.
So my question is: is there any way to restrict what permissions the SAS token has when the client creates it, so that our external partner can only read/list files in the specific container that I have decided?
Best Regards
Magnus
Regarding the issue, I think you want to know how to create a service SAS token. If so, please refer to the following code.
// Authorize with the account name and key (keep the key on your side;
// do not hand it to the external partner).
BlobContainerClient containerClient = new BlobContainerClient(
    new Uri("https://{account_name}.blob.core.windows.net/{container_name}"),
    new StorageSharedKeyCredential("{account_name}", "{account_key}"));

BlobSasBuilder sasBuilder = new BlobSasBuilder()
{
    BlobContainerName = containerClient.Name,
    Resource = "c" // "c" = SAS scoped to the container
};
sasBuilder.ExpiresOn = DateTimeOffset.UtcNow.AddHours(1);
// Combine the permissions in one call; a second SetPermissions call
// would overwrite the first.
sasBuilder.SetPermissions(BlobContainerSasPermissions.Read | BlobContainerSasPermissions.List);

Uri sasUri = containerClient.GenerateSasUri(sasBuilder);
To give permissions on a specific container, you can do the following:
Find your container, select Access policy under the Settings blade, and click Add policy. Select the permissions you want to give to this specific container. Also note that the public access level is at the container level. You can refer to the thread that discussed a similar issue.
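If you prefer to manage this in code rather than in the portal, below is a rough sketch with the azure-storage-blob Python SDK (the account name, account key, and container name are placeholders) of creating a stored access policy that only allows read and list; SAS tokens issued against the policy cannot exceed those permissions, and deleting the policy revokes them.

from datetime import datetime, timedelta, timezone
from azure.storage.blob import AccessPolicy, ContainerClient, ContainerSasPermissions

# Placeholders: replace with your storage account, key, and container.
container = ContainerClient(
    account_url="https://<account_name>.blob.core.windows.net",
    container_name="<container_name>",
    credential="<account_key>",
)

# Stored access policy that only allows read + list on this container.
policy = AccessPolicy(
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=30),
)
container.set_container_access_policy(signed_identifiers={"partner-read-list": policy})

A service SAS generated with policy_id="partner-read-list" then inherits exactly these permissions, and you can revoke all such tokens at once by removing the policy.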
Also look into how RBAC works on Azure Storage:
Only roles explicitly defined for data access permit a security principal to access blob or queue data. Roles such as Owner, Contributor, and Storage Account Contributor permit a security principal to manage a storage account, but do not provide access to the blob or queue data within that account.
You can grant the right to create a user delegation key separately from rights to the data.
Get User Delegation Key (https://learn.microsoft.com/en-us/rest/api/storageservices/get-user-delegation-key) is performed at the account level, so you must grant this permission with something like the Storage Blob Delegator built-in role at the scope of the storage account.
You can then grant just the data permissions the user should have, using one of these 3 built-in roles at the scope of the blob container:
Storage Blob Data Contributor
Storage Blob Data Owner
Storage Blob Data Reader
A user delegation SAS can then be generated to grant a subset of the user's permissions for a limited time, and can be granted for an entire blob container or for individual blobs.
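As an illustration only, a user delegation SAS limited to read and list on one container could be generated like this with the azure-storage-blob Python SDK; the account and container names are placeholders, and the credential is assumed to hold the roles described above.

from datetime import datetime, timedelta, timezone
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, ContainerSasPermissions, generate_container_sas

account_name = "<account_name>"        # placeholder
container_name = "<container_name>"    # placeholder

# Azure AD identity with Storage Blob Delegator on the account and
# Storage Blob Data Reader on the container.
service = BlobServiceClient(
    account_url=f"https://{account_name}.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)

start = datetime.now(timezone.utc)
expiry = start + timedelta(hours=1)
delegation_key = service.get_user_delegation_key(key_start_time=start, key_expiry_time=expiry)

sas = generate_container_sas(
    account_name=account_name,
    container_name=container_name,
    user_delegation_key=delegation_key,
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=expiry,
)
sas_url = f"https://{account_name}.blob.core.windows.net/{container_name}?{sas}"

The token in sas_url is what you would hand to the partner; it expires after an hour regardless of what the partner does with it.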
For more details you may check this thread.
You also have to use VNet rules in the storage firewall, or trusted access to storage, to restrict access for clients in the same region.
You may check these links:
https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview https://learn.microsoft.com/en-us/rest/api/storageservices/create-service-sas#permissions-for-a-blob

Provide access to a third party app using Azure AD

I've installed a third-party app on an AWS EC2 instance. The requirement is that when a user clicks the app's web URL, the user should be authenticated using the organization's Azure AD. Since it's a third-party app I cannot integrate Azure AD into its code. Any suggestions on how this can be achieved are welcome. I'm trying out the AWS Cognito service but so far it hasn't worked.
Please check whether you have followed the steps below and whether anything was missed.
1. The free tier of Azure AD does not support onboarding enterprise apps, so upgrade Azure AD first.
2. Go to Enterprise applications > New application > Non-gallery application > activate for an enterprise account (the minimum requirement; Premium also works) > give the AWS app a name.
3. Open the application in Azure, go to Single sign-on > choose the SAML option > download the Federation Metadata XML.
4. Then go to the AWS Management Console and enable AWS SSO (only certain regions can enable SSO, so check that first).
5. Choose the identity source: change the identity provider > select External identity provider > download the AWS SSO SAML metadata file, which is used later on the Azure side.
6. Under IdP SAML metadata, insert the Azure federation metadata file downloaded previously, then review and confirm.
7. Now go back to the Azure portal, open the AWS app you created earlier > Single sign-on > Upload metadata file > select the file downloaded from the AWS portal > click Add > then click Save on the Basic SAML Configuration. Say yes if a pop-up offers to test SSO.
8. Next, set up automatic provisioning so that a user created in Azure AD flows into AWS SSO. You can make a few users part of an AD group in order to try signing in as those users.
9. In the AWS portal, click Enable automatic provisioning, and copy the SCIM endpoint and access token. On the Azure side, open the app's Provisioning blade > select Automatic provisioning mode > paste the SCIM endpoint into Tenant URL together with the access token > click Test connection and save the configuration.
10. Then go to Mappings > select Synchronize Azure Active Directory Users to the custom app SSO > leave the default settings. For the required attributes, select externalId/mailNickname and change the source attribute to objectId (the unique ID on the AD side that flows into AWS), and edit mail to use userPrincipalName as the source attribute.
   - Ensure each user has only one value for phoneNumber/email.
   - Remove duplicate attribute mappings. For example, two different Azure AD attributes both mapped to "phoneNumber_____" will cause an error if both have values; keeping only one attribute mapped to a "phoneNumber____" attribute resolves it.
11. Now map users and groups. Search for Groups in the portal and add a group > Security type > give it a name, a description, and membership type Assigned > click Create. Create two or more groups the same way if needed, and fill each group with the appropriate users.
12. Create a few users: search for Users in the portal > New user > give a name > add the user to one of the created groups and assign.
13. After creating the users and groups, go to Users and groups in your enterprise app (selecting groups rather than individual users is recommended; delete any unwanted users).
14. Go back to Provisioning and set the provisioning status to On.
15. Now map the AD groups to the AWS accounts they should access by assigning permission sets: go to Permission sets and select the group or users; you can give existing job-function access or create custom policies.
16. Finally, go to Settings in the AWS portal, copy the URL, and open it; it redirects to the sign-in page. Enter the user's credentials, and access works according to the given permissions.

How to retrieve the members of a google cloud project and their permissions?

Is it possible to retrieve the members of a Google Cloud project and their permissions via an API? I'm doing this because I want to see if the current signed-in user of my app has permission to access the Cloud Storage part of the project.
The user is authenticated via OAuth2. I interact with Cloud Storage through an Amazon S3 library with the endpoint changed.
You can secure the page so that only admins can access it, or check programmatically:
from google.appengine.api import users

user = users.get_current_user()
if user:
    print('Welcome, %s!' % user.nickname())  # greet the signed-in user
    if users.is_current_user_admin():
        print('Go to admin area')            # only shown to project admins
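If the goal is specifically to check whether the signed-in user can reach Cloud Storage, another option is to call the IAM APIs with the user's OAuth2 credentials. Below is a rough sketch using google-api-python-client; the bucket name, project ID, and access token are placeholders.

from google.oauth2.credentials import Credentials
from googleapiclient import discovery

# Placeholder: an OAuth2 access token (cloud-platform scope) from your app's sign-in flow.
creds = Credentials(token="<user_access_token>")

# Ask Cloud Storage which of these permissions the user actually holds on
# the bucket; this works for any caller, no admin rights required.
storage = discovery.build("storage", "v1", credentials=creds)
granted = storage.buckets().testIamPermissions(
    bucket="<bucket_name>",
    permissions=["storage.objects.get", "storage.objects.list"],
).execute()
print(granted.get("permissions", []))  # subset of the requested permissions

# Listing all project members and their roles needs the
# resourcemanager.projects.getIamPolicy permission on the project.
crm = discovery.build("cloudresourcemanager", "v1", credentials=creds)
policy = crm.projects().getIamPolicy(resource="<project_id>", body={}).execute()
for binding in policy.get("bindings", []):
    print(binding["role"], binding["members"])

testIamPermissions only reports permissions the caller actually has, so it fits the "can this user access storage?" check, while getIamPolicy is the call that retrieves the full member list.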
