I would like to prevent people with managed identity from accessing my storage.
Please help me on this matter.
To prevent these users, try disallowing all public access to the storage account. While public access is disabled, any future anonymous requests to that account will fail, and your data will never be publicly available until a user with the appropriate permissions re-enables it.
To disallow public access for a storage account, configure the account's AllowBlobPublicAccess property:
Go to the Azure portal -> your storage account -> Settings -> Configuration -> set Blob public access to Disabled.
After you update the public access setting for the storage account, it may take up to 30 seconds for the change to be fully propagated.
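If you prefer code over the portal, a minimal sketch with the Azure.ResourceManager.Storage management SDK could look like the following; the subscription, resource group, and account names are placeholders:

using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Storage;
using Azure.ResourceManager.Storage.Models;

// Sign in with any credential that has management rights on the account.
var armClient = new ArmClient(new DefaultAzureCredential());

// Placeholders: substitute your own subscription, resource group, and account.
var accountId = StorageAccountResource.CreateResourceIdentifier(
    "<subscription-id>", "<resource-group>", "<account-name>");
StorageAccountResource account = armClient.GetStorageAccountResource(accountId);

// Disallow anonymous (public) access at the account level.
await account.UpdateAsync(new StorageAccountPatch { AllowBlobPublicAccess = false });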
To check for anonymous requests against the storage account, I suggest the following steps:
In the Azure portal, under Monitoring, click Metrics, set the scope, and then set Metric Namespace to Blob, the Metric field to Transactions, and the Aggregation field to Sum.
Next, click Add filter and set the Property field to Authentication, the Operator field to the equal sign (=), and finally the Values field to Anonymous.
After configuring the metrics, the anonymous access will appear in the graph. For security purposes, you can also configure an alert rule for notifications; see Create, view, and manage metric alerts using Azure Monitor - Azure Monitor | Microsoft Docs.
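If you would rather pull the same numbers programmatically, a rough sketch with the Azure.Monitor.Query client library might look like this (the resource ID is a placeholder, and I'm assuming the blob metrics sit on the blobServices/default sub-resource):

using System;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

var client = new MetricsQueryClient(new DefaultAzureCredential());

// Placeholder resource ID for the blob service sub-resource.
string resourceId =
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage" +
    "/storageAccounts/<account-name>/blobServices/default";

Response<MetricsQueryResult> result = await client.QueryResourceAsync(
    resourceId,
    new[] { "Transactions" },
    new MetricsQueryOptions
    {
        TimeRange = new QueryTimeRange(TimeSpan.FromDays(1)),
        Aggregations = { MetricAggregationType.Total }, // "Sum" in the portal
        Filter = "Authentication eq 'Anonymous'"        // only anonymous requests
    });

foreach (MetricResult metric in result.Value.Metrics)
    foreach (MetricTimeSeriesElement series in metric.TimeSeries)
        foreach (MetricValue point in series.Values)
            Console.WriteLine($"{point.TimeStamp}: {point.Total}");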
For more detailed information, please see the link below:
https://learn.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-prevent
I have a blob storage account where I drop files for an external partner to list and read. I thought that a SAS token would be a perfect way for the external partner to access the container and read the file(s).
So I created a SAS token and realized that if I don't want to create new SAS tokens every 10 minutes and send them to the partner, I need to set the expiry date of the token far into the future. That is not good: if the SAS token is leaked it can be misused, and on the day the token expires the solution will stop working.
To fix that, I could let the client create a SAS token themselves by giving them an access key and account name to use with the StorageSharedKeyCredential class. That works great, maybe too great, since it's now the client that decides what permissions the SAS token should have. So the client could now upload files, create containers, etc.
So my question is: is there any way to restrict what permissions the SAS token has when the client creates it, so that our external partner can only read/list files in a specific container that I have decided on?
Best Regards
Magnus
Regarding the issue, I think you want to know how to create a service SAS token. If so, please refer to the following code.
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

// Only the account key holder can sign a service SAS, so the credential
// needs both the account name and the account key.
var credential = new StorageSharedKeyCredential("{account_name}", "{account_key}");
var containerClient = new BlobContainerClient(
    new Uri("https://{account_name}.blob.core.windows.net/{container_name}"), credential);
var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = containerClient.Name,
    Resource = "c", // "c" = container-scoped SAS
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
};
// Combine the flags in one call; a second SetPermissions call would overwrite the first.
sasBuilder.SetPermissions(BlobContainerSasPermissions.Read | BlobContainerSasPermissions.List);
Uri sasUri = containerClient.GenerateSasUri(sasBuilder);
To grant permissions on a specific container, you can do the following:
Find your container, select Access Policy under the Settings blade, and click Add Policy. Select the permissions that you want to give this specific container; note that the public access level here is the container level. You could also refer to the thread that discussed a similar issue.
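If you would rather set the policy from code than the portal, a sketch along these lines should work with the same Azure.Storage.Blobs client used above (the policy ID 'read-list-policy' is just an illustrative name):

using Azure.Storage.Blobs.Models;

// A stored access policy on the container: SAS tokens created against this
// policy ID inherit its permissions and expiry, and can be revoked later by
// deleting or editing the policy.
var identifier = new BlobSignedIdentifier
{
    Id = "read-list-policy", // arbitrary policy name
    AccessPolicy = new BlobAccessPolicy
    {
        PolicyStartsOn = DateTimeOffset.UtcNow,
        PolicyExpiresOn = DateTimeOffset.UtcNow.AddDays(7),
        Permissions = "rl" // read + list only
    }
};
await containerClient.SetAccessPolicyAsync(permissions: new[] { identifier });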
You can also look at how RBAC works on Azure Storage.
Only roles explicitly defined for data access permit a security principal to access blob or queue data. Roles such as Owner, Contributor, and Storage Account Contributor permit a security principal to manage a storage account, but do not provide access to the blob or queue data within that account.
You can grant the right to create a user delegation key separately from rights to the data.
Get User Delegation Key (https://learn.microsoft.com/en-us/rest/api/storageservices/get-user-delegation-key) is performed at the account level, so you must grant this permission with something like the Storage Blob Delegator built-in role at the scope of the storage account.
You can then grant just the data permissions the user should have, using one of these 3 built-in roles at the scope of the blob container:
Storage Blob Data Contributor
Storage Blob Data Owner
Storage Blob Data Reader
The user delegation SAS can then be generated to grant a subset of the user's permissions for a limited time, and can be granted for an entire blob container or for individual blobs.
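As a sketch of what that looks like in code, assuming the caller signs in via Azure.Identity and already holds the Storage Blob Delegator role plus a data role such as Storage Blob Data Reader:

using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Sas;

// Azure AD sign-in; no account key is involved anywhere.
var serviceClient = new BlobServiceClient(
    new Uri("https://{account_name}.blob.core.windows.net"),
    new DefaultAzureCredential());

// Requires the Storage Blob Delegator role at account scope.
UserDelegationKey key = await serviceClient.GetUserDelegationKeyAsync(
    startsOn: DateTimeOffset.UtcNow,
    expiresOn: DateTimeOffset.UtcNow.AddHours(1));

var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = "{container_name}",
    Resource = "c",
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
};
// The SAS can carry at most the data permissions the signed-in identity has.
sasBuilder.SetPermissions(BlobContainerSasPermissions.Read | BlobContainerSasPermissions.List);

string sasToken = sasBuilder.ToSasQueryParameters(key, "{account_name}").ToString();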
For more details you may check this thread.
You also have to use VNet rules in the storage firewall, or trusted access to storage, to restrict access for clients in the same region.
You may check these links:
https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview
https://learn.microsoft.com/en-us/rest/api/storageservices/create-service-sas#permissions-for-a-blob
I've installed a third-party app on an AWS EC2 instance. The requirement is that when a user clicks the web URL of this application, the user should be authenticated using the organization's Azure AD. Since it's a third-party app, I cannot integrate Azure AD with it in the code. Any suggestions on how this can be achieved are welcome. I'm trying out the AWS Cognito service, but so far it hasn't worked.
Please check if you have followed the steps below and if anything was missed.
The free tier of Azure AD won't support onboarding enterprise apps, so we need to upgrade Azure AD.
Go to Enterprise applications > New application > Non-gallery application > activate for an enterprise account (which is the minimum requirement; you can also select Premium) > give the AWS app a name.
Open the application in Azure and go to Single sign-on > choose the SAML option > download the Federation Metadata XML.
Then go to the AWS management console > enable AWS SSO (only certain regions are available to enable SSO, so please check that).
Choose the identity source: change the identity provider > select External identity provider > download the AWS SSO SAML metadata file, which will be used later on the Azure side.
Under IdP SAML metadata, insert the Azure federation metadata file downloaded previously from Azure, then review and confirm.
Now go back to the Azure portal, to the AWS app you previously created > go to Single sign-on > Upload metadata file > select the file we previously downloaded from the AWS portal > click Add > then click Save on the Basic SAML Configuration.
Say yes if a pop-up for testing SSO appears.
Now we can set up automatic provisioning. When a new user is created in Azure AD, it must flow into AWS SSO. We can make a few users part of an AD group in order to test sign-in as those users.
Now go to the AWS portal and click Enable automatic provisioning. Copy the SCIM endpoint and access token. On the Azure side, in the app's Provisioning section, select Automatic as the provisioning mode, paste the SCIM endpoint into Tenant URL along with the access token, click Test connection, and save the configuration.
Then go to Mappings > select Synchronize Azure Active Directory Users to the custom app SSO > leave the default settings. You can select the required attributes: next to externalId, select mailNickname and change the source attribute to objectId (choosing the unique ID on the AD side to flow into AWS); also edit mail and change its source attribute to userPrincipalName.
I. Ensure the user has only one value for phoneNumber/email.
II. Remove duplicate attributes. For example, having two different attributes from Azure AD both mapped to "phoneNumber_____" would result in an error if both attributes have values in Azure AD; having only one attribute mapped to a "phoneNumber____ " attribute resolves the error.
Now go ahead and map users and groups.
Search for Groups in the portal and add a group > Security type > give a group name and description, with membership type Assigned > click Create.
Create two or more groups in the same way if needed. After that, fill these groups with the particular users for each group.
Now create a few users. To do this, search for Users in the portal > New user > give a name > add the user to one of the created groups and assign it.
After creating users and groups, go to Users and groups in your enterprise app (it is recommended to select groups rather than individual users, and then delete unwanted users).
Go back to Provisioning and set the provisioning status to On.
Now map the AD groups to access certain AWS accounts by assigning permission sets.
Go to Permission sets and select the group or users. You can grant existing job-function access or create custom policies.
Now go to Settings in the AWS portal, copy the URL, and open the page for that URL, which redirects to sign-in. Enter the user credentials, and access is possible per the given permissions.
As per the Google Assistant documentation for Smart Home, the agentUserId used in action.devices.QUERY is defined as follows: 'Reflects the unique (and immutable) user ID on the agent's platform. The string is opaque to Google, so if there's an immutable form vs a mutable form on the agent side, use the immutable form (e.g. an account number rather than email).'
However, there can be cases where the same device (with the same agent user ID) is attached to multiple Google Assistant accounts, and in such cases a DISCONNECT request may result in ceasing report state for all accounts. The solution would be to add some unique ID corresponding to the Google Assistant account, but such information is not available in any request.
Has anyone seen similar issue and is my understanding incorrect?
The agentUserId is meant to be the user account on the smart home platform. SHP user '1234' may have a vacuum and two lights, but could be linked to multiple Google accounts.
During the account linking process, you would be expected to give refresh and access tokens to allow Google to have authorized control over these devices. If you assign unique access tokens to each Google account that signs in, you'd be able to determine which Google account a request is coming from.
At that point, once the user disconnects, you can use the access token in the request header to associate that with a specific Google account and only disable reporting for that account while not affecting other accounts.
So yes, the solution is to have a unique ID tied to the account. While this is not passed in the agentUserId, there is already a mechanism to make this association through the authorization system, as sketched below.
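As a rough illustration of that association (the LinkRegistry type and its methods below are entirely hypothetical, not part of any Google SDK), the fulfillment backend can key report state off the bearer token it issued during account linking:

using System.Collections.Generic;

// Hypothetical token store populated during OAuth account linking:
// each Google account that links gets its own access token.
record AccountLink(string AgentUserId, string GoogleLinkId);

class LinkRegistry
{
    private readonly Dictionary<string, AccountLink> _byToken = new();
    private readonly HashSet<(string, string)> _reportingEnabled = new();

    public void OnAccountLinked(string accessToken, AccountLink link)
    {
        _byToken[accessToken] = link;
        _reportingEnabled.Add((link.AgentUserId, link.GoogleLinkId));
    }

    // Called when a fulfillment request carries intent action.devices.DISCONNECT;
    // the bearer token comes from the request's Authorization header.
    public void OnDisconnect(string bearerToken)
    {
        if (!_byToken.TryGetValue(bearerToken, out var link)) return;

        // Stop Report State only for this Google account's link; other Google
        // accounts sharing the same agentUserId keep reporting.
        _reportingEnabled.Remove((link.AgentUserId, link.GoogleLinkId));
        _byToken.Remove(bearerToken);
    }

    public bool ShouldReport(string agentUserId, string googleLinkId) =>
        _reportingEnabled.Contains((agentUserId, googleLinkId));
}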
Alternatively, you could append a key in the agentUserId, i.e. '1234-user@gmail.com'. However, this may have unintended impacts in the Home Graph. In a multi-user home, you may end up seeing the devices duplicated because Google doesn't have the right information to deduplicate them.
I have a service account created through the Google developer console specifically for API access to Google Drive to retrieve documents. However recently I have changed my G-suite Google Drive settings to have the security restriction that documents can only be shared outside of my organization to whitelisted domains rather than it being wide-open for sharing purposes.
Prior to this security setting change, everything was working fine, with my service account accessing documents it had specifically been granted access to. However, after the change, when viewing the sharing settings on a file that it previously had access to, it now says the account cannot be granted access because the policy set prohibits sharing items with this user, as it's not in a compatible whitelisted domain.
I did try whitelisting gserviceaccount.com within my G-suite admin console but this still brought no luck.
Anyone else have a similar issue? Any good solution?
Thanks!
You may want to complete the following steps given in Delegating domain-wide authority to the service account:
Go to your G Suite domain’s Admin console.
Select Security from the list of controls. If you don't see Security listed, select More controls from the gray bar at the bottom of the page, then select Security from the list of controls. If you can't see the controls, make sure you're signed in as an administrator for the domain.
Select Show more and then Advanced settings from the list of options.
Select Manage API client access in the Authentication section.
In the Client Name field enter the service account's Client ID. You can find your service account's client ID in the Service accounts page.
In the One or More API Scopes field enter the list of scopes that your application should be granted access to. For example, if your application needs domain-wide access to the Google Drive API and the Google Calendar API, enter: https://www.googleapis.com/auth/drive, https://www.googleapis.com/auth/calendar.
Click Authorize.
This will give your app the authority to make API calls as users in your domain. However, please note this:
Although you can use service accounts in applications that run from a G Suite domain, service accounts are not members of your G Suite account and aren’t subject to domain policies set by G Suite administrators. For example, a policy set in the G Suite admin console to restrict the ability of G Suite end users to share documents outside of the domain would not apply to service accounts.
See Perform G Suite Domain-Wide Delegation of Authority for more information.
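Once delegation is in place, the service account can impersonate a domain user instead of acting as itself, so domain sharing policies are evaluated against that user. A minimal sketch with the Google.Apis.Drive.v3 .NET client follows; the key file path, the impersonated user, and the read-only scope are placeholder choices:

using System;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Drive.v3;
using Google.Apis.Services;

// Load the service account key and impersonate a real domain user, so the
// request is evaluated as that user rather than as the service account.
GoogleCredential credential = GoogleCredential
    .FromFile("service-account-key.json")           // placeholder path
    .CreateScoped(DriveService.Scope.DriveReadonly) // or auth/drive for full access
    .CreateWithUser("someone@your-domain.com");     // placeholder domain user

var drive = new DriveService(new BaseClientService.Initializer
{
    HttpClientInitializer = credential,
    ApplicationName = "drive-delegation-sample"
});

// List files visible to the impersonated user.
var files = await drive.Files.List().ExecuteAsync();
foreach (var file in files.Files)
    Console.WriteLine($"{file.Name} ({file.Id})");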
We’re currently running the Okta Active Directory agent in order to import our users into Okta.
I'd like to replace this with a custom built process that imports users into a new internal database, for other user-management-related activities, whilst also adding those users to Okta.
Creating the user in Okta is easy, but I also need to get the user's "provider" set to ACTIVE_DIRECTORY, so that Okta delegates authentication to Active Directory.
The documentation (http://developer.okta.com/docs/api/resources/users.html#provider-object) says that the User's Provider field is read-only.
How can I set it?
While you cannot directly manipulate the credential object, you can leverage other available features to achieve the desired result:
Create a group in Okta and configure it as a directory provisioning group. From the designated group, select 'Manage Directories', add the desired directory, and follow the wizard to completion.
Add the created users to the group (using the API), as sketched below.
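For that second step, adding a user to a group is a plain REST call (PUT /api/v1/groups/{groupId}/users/{userId} with no body). A short HttpClient sketch, where the Okta domain, API token, and IDs are placeholders:

using System;
using System.Net.Http;
using System.Net.Http.Headers;

var http = new HttpClient { BaseAddress = new Uri("https://{yourOktaDomain}") };
// Okta API tokens use the SSWS authorization scheme.
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("SSWS", "{api_token}");

// Add an existing user to the directory-provisioning group; no request body.
string groupId = "{groupId}";
string userId = "{userId}";
HttpResponseMessage response = await http.PutAsync(
    $"/api/v1/groups/{groupId}/users/{userId}", content: null);
response.EnsureSuccessStatusCode();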
You unfortunately cannot set this property as we do not allow the creation of Active Directory users through the public API at this point.
If the purpose of the new process is simply to enrich the user's profile, can you not achieve this by letting the AD agent sync the users and enriching the profile directly through the API?