In Visual Studio, when publishing a worker role you need to select a storage account.
In this image I have already created and selected a storage account, but my application doesn't need it. If I delete the storage account, Visual Studio won't let me publish the worker role unless I create and select another one.
Is there a way to publish a worker role without having a storage account?
Is there a way to publish a worker role without having a storage account?
No. You would need a storage account to publish a worker role.
The reason for this is that a worker role deployment needs a storage account where the package file (*.cspkg) will be stored. The deployment process deploys the package file from that storage account. Once the deployment succeeds, you can delete that storage account.
Other than that (as mentioned by Aravind in his answer), you would need a storage account if you have enabled diagnostics for your worker role, as the diagnostics data for that role is stored in a storage account.
In Azure Cloud Services, the diagnostics data is stored in the WADLogsTable (table storage) in the storage account used with the cloud service. You can use a storage explorer to browse through the logs.
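For example, assuming you know the diagnostics storage account name and key (both placeholders below), a rough sketch of querying that table with the Azure CLI:

# <account> and <key> are the diagnostics storage account and its access key
az storage entity query --table-name WADLogsTable --account-name <account> --account-key <key> --num-results 50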
A storage account is also useful if you need storage of your own. Linked resources are easier to manage, and you don't have to remember which storage account is used with which service.
I set up a storage container (blob) and a role assignment (Storage Account Contributor) for an app registration with a client secret, so I can query the blob files in a runbook as a service principal. So far so good. The app registration has API permission to Azure Storage and it ran fine.
I then wanted to check my error handling and the output of the runbook when permissions are missing, so I removed the API permission to Azure Storage from the app registration. And nothing changed at all... the runbook successfully created the storage context and downloaded/uploaded the file without a problem.
After some digging, I noticed that the object ID of the app registration is different when I look at it in Access Control (IAM) of the storage container than when I load the object in Azure Active Directory (see pic below). So I thought there must be some "noise" and removed and re-added the role assignment on the container. I then ran into the error as expected.
After successfully working on my error handling I re-applied the permissions and... the error won't disappear. So I looked at the objects again and, again, the object IDs were different. I had to remove the RBAC assignment and re-add it to reflect the permission change. After re-adding, it's still the same issue: I have different IDs.
Does anyone know why they're different? And why won't it reflect the permission change without a remove and re-add?
Thank you!
Storage Container vs AAD:
An Azure AD app registration is backed by two directory objects: an application and a service principal. As its name implies, the latter is the principal for authentication/authorization. That is why you see two object IDs.
Regarding the access issue: all the principal (user or service) needs is an RBAC role assignment, so adding or removing application permissions won't make a difference.
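If you want to see the two objects side by side, a quick sketch with the Azure CLI (the app ID is a placeholder; the property that holds the object ID is named objectId or id depending on the CLI version):

az ad app show --id <app-id>   # the application object (App Registrations blade)
az ad sp show --id <app-id>    # the service principal (Enterprise Applications / IAM role assignments)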
I've been going through the Azure Blob documentation and Storage Analytics Logging, but I don't see how to enable logging at the user level. There is a company Azure plan and each employee logs in with their own credentials, but when we use Azure Storage Explorer (desktop version) I don't see any log of which user uploaded or deleted which files/folders in the blob. What did we miss here?
https://learn.microsoft.com/en-us/rest/api/storageservices/storage-analytics-log-format#log-entry-format-20
In blob log format version 2.0 only, we have UserPrincipalName.
I did try it; however, the field is a UUID, which is not user-friendly.
Just for your reference.
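If you'd rather pull the logs yourself than use a storage explorer, here's a rough sketch with the Azure CLI (account name, key and log blob name are placeholders). Classic Storage Analytics writes its logs into a hidden container named $logs; you can list and download the log blobs and then look for the user/requester fields in each entry:

# Enable logging for read/write/delete operations on the blob service
az storage logging update --services b --log rwd --retention 7 --account-name <account> --account-key <key>

# Browse and download the log blobs from the hidden $logs container
az storage blob list --container-name '$logs' --account-name <account> --account-key <key> --output table
az storage blob download --container-name '$logs' --name 'blob/2023/01/01/0000/000000.log' --file ./000000.log --account-name <account> --account-key <key>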
I'm trying to apply the 'Application Administrator' role to a service principal to allow it to create other service principals in AD. I would have assumed that having the ability to manage all aspects of app registrations etc., as explained in the docs here: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/users-groups-roles/directory-assign-admin-roles.md, would have allowed me to do this, but I still cannot create new service principals this way.
It looks as if the app has been created when looking in AD App Registrations, but the command errors out with insufficient privileges.
I have tried several approaches through bash and PowerShell, trying to create the AD application first and then creating a service principal from that application ID. I also tried with the 'Global Admin' role, and that works as expected; however, we're trying to limit privileges as much as possible.
The command I'm trying to run in bash is
az ad sp create-for-rbac -n $spn_name --skip-assignment
And the equivalent in PowerShell
New-AzAdServicePrincipal -ApplicationId $appid
From an SPN with only the 'Application Administrator' role assigned.
Creating service principal failed for appid 'http://test-spn1'. Trace followed:
{Trace JSON}
Insufficient privileges to complete the operation.
To grant an application the ability to create, edit and delete all aspects of apps (both Application objects and ServicePrincipal objects, represented in the portal under App Registrations and Enterprise Apps, respectively), you should consider the following two app-only permissions (instead of the directory role):
Application.ReadWrite.All - Create Application and ServicePrincipal objects and manage any Application and ServicePrincipal objects.
Application.ReadWrite.OwnedBy - Create Application and ServicePrincipal objects (and automatically get set as owner), and manage Application and ServicePrincipal objects it is owner of (either because it created them, or because it was assigned as an owner).
These permissions are pretty close to what the Application Administrator directory role allows for users. They're available for both the Azure AD Graph API (which is the API used by the Azure CLI, the Azure AD PowerShell module (AzureAD), and the Azure PowerShell module (Az)) and the Microsoft Graph API (which you should not yet use for production scenarios here, as the application and servicePrincipal entities are still in beta). The permissions are documented here:
* https://learn.microsoft.com/graph/permissions-reference#application-resource-permissions
Warning: Both of these permissions are very highly privileged. An app that can manage Application and ServicePrincipal objects can add credentials to those objects (keyCredentials and passwordCredentials) and, in doing so, exercise any access that has been granted to those other apps. If an app granted Application.ReadWrite.All is compromised, pretty much all apps are compromised.
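If you go the permission route, here's a rough sketch of granting one of them with the Azure CLI; the app ID and role ID are placeholders, and 00000002-0000-0000-c000-000000000000 is the Azure AD Graph API, whose appRoles list contains the IDs for Application.ReadWrite.All and Application.ReadWrite.OwnedBy:

# Look up the id of the app role you want (e.g. Application.ReadWrite.OwnedBy)
az ad sp show --id 00000002-0000-0000-c000-000000000000 --query "appRoles[].{value:value, id:id}" --output table

# Grant it to your app as an application ("Role") permission, then consent to it
az ad app permission add --id <app-id> --api 00000002-0000-0000-c000-000000000000 --api-permissions <role-id>=Role
az ad app permission admin-consent --id <app-id>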
I want to back up my AWS snapshots to a completely separate AWS account for additional security (if my AWS credentials were acquired, someone could delete all my snapshots and volumes), but I'm a bit stumped on how to do this.
There doesn't seem to be a way to store a volume or snapshot in S3 such that another user could access that data in S3 and store it in a separate AWS account.
Does anyone have any suggestions on how to achieve this?
Thanks
1. Create an IAM user and an S3 bucket from your secret (backup) account.
2. Add an IAM policy to the newly created bucket, allowing your newly created IAM user to put objects but denying it the ability to delete them (see the policy sketch below).
3. Use the IAM user account to upload your backups to S3.
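A rough sketch of step 2 with the AWS CLI (bucket name, account ID and user name are all placeholders): a bucket policy that lets the backup user upload and list objects but explicitly denies deletes.

cat > backup-bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBackupUploads",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:user/backup-writer" },
      "Action": ["s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-backup-bucket",
        "arn:aws:s3:::my-backup-bucket/*"
      ]
    },
    {
      "Sid": "DenyDeletes",
      "Effect": "Deny",
      "Principal": { "AWS": "arn:aws:iam::111111111111:user/backup-writer" },
      "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
      "Resource": "arn:aws:s3:::my-backup-bucket/*"
    }
  ]
}
EOF
aws s3api put-bucket-policy --bucket my-backup-bucket --policy file://backup-bucket-policy.json

An explicit Deny wins over any Allow, so even if the user later picks up broader permissions, deletes from this bucket stay blocked.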
You can share any EBS snapshot with another account by adding that account to the snapshot's create-volume permissions. Once the snapshot is shared, the other account can either copy the snapshot into their own account or create a volume directly from it.
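For example, with the AWS CLI (snapshot ID and account ID are placeholders):

# From the source account: allow the backup account to create volumes from the snapshot
aws ec2 modify-snapshot-attribute --snapshot-id snap-0123456789abcdef0 --attribute createVolumePermission --operation-type add --user-ids 111111111111

# From the backup account: copy the shared snapshot so you own an independent copy
aws ec2 copy-snapshot --source-region us-east-1 --source-snapshot-id snap-0123456789abcdef0 --description "Backup copy"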
I want to set up Windows Azure development storage on my dev machine, but I don't want to install SQL Server on it because I want to use an existing instance on another machine. Is it possible to set up the development storage service so that it uses the SQL Server on another machine?
I tried calling dsinit with the /sqlinstance argument set to the remote machine, but it doesn't have any argument to allow me to specify the login credentials.
You can use the undocumented command-line argument /server:.
Example: dsinit /server:remote-sql-name
Added reservation for http://127.0.0.1:10000/ in user account DOMAIN\username.
Added reservation for http://127.0.0.1:10001/ in user account DOMAIN\username.
Added reservation for http://127.0.0.1:10002/ in user account DOMAIN\username.
Creating database DevelopmentStorageDb20110816...
Granting database access to user DOMAIN\username...
The login already has an account under a different user name.
Changed database context to 'DevelopmentStorageDb20110816'.
Adding database role for user DOMAIN\username...
User or role 'user' does not exist in this database.
Changed database context to 'DevelopmentStorageDb20110816'.
Initialization successful. The storage emulator is now ready for use.
I think the short answer is no. Certainly, dsinit is designed to work only on your local machine.
Can you set up the remote database server to use Windows authentication and add the currently logged-in user as an admin on that server? That may be enough to fool it (but I wouldn't hold my breath).
If this doesn't work and you still don't want SQL Server on your development machine, then using actual Azure storage is not a bad idea. It does cost some money, but not much, and it avoids some of the kinks that occur only in development storage.
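If you go that route, it's essentially just a connection-string change; instead of pointing your role at the emulator, point it at a real storage account (account name and key below are placeholders):

Development storage emulator:
UseDevelopmentStorage=true

Real Azure storage account:
DefaultEndpointsProtocol=https;AccountName=<your-account>;AccountKey=<your-key>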