I am trying to copy an object from the Phoenix region to Ashburn. The admin for the tenancy is still unable to perform this action. Am I missing any privileges?
I am seeing an error in the Work Request: "The Service Cannot Access the Source Bucket".
Do I need to add additional policy statements?
Yes, the service needs access too.
You can refer to the documentation here, specifically:
Service Permissions
To enable object copy, you must authorize the service to manage objects on your behalf. To do so, create the following policy:
allow service objectstorage-<region_name> to manage object-family in compartment <compartment_name>
Because Object Storage is a regional service, you must authorize the Object Storage service for each region that will be carrying out copy operations on your behalf. For example, you might authorize the Object Storage service in region us-ashburn-1 to manage objects on your behalf. Once you do this, you will be able to initiate the copy of an object stored in a us-ashburn-1 bucket to a bucket in any other region, assuming that your user account has the required permissions to manage objects within the source and destination buckets.
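For the scenario in the question (copying out of the Phoenix region), the concrete statement would follow the documented pattern above, something like this sketch; the compartment name is a placeholder:
allow service objectstorage-us-phoenix-1 to manage object-family in compartment <your_compartment_name>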
I have the following topics, created on the IBM MQ Docker version:
dev/test/sys1
dev/test/sys2
I am trying to create subscriber from XMS .NET APIs using the following code:
destination = sessionWMQ.CreateTopic("dev/test/#");
The following exception appears. Based on "Reason: 2035" it is permissions-related, but I am not able to figure out which permission I have to grant, or where:
XMSException caught: IBM.XMS.IllegalStateException: Failed to
subscribe to topic dev/# using MQSUB. There may have been a problem
creating the subscription due to it being used by another message
consumer. Make sure any message consumers using this subscription are
closed before trying to create a new subscription under the same name.
Please see the linked exception for more information.
If you receive an error 2035 (MQRC_NOT_AUTHORIZED) there will be a corresponding message in the queue manager error log AMQERR01.LOG. It will say something like this:-
AMQ8009: Entity 'mqgusr1' has insufficient authority to access topic string
'dev/test/#'.
EXPLANATION:
The specified entity is not authorized to access the required topic. The
following requested permissions are unauthorized: sub
ACTION:
Ensure that the correct level of authority has been set for this entity against
appropriate topic objects, or ensure that the entity is a member of a privileged
group.
Specifically, this error message tells you the user ID, the object name, and the missing authorization. Using this information, you can almost construct the command you need. You need one more piece of information: the name of the group the user is in that you wish to grant the authority to. It is always recommended to grant authorities to group names rather than user names; otherwise you may end up with too many authorities to manage, or, perhaps worse, more users than you expected gaining the authority you granted, because the primary group of a user can be something broad like 'staff'.
Here's the command, assuming that 'mqgusr1' from my error message is in the group 'mqgapp', and that this group is suitable for being granted authority to subscribe to a topic.
SET AUTHREC PROFILE(SYSTEM.BASE.TOPIC) OBJTYPE(TOPIC) GROUP('mqgapp') AUTHADD(SUB)
It is worth mentioning at this point that adding topic-related authorities to SYSTEM.BASE.TOPIC allows the group in question to use any topic available, because this object represents the root of the topic tree. If you wish to restrict access to only certain parts of the topic tree (recommended), you should instead create a topic object for the section of the topic tree you want to use, and grant the authority there instead, thus the following commands:
SET AUTHREC PROFILE(SYSTEM.BASE.TOPIC) OBJTYPE(TOPIC) GROUP('mqgapp') AUTHRMV(SUB)
DEFINE TOPIC(DEV.TEST) TOPICSTR('dev/test')
SET AUTHREC PROFILE(DEV.TEST) OBJTYPE(TOPIC) GROUP('mqgapp') AUTHADD(SUB)
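To verify the result, you can display the authority records with the dspmqaut control command; this is just a sketch, assuming a queue manager named QM1 and the group above:
dspmqaut -m QM1 -n DEV.TEST -t topic -g mqgapp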
I want to trigger a data factory pipeline when a blob is created.
Following the article here, I am getting an error.
I have enabled EventGrid and DataFactory on the subscription. The data factory is assigned the Owner role for the entire storage account (using the new RBAC interface, where you don't enter the managed object ID; you just browse the names of your resources). Nothing has been done to container-level security. In the storage account settings, under Firewalls and Virtual Networks, access is set to allow from All Networks. I don't have any VNets or other virtual networking constructs defined in the resource group.
In the same data factory, I can create data sets that point to the same container, and can test connection and browse data just fine. But when trying to create the trigger, I am seeing this:
"Unable to list files and show data preview (...)"
(Screenshot: the error shown in the data preview pane.)
I am the person who created the resource group and all of these resources. All resources are in the same region (US Central). Everything has been done through the Azure portal.
I am attempting to use Microsoft Azure Storage Explorer, attaching with a SAS URI, but I always get the error:
Inadequate resource type access. At least service-level ('s') access
is required.
Here is my SAS URI with portions obfuscated:
https://ti<...>hare.blob.core.windows.net/?sv=2018-03-28&ss=b&srt=co&sp=rwdl&se=2027-07-01T00:00:00Z&st=2019-07-01T00:00:00Z&sip=52.<...>.235&spr=https&sig=yD%2FRUD<...>U0%3D
And here is my connection string with portions obfuscated:
BlobEndpoint=https://tidi<...>are.blob.core.windows.net/;QueueEndpoint=https://tidi<...>hare.queue.core.windows.net/;FileEndpoint=https://ti<...>are.file.core.windows.net/;TableEndpoint=https://tid<...>hare.table.core.windows.net/;SharedAccessSignature=sv=2018-03-28&ss=b&srt=co&sp=rwdl&se=2027-07-01T00:00:00Z&st=2019-07-01T00:00:00Z&sip=52.<...>.235&spr=https&sig=yD%2FRU<...>YU0%3D
It seems like the problem is with the construction of my URI/endpoints/connection string rather than with the permissions granted to me on the server: when I click Next, the error displays instantaneously, so I do not believe it even tried to reach the server.
What am I doing wrong? (As soon as I get this working, I'll be using the URI/etc to embed in my C# app for programmatic access.)
What you are missing is service-level access: the "srt" part of the URI.
The URI you have has an srt of "co" (container and object), but it also needs the "s" (service) part. You need to create a new SAS token; it can be generated in the portal, the Azure CLI, or PowerShell.
In the portal, go into the storage account and select what you need:
Allowed services (if you are looking for blob):
    Blob
Allowed resource types:
    Service (make sure this one is selected)
    Container
    Object
Allowed permissions (all of them, to do everything):
    Read
    Write
    Delete
    List
    Add
    Create
(Screenshot: example of where to find these settings in the portal.)
If you need more info, look here:
https://learn.microsoft.com/en-us/rest/api/storageservices/create-account-sas?redirectedfrom=MSDN
If you'd like to create the SAS key with the Azure CLI, use this:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli
If you'd like to create the SAS key with PowerShell, use this:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-user-delegation-sas-create-powershell
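As a sketch of the CLI route (this generates an account-level SAS rather than a user delegation SAS), something like the following should produce a token whose srt includes "s"; the account name and expiry are placeholders:
az storage account generate-sas --account-name mystorageaccount --services b --resource-types sco --permissions rwdlac --expiry 2027-07-01T00:00Z --https-only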
I had a similar issue trying to connect to a blob container using a Shared Access Signature (SAS) URL, and this worked for me:
Instead of generating the SAS URL in Azure Portal, I used Azure Storage Explorer.
Right click the container that you want to share -> "Get Shared Access Signature"
Select the expiry time and permissions and click Create.
This URL should work when your client/user tries to connect to the container.
I had the same problem and managed to get this to work by hacking the URL and changing "srt=co" to "srt=sco". It seems to need the "s".
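In other words, with the SAS from the question, only the resource-types parameter changes (everything else stays the same):
before: ...&ss=b&srt=co&sp=rwdl&...
after:  ...&ss=b&srt=sco&sp=rwdl&...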
We are using IAM permissions for groups and users with great success for S3, SQS, Redshift, etc. IAM for S3 in particular gives a lovely level of detail by path and bucket.
I am doing some head scratching when it comes to EC2 permissions.
How do I create a permission that allows an IAM user to:
create up to n instances
do whatever he/she wants on those instances only (terminate / stop / describe)
...and makes it impossible for him/her to affect our other instances (change termination / terminate / etc.) ?
I've been trying Conditions on tag ("Condition": {"StringEquals": {"ec2:ResourceTag/purpose": "test"}}), but that means that all of our tools need to be modified to add that tag at creation time.
Is there a simpler way?
Limiting the number of instances an IAM user can create is not possible (unfortunately). All you have is a limit on the number of instances in the entire account.
Limiting permissions to specific instances is possible, but you have to specify the permissions for each instance-ID, using this format:
arn:aws:ec2:region:account:instance/instance-id
More information is available here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-iam-actions-resources.html
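As an illustration of that format, here is a sketch of a policy that allows stop/start/terminate on one specific instance only; the region, account ID, and instance ID are placeholders:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances"
      ],
      "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0"
    }
  ]
}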
I've tried using DescribeInstances, but the response does not contain user data. Is there any way to retrieve this user data?
My use case: I am requesting spot instances and assigning different user data to each EC2 instance for some kind of automation, and then I want to tag the name of each instance according to this user data. Based on my understanding, a create-tags request requires an InstanceId, which is not available at the time I make the request for a spot instance.
So I'm wondering whether there is any way to get the user data of a running instance without SSHing into it...
The DescribeInstanceAttribute API will provide you with the user data.
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeInstanceAttribute.html
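For example, a minimal boto3 sketch (the region and instance ID are placeholders; note that the attribute comes back base64-encoded):
import base64
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Ask for the 'userData' attribute of one instance; the value is
# returned base64-encoded under UserData -> Value.
resp = ec2.describe_instance_attribute(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Attribute="userData",
)
# Value may be absent if the instance has no user data.
encoded = resp["UserData"].get("Value", "")
print(base64.b64decode(encoded).decode("utf-8"))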