Cannot trigger Azure data factory pipeline on blob creation - azure-blob-storage

I want to trigger a data factory pipeline when a blob is created.
Following the article here, I am getting an error.
I have enabled EventGrid and DataFactory on the subscription. The data factory has been given the Owner role on the entire storage account (using the new RBAC interface where you don't enter the managed object id, you just browse the names of your resources). Nothing has been done to the container-level security. In the storage account settings, under Firewalls and Virtual Networks, access is allowed from All Networks. I don't have any vnets or other virtual networking constructs defined in the resource group.
In the same data factory, I can create data sets that point to the same container, and can test connection and browse data just fine. But when trying to create the trigger, I am seeing this:
"Unable to list files and show data preview (...)"
[Screenshot: the error shown in the data preview pane]
I am the person who created the resource group and all of these resources. All resources are created in the same region (US Central.) Everything has been done through the Azure portal.

Related

SingleInstance() not working only on cluster

I am building a solution that implements a RESTful service for interacting with metadata related to federated identity.
I have a class that is registered with Autofac like this:
builder.RegisterType<ExternalIdpStore>()
.As<IExternalIdpStore>()
.As<IStartable>()
.SingleInstance();
I have a service class (FedApiExtIdpSvc) that implements a service that is a dependency of an ASP.NET controller class. That service class has IExternalIdpStore as a dependency. When I build and run my application from Visual Studio (in Debug mode), I get one instance of ExternalIdpStore injected and its constructor executes only once. When I initiate a controller action that ends up calling a particular method of my ExternalIdpStore class, it works just fine.
When my application is built via Azure DevOps (in Release mode) and deployed to a Kubernetes cluster running under Linux, I initially see one call to the ExternalIdpStore class' constructor right at application startup. When I initiate the same controller action as above, I see another call to the ExternalIdpStore constructor, and when the same method of the class is called, it fails because the data store hasn't been initialized (it is initialized from the class' Start method, which implements IStartable).
I have added a field to the class that is initialized in the constructor to a GUID so I can confirm that I have two different instances on the cluster. I log this value in the constructor, in the startup code, and in the method eventually called when the controller action is initiated. Logging confirms that when I run from Visual Studio under Windows, there is just one instance and the same GUID is logged in all three places. When it runs on the cluster under Linux, the first two log entries reference the same GUID, but the log entry from the method called when the controller action is initiated shows a different GUID, and a key object reference needed to access the data store is null.
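For reference, a minimal sketch of the diagnostic field described above, assuming an injected ILogger (the logger and message wording are illustrative, not the actual service code):

using System;
using Autofac;
using Microsoft.Extensions.Logging;

public class ExternalIdpStore : IStartable   // also implements IExternalIdpStore in the real code (members omitted here)
{
    // Diagnostic only: a per-instance id so duplicate instances show up in the logs.
    private readonly Guid _instanceId = Guid.NewGuid();
    private readonly ILogger<ExternalIdpStore> _logger;

    public ExternalIdpStore(ILogger<ExternalIdpStore> logger)
    {
        _logger = logger;
        _logger.LogInformation("ExternalIdpStore constructed, instance {InstanceId}", _instanceId);
    }

    // IStartable: the data store is initialized here when the container is built.
    public void Start()
    {
        _logger.LogInformation("ExternalIdpStore starting, instance {InstanceId}", _instanceId);
    }
}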
One of my colleagues thought that I might have more than one registration. So I removed the explicit registration I showed above. The dependency failed to resolve when tested.
I am at a loss as to what to try next, or how I might add some additional logging to diagnose what is going on.
So here's what was going on:
The reason for getting two sets of log entries was that we have two Kubernetes clusters sending log entries to Splunk. This service was deployed to both. The sets of log entries were coming from pods in different clusters.
My code was creating a Cosmos DB account client, and was not setting the connection mode, so it was defaulting to direct.
The log entries that showed successful execution were for the cluster running in Azure - in Azure Kubernetes Service (AKS). Accessing the Cosmos DB account from AKS in direct connection mode was succeeding.
The log entries that were failing came from our on-prem Kubernetes cluster. Attempting to connect to the Cosmos DB account from there was failing because that cluster is on our corporate network, which has security restrictions that prevent direct connection mode from working.
The exception thrown when attempting to connect from our on-prem cluster was essentially "lost" because it was from a process running on a background thread.
Modifying the logic to add a try-catch around the connection attempt, and passing the exception back to the caller, allowed the exception from the failed direct connection to be logged.
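A minimal sketch of that change, assuming the Microsoft.Azure.Cosmos v3 SDK (the class, method, and parameter names are illustrative, not the actual service code):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Extensions.Logging;

public class CosmosStoreConnector
{
    // Make the connection mode explicit and surface connection failures to the caller
    // instead of losing them on a background thread.
    public async Task<CosmosClient> ConnectAsync(string endpoint, string key, ILogger logger)
    {
        try
        {
            var options = new CosmosClientOptions
            {
                // Direct was the implicit default in our case; setting it explicitly makes the
                // behaviour obvious. Gateway can be substituted where network restrictions
                // block direct connectivity.
                ConnectionMode = ConnectionMode.Direct
            };

            var client = new CosmosClient(endpoint, key, options);

            // Force a round trip now so a connection failure is observed here,
            // not on first use from a background thread.
            await client.ReadAccountAsync();
            return client;
        }
        catch (Exception ex)
        {
            // Log and rethrow so the caller (the Start/initialization path) sees the failure.
            logger.LogError(ex, "Failed to connect to the Cosmos DB account");
            throw;
        }
    }
}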
Biggest lesson learned: When something "strange" or "odd" or "mysterious" or "unusual" is happening, start looking at your code from the perspective of where it could be throwing an exception that isn't caught - especially if you have background processes!

Azure Storage Explorer - Inadequate resource type access

I am attempting to use the Microsoft Azure Storage Explorer, attaching with a SAS URI. But I always get the error:
Inadequate resource type access. At least service-level ('s') access is required.
Here is my SAS URI with portions obfuscated:
https://ti<...>hare.blob.core.windows.net/?sv=2018-03-28&ss=b&srt=co&sp=rwdl&se=2027-07-01T00:00:00Z&st=2019-07-01T00:00:00Z&sip=52.<...>.235&spr=https&sig=yD%2FRUD<...>U0%3D
And here is my connection string with portions obfuscated:
BlobEndpoint=https://tidi<...>are.blob.core.windows.net/;QueueEndpoint=https://tidi<...>hare.queue.core.windows.net/;FileEndpoint=https://ti<...>are.file.core.windows.net/;TableEndpoint=https://tid<...>hare.table.core.windows.net/;SharedAccessSignature=sv=2018-03-28&ss=b&srt=co&sp=rwdl&se=2027-07-01T00:00:00Z&st=2019-07-01T00:00:00Z&sip=52.<...>.235&spr=https&sig=yD%2FRU<...>YU0%3D
It seems like the problem is with the construction of my URI/endpoints/connection string, rather than with the permissions granted to me on the server, because the error displays instantaneously when I click Next. I do not believe it even tried to reach out to the server.
What am I doing wrong? (As soon as I get this working, I'll be using the URI/etc to embed in my C# app for programmatic access.)
What you need is service-level access: the 's' in the srt part of the URI.
Your URI has srt=co (container and object) but is missing the 's' (service) part, so you need to create a new SAS token. It can be generated in the portal, the Azure CLI, or PowerShell.
In the portal, it is this part: open the storage account and select what you need:
Allowed services (if you are looking for blob)
Blob
Allowed resource types
Service (make sure this one is activated)
Container
Object
Allowed permissions (these grant everything)
Read
Write
Delete
List
Add
Create
If you need more info look here:
https://learn.microsoft.com/en-us/rest/api/storageservices/create-account-sas?redirectedfrom=MSDN
If you want to create the SAS token with the Azure CLI, use this:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli
If you want to create the SAS token with PowerShell, use this:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-user-delegation-sas-create-powershell
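Since the plan is to embed this in a C# app anyway, here is a minimal sketch of generating an account SAS that includes the service resource type, assuming the Azure.Storage.Blobs / Azure.Storage.Common SDK (the account name and key are placeholders):

using System;
using Azure.Storage;
using Azure.Storage.Sas;

class GenerateAccountSas
{
    static void Main()
    {
        // Placeholders: substitute your storage account name and key.
        var credential = new StorageSharedKeyCredential("<account-name>", "<account-key>");

        var sasBuilder = new AccountSasBuilder
        {
            Services = AccountSasServices.Blobs,
            // The important part: include Service ('s') along with Container and Object,
            // which yields srt=sco in the resulting token.
            ResourceTypes = AccountSasResourceTypes.Service
                          | AccountSasResourceTypes.Container
                          | AccountSasResourceTypes.Object,
            StartsOn = DateTimeOffset.UtcNow,
            ExpiresOn = DateTimeOffset.UtcNow.AddYears(1),
            Protocol = SasProtocol.Https
        };
        sasBuilder.SetPermissions(AccountSasPermissions.Read
                                | AccountSasPermissions.Write
                                | AccountSasPermissions.Delete
                                | AccountSasPermissions.List
                                | AccountSasPermissions.Add
                                | AccountSasPermissions.Create);

        // Append this query string to the account's blob endpoint to build the SAS URI.
        string sasToken = sasBuilder.ToSasQueryParameters(credential).ToString();
        Console.WriteLine(sasToken);
    }
}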
I had a similar issue trying to connect to a blob container using a Shared Access Signature (SAS) URL, and this worked for me:
Instead of generating the SAS URL in Azure Portal, I used Azure Storage Explorer.
Right-click the container that you want to share -> "Get Shared Access Signature"
Select the expiry time and permissions and click Create
This URL should work when your client/user tries to connect to the container.
Cheers
I had the same problem and managed to get this to work by hacking the URL and changing "srt=co" to "srt=sco". It seems to need the "s".

OCI ObjectStorage required privilege for CopyObject?

I am trying to copy an object from the Phoenix region to Ashburn. The admin for the tenant is still unable to perform this action. Am I missing any privileges?
I am seeing an error in the Work Request: "The Service Cannot Access the Source Bucket".
Do I need to add additional policy statements?
Yes, the service needs access too.
You can refer to the documentation here, specifically:
Service Permissions
To enable object copy, you must authorize the service to manage objects on your behalf. To do so, create the following policy:
allow service objectstorage-<region_name> to manage object-family in compartment <compartment_name>
Because Object Storage is a regional service, you must authorize the Object Storage service for each region that will be carrying out copy operations on your behalf. For example, you might authorize the Object Storage service in region us-ashburn-1 to manage objects on your behalf. Once you do this, you will be able to initiate the copy of an object stored in a us-ashburn-1 bucket to a bucket in any other region, assuming that your user account has the required permissions to manage objects within the source and destination buckets.
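For example, the us-ashburn-1 case described in the quoted documentation would be (the compartment name is a placeholder):

allow service objectstorage-us-ashburn-1 to manage object-family in compartment <compartment_name>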

How to use snapshot and caching functions without actually storing credentials in SSRS

I have developed a few test reports on my local machine. I came across two mechanisms called Snapshot and Caching. I am trying to implement those in my reports, but every time I try to set up caching it throws the error "credentials need to be stored."
Can we use caching and snapshots with Windows credentials? If so, what is the approach?
My local machine details:
Server name: (local)
Authentication: Windows
Name and password: greyed out
My report server URL: satish-pc/reportserver
DB: Adventure Works
Scheduled snapshots and caching plans mean the report is executed on an automated basis and the results are stored for easier/faster retrieval later. As the executions are automated and unattended, they need connections with stored credentials, since there is no user sitting at the computer at run time to punch in credentials. So, in order to use snapshots or scheduled caches, you will need to create a data source that has credentials stored in it. In Report Manager, you can edit the report's data source on the Report Properties page, or the shared data source's connection info on its own properties page.

FileNet P8 File/move document from Object Store to Case folder

I'm trying to dynamically move a document that is in my CMTOS Object Store in my Content Engine to a case folder in a solution in IBM Advanced Case Manager.
The document is transferred through a web service from a remote server to the CMTOS Object Store.
I have heard about subscriptions... I mean, creating a document class, creating a subscription (in Content Engine Manager), and then opening IBM Process Designer through Workplace XT and adding an attachment in my workflow properties, but it doesn't seem to work.
I have been searching for a few days on Google, Redbooks, ECM Place and IBM developerWorks, but I haven't found any procedure for doing that.
filenet events and subscriptions
The above link contains the events and subscriptions how-to. You have to create a subscription for the creation event on the class you want to file into the case folder. Then CE will take care of everything else; there is no need for Process Designer and workflows.
Some additional information on Case Manager subscriptions: when you deploy a Case Manager solution, it automatically creates subscriptions in the Case Manager object stores, which in turn take care of filing the case documents into the case folder. There is no need to create an additional subscription for filing case documents.
