Windows Azure cache error - "Cache referred to does not exist. Contact administrator or use the Cache administration tool to create a Cache."

I have an application on the Windows Azure cloud and I'm using Windows Azure Co-Located Cache.
Sometimes, when I publish the website/web service, this error appears when I call the DataCacheFactory.GetCache method:
Cache referred to does not exist. Contact administrator or use the Cache administration tool to create a Cache.
The problem sometimes goes away after a few moments, but sometimes it never resolves and I have to publish the projects again.
The stacktrace is:
at Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ErrStatus errStatus, Guid trackingId, Exception responseException, Byte[][] payload, EndpointID destination)
at Microsoft.ApplicationServer.Caching.DataCacheFactory.EstablishConnection(IEnumerable`1 servers, RequestBody request, Func`3 sendMessageDelegate, DataCacheReadyRetryPolicy retryPolicy)
at Microsoft.ApplicationServer.Caching.SocketClientProtocol.Initialize(IEnumerable`1 servers)
at Microsoft.ApplicationServer.Caching.DataCacheFactory.GetCache(String cacheName, CreateNewCacheDelegate cacheCreationDelegate, DataCacheInitializationViaCopyDelegate initializeDelegate)

See this link to check whether it helps you:
http://www.windowsazure.com/en-us/develop/net/how-to-guides/cache/#comment-743576866
"we were missing the required blob storage container on local devstorage. After creating the following container: 'cacheclusterconfigs' everything seems to be working now"
The 'cacheclusterconfigs' container is created by the service internally; you may have accidentally deleted it.
Note: IMO, please also verify the cache name. By default you will be using the cache named 'default'.
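For reference, a minimal sketch of resolving the co-located cache by name, assuming the <dataCacheClients> client configuration is already present in web.config (this is illustrative, not your exact code):

using Microsoft.ApplicationServer.Caching;

// The factory reads the <dataCacheClients> section from web.config/app.config.
DataCacheFactory factory = new DataCacheFactory();

// With co-located caching the out-of-the-box cache is literally named "default",
// so GetDefaultCache() and GetCache("default") are equivalent here.
DataCache cache = factory.GetDefaultCache();

cache.Put("greeting", "hello");
string greeting = (string)cache.Get("greeting");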


SingleInstance() not working only on cluster

I am building a solution that implements a RESTful service for interacting with metadata related to federated identity.
I have a class that is registered with Autofac like this:
builder.RegisterType<ExternalIdpStore>()
.As<IExternalIdpStore>()
.As<IStartable>()
.SingleInstance();
I have a service class (FedApiExtIdpSvc) that is a dependency of an ASP.NET controller class, and that service class has IExternalIdpStore as a dependency. When I build and run my application from Visual Studio (in Debug mode), I get one instance of ExternalIdpStore injected and its constructor executes only once. When I initiate a controller action that ends up calling a particular method of my ExternalIdpStore class, it works just fine.
When my application is built via Azure DevOps (in Release mode) and deployed to a Kubernetes cluster running under Linux, I initially see one call to the ExternalIdpStore class's constructor right at application startup. When I initiate the same controller action as above, I see another call to ExternalIdpStore's constructor, and when the same method of the class is called, it fails because the data store hasn't been initialized (it's initialized from the class's Start method, which implements IStartable).
I have added a field to the class that gets initialized in the constructor to a GUID so I can confirm that I have two different instances when on cluster. I log this value in the constructor, in the Startup code, and in the method eventually called when the controller action is initiated. Logging is confirming that when I run from Visual Studio under Windows, there is just one instance, and the same GUID is logged in all three places. When it runs on cluster under Linux, logging confirms that the first two log entries reference the same GUID, but the log entry from the method called when the controller action is initiated shows a different GUID, and that a key object reference needed to access the data store is null.
One of my colleagues thought that I might have more than one registration. So I removed the explicit registration I showed above. The dependency failed to resolve when tested.
I am at a loss as to what to try next, or how I might add some additional logging to diagnose what is going on.
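For context, here is a rough sketch of the shape being described (member names are illustrative, not the actual code): the store only becomes usable after Autofac calls Start() on the singleton, so a second, separately constructed instance would still have a null client.

using System;
using Autofac;

public interface IExternalIdpStore { }

public class ExternalIdpStore : IExternalIdpStore, IStartable
{
    // Logged in the constructor to detect duplicate instances on the cluster.
    public Guid InstanceId { get; } = Guid.NewGuid();

    // Hypothetical data-store client; remains null until Start() runs.
    private object _client;

    public void Start()
    {
        // Autofac calls this once for the singleton when the container is built.
        _client = CreateDataStoreClient();
    }

    private static object CreateDataStoreClient() => new object();
}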
So here's what was going on:
The reason for getting two sets of log entries was that we have two Kubernetes clusters sending log entries to Splunk. This service was deployed to both. The sets of log entries were coming from pods in different clusters.
My code was creating a Cosmos DB account client, and was not setting the connection mode, so it was defaulting to direct.
The log entries that showed successful execution were for the cluster running in Azure - in Azure Kubernetes Service (AKS). Accessing the Cosmos DB account from AKS in direct connection mode was succeeding.
The log entries that were failing were running in our on-prem Kubernetes cluster. Attempting to connect to the Cosmos DB account was failing because it's on our corporate network which has security restrictions that were preventing direct connection mode from working.
The exception thrown when attempting to connect from our on-prem cluster was essentially "lost" because it was from a process running on a background thread.
Modifying the logic to add a try-catch around the connection attempt, and passing the exception back to the caller, allowed logging the exception related to direct connection mode failing.
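A sketch of the kind of change described, not the exact code: wrap the connection attempt so the failure is surfaced to the caller rather than lost on a background thread. Forcing Gateway mode is shown as one possible workaround for the on-prem network restrictions, not necessarily what was done here; names and structure are illustrative.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static async Task<CosmosClient> ConnectAsync(string endpoint, string key)
{
    var client = new CosmosClient(endpoint, key, new CosmosClientOptions
    {
        // The v3 SDK defaults to Direct; Gateway (HTTPS) is friendlier to
        // restrictive corporate networks.
        ConnectionMode = ConnectionMode.Gateway
    });

    try
    {
        // Cheap round trip that fails fast if the account is unreachable.
        await client.ReadAccountAsync();
        return client;
    }
    catch (Exception ex)
    {
        // Re-throw with context so the startup path logs the real error
        // instead of losing it on a background thread.
        throw new InvalidOperationException("Could not connect to Cosmos DB.", ex);
    }
}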
Biggest lesson learned: When something "strange" or "odd" or "mysterious" or "unusual" is happening, start looking at your code from the perspective of where it could be throwing an exception that isn't caught - especially if you have background processes!

Azure Storage Explorer - Inadequate resource type access

I am attempting to use the Microsoft Azure Storage Explorer, attaching with a SAS URI. But I always get the error:
Inadequate resource type access. At least service-level ('s') access
is required.
Here is my SAS URI with portions obfuscated:
https://ti<...>hare.blob.core.windows.net/?sv=2018-03-28&ss=b&srt=co&sp=rwdl&se=2027-07-01T00:00:00Z&st=2019-07-01T00:00:00Z&sip=52.<...>.235&spr=https&sig=yD%2FRUD<...>U0%3D
And here is my connection string with portions obfuscated:
BlobEndpoint=https://tidi<...>are.blob.core.windows.net/;QueueEndpoint=https://tidi<...>hare.queue.core.windows.net/;FileEndpoint=https://ti<...>are.file.core.windows.net/;TableEndpoint=https://tid<...>hare.table.core.windows.net/;SharedAccessSignature=sv=2018-03-28&ss=b&srt=co&sp=rwdl&se=2027-07-01T00:00:00Z&st=2019-07-01T00:00:00Z&sip=52.<...>.235&spr=https&sig=yD%2FRU<...>YU0%3D
It seems like the problem is with the construction of my URI/endpoints/connection string rather than with the permissions granted to me on the server, because the error displays instantaneously when I click Next. I do not believe it even tried to reach out to the server.
What am I doing wrong? (As soon as I get this working, I'll be using the URI/etc to embed in my C# app for programmatic access.)
What you are missing is the service resource type - the "s" in the "srt" part of the URI.
Your URI has srt=co (container and object) and also needs the "s" (service) part. You need to create a new SAS key; it can be generated in the portal, the Azure CLI, or PowerShell.
In the portal is this part:
You have to go into the storage account and select what you need:
Allowed services (if you are looking for blob)
Blob
Allowed resource types
Service (make sure this one is activated)
Container
Object
Allowed permissions (this to do everything)
Read
Write
Delete
List
Add
Create
Example where to look
If you need more info look here:
https://learn.microsoft.com/en-us/rest/api/storageservices/create-account-sas?redirectedfrom=MSDN
If you'd like to create the SAS key with the Azure CLI, use this:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli
If you'd like to create the SAS key with PowerShell, use this:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-user-delegation-sas-create-powershell
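Since the question mentions embedding this in a C# app, here is a minimal sketch of generating an account SAS with the service ('s') resource type included, using the Azure.Storage.Sas package; the account name and key are placeholders:

using System;
using Azure.Storage;
using Azure.Storage.Sas;

var sasBuilder = new AccountSasBuilder
{
    Services = AccountSasServices.Blobs,
    // srt=sco: service, container and object - the missing "s" is what
    // Storage Explorer was complaining about.
    ResourceTypes = AccountSasResourceTypes.Service
                  | AccountSasResourceTypes.Container
                  | AccountSasResourceTypes.Object,
    StartsOn = DateTimeOffset.UtcNow,
    ExpiresOn = DateTimeOffset.UtcNow.AddDays(7),
    Protocol = SasProtocol.Https
};
sasBuilder.SetPermissions(AccountSasPermissions.Read
                        | AccountSasPermissions.Write
                        | AccountSasPermissions.Delete
                        | AccountSasPermissions.List);

var credential = new StorageSharedKeyCredential("<account-name>", "<account-key>");
string sasToken = sasBuilder.ToSasQueryParameters(credential).ToString();
string sasUri = $"https://<account-name>.blob.core.windows.net/?{sasToken}";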
I had a similar issue trying to connect to the blob container using a Shared Access Signature (SAS) URL, and this worked for me:
Instead of generating the SAS URL in Azure Portal, I used Azure Storage Explorer.
Right click the container that you want to share -> "Get Shared Access Signature"
Select the Expiry time and permissions and click create
This URL should work when your client/user tries to connect to the container.
Cheers
I had the same problem and managed to get this to work by hacking the URL and changing "srt=co" to "srt=sco". It seems to need the "s".

How to use snapshot and caching functions without actually storing credentials in SSRS

I have developed a few test reports on my local machine. I came across two mechanisms called Snapshot and Caching. I am trying to implement those in my reports, but every time I try to set up the caching mechanism it throws the error "credentials need to be stored."
Can we use caching and snapshots with Windows credentials? If so, what is the approach?
My local machine details:
Server name - (local)
Authentication - Windows
Name and password - grayed out
My report server URL: satish-pc/reportserver
DB - Adventure Works
Scheduled snapshots or caching plans mean the report is executed on an automated basis and the results are stored for easier/faster retrieval later. Because the executions are automated and unattended, they need connections with stored credentials; there is no user sitting at the computer at run time to punch in credentials. So, in order to use snapshots or scheduled caches, you will need to create a data source that has credentials stored in it. In Report Manager, you can edit the report's data source on the Report Properties page, or edit the shared data source's connection info on its own properties page.

MVC3 site deployed on IIS6 stops working after 20 minuttes with 404 Not Found

I'll try to make this short, feel free to ask for more details.
A mobile edition of a web site has been created using MVC3 Razor and deployed to an IIS6 web server using extensionless URLs. Since .NET 4 is installed on the server, there is no special configuration on the server to get extensionless URLs to work. When I try to access the site with the URL http://site/m/ I get a 404 Not Found error.
What I do to produce this problem:
Right-click on project in VS2010 and publish to local file system.
ZIP all the files, transfer them to the production server and unzip there
Right click on production web-site and add a virtual directory for the new application
Create a new application pool with all default settings
Put the new virtual directory/application in that application pool
Try to access the URL in the browser; receive 404 Not Found
The thing that puzzles me, is that if I replace Step 1 with "File->Create New MVC3 Project" and then publish to local file system everything works fine:
The test project is displayed in the browser with the name I used: http://site/mvctest/
I do not need to use any extensions
It does not stop working after 20 minutes (see next paragraph)
And now for the (even) weirder part:
If I now move the "m" application into the application pool just created for the "mvctest" application; it works too. But only for 20 minutes (or whatever value I have set for "Shutdown worker process after being idle for").
Any ideas?
EDIT: If I add a wildcard mapping to the /m/ virtual directory it works, but couldn't that also hurt performance?
It sounds like in your first scenario the handler isn't set up to handle the MVC requests. IIS 6 needs either an integrated setup or an extension mapping for MVC.
Set the app pool up to run in integrated pipeline mode. What happens then? This should work. Also check the event log for rapid fail protection kicking in because of worker process resets.

WindowsAzure: Is it possible to set directory permissions within the web.config?

A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I set the writing permissions for the WebRole User RD001... manually it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get it done?
Please note that I'm very new to IIS and the stuff around, I would appreciate precise answers, thx.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, though not likely) in that script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure this data survives a reboot.
Recommendation: Don't write local, read below ...
EDIT: Got to thinking about this, and while I still recommend against this, there is a 3rd option: You can allocate local storage in the service config, then access it from PHP using a dll reference, then you will have access to that folder. Please remember local storage is not persisted, so it's gone during a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing config from php:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
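If you go the local storage route, here is a minimal C# sketch of resolving the allocated folder from the role environment (the PHP SDK exposes the equivalent); the resource name "LogStorage" is illustrative and has to match your service definition:

using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

// Assumes <LocalStorage name="LogStorage" sizeInMB="100" cleanOnRoleRecycle="false" />
// in the service definition for this role.
LocalResource logStorage = RoleEnvironment.GetLocalResource("LogStorage");
string logPath = Path.Combine(logStorage.RootPath, "dev.log");

// Hand this path to the PHP script (app setting, environment variable, etc.);
// remember the contents are not persisted across instance moves.
File.AppendAllText(logPath, "log entry\n");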
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean there is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud "idea". Depending on your usage, you could (and should) convert this script to output this data to some cloud-based or external storage, vs just placing it on the disk.
I would suggest modifying this script to leverage the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
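To illustrate the idea (shown in C# here; the PHP Azure SDK has equivalent blob calls), a minimal sketch that appends log lines to a blob instead of a local file. The container and blob names are placeholders:

using System;
using System.IO;
using System.Text;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

string connectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");

// One append blob per day in a "logs" container (names are placeholders).
var container = new BlobContainerClient(connectionString, "logs");
container.CreateIfNotExists();

AppendBlobClient logBlob = container.GetAppendBlobClient($"dev-{DateTime.UtcNow:yyyy-MM-dd}.log");
logBlob.CreateIfNotExists();

byte[] line = Encoding.UTF8.GetBytes($"{DateTime.UtcNow:O} something happened\n");
using (var stream = new MemoryStream(line))
{
    logBlob.AppendBlock(stream);   // log lines survive role recycles and instance moves
}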
The main reason for that (besides pushing the cloud idea) is that in Azure, you cannot assume the host machine ("role instance") will maintain an OS state, so while you can set some things such as folder permissions, you can't rely on them sticking that way. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role and react to some lower level problem. For example, a hard-drive cage on the rack where your current instance lives could fail. If the failure were bad enough, the Fabric controller would need to rebuild your instance. When that happens, your code is moved to an entirely different server, so the need would arise to re-set those permissions. Also, depending on the changes, the E:\ could all of a sudden need to be the F:\ or X:\ drive and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so you make no assumptions about the hosting environment. Anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you can control (Azure Storage, an external call-out, etc.).
