Azure Deployment Firewall Problems - visual-studio

I want to publish my Azure function through VS 2017.
It works fine, but after I activate the firewall on the linked Storage Account I can't deploy my project anymore.
I have already checked the firewall settings; they seem okay (no proxy, etc.).
I get ERROR_INSUFFICIENT_ACCESS_TO_SITE_FOLDER errors.
The advice from this topic doesn't work for me:
Not able to publish website on Windows Azure using publish through VS2010.
Any other advice?

Solution:
Create a Storage Account that is not in the same region as your Function app. For example, if your Function app is in Central US, put the Storage Account in a different region such as East US. Then update the following three parameters (in Application settings) with the newly created Storage Account's connection string (a sketch of the value format follows after these steps):
AzureWebJobsDashboard
AzureWebJobsStorage
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING (only used for Consumption plan)
Configure the Storage Account firewall with the Function app's outbound IP addresses.
On the Platform features panel of your Function app, click Resource Explorer.
Find outboundIpAddresses and add all of them to the firewall IP list.
Don't forget to add your local IP if you want to browse the Storage Account in the Azure portal (not necessary for deployment from VS).
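For reference, the value that goes into those three settings is an ordinary storage connection string. A minimal sketch of the format, with a placeholder account name and key (copy the real values from the new Storage Account's Access keys blade):

// Hypothetical example only: "mynewstorageaccount" and the key are placeholders
// for the newly created Storage Account's name and access key.
const string newStorageConnectionString =
    "DefaultEndpointsProtocol=https;AccountName=mynewstorageaccount;"
    + "AccountKey=<your-account-key>;EndpointSuffix=core.windows.net";
// The same value is used for AzureWebJobsDashboard, AzureWebJobsStorage and,
// on a Consumption plan, WEBSITE_CONTENTAZUREFILECONNECTIONSTRING.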
Explanation:
I can only reproduce the ERROR_INSUFFICIENT_ACCESS_TO_SITE_FOLDER error for a Function app hosted on a Consumption plan.
For this problem, the biggest difference between an App Service plan and a Consumption plan is how they host the function files.
On an App Service plan, the function files we publish or create in the portal are stored on an Azure server belonging to the app. Adding firewall rules to the Storage Account used by AzureWebJobsDashboard (which stores function logs in tables) and AzureWebJobsStorage (which stores function host locks in a container) has no influence on function deployment.
On a Consumption plan, however, the function files are stored in the Storage Account specified by WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. When we publish from VS or create functions in the portal, the function files are deployed from the function site to that Storage Account. We hit the error because the function app's IPs were not on the Storage firewall whitelist.
As for why we have to create the Storage Account in a region different from the Function app's: based on my tests, the function does not seem to use its outbound IPs when the two are in the same region. Someone on GitHub reported the same result.

Related

Can I make an Azure storage account go offline to test my fallback code?

I am developing an Azure application (C#, .NET 6, ASP.NET Core) that uses Azure blob storage as well as table storage.
I have geo-redundancy enabled on my storage account (RA_GRS) so that if my main storage account goes down, a read-only copy will be available in another Azure region.
When reading from blob storage, as far as I understand, I should be able to get it to automatically fall back to the secondary address by setting the GeoRedundantSecondaryUri property like this:
return new BlobServiceClient(
    new Uri($"https://{accountName}.blob.core.windows.net/"),
    sharedKeyCredential,
    new BlobClientOptions
    {
        // When set, read requests that fail against the primary endpoint can be
        // retried against this read-only secondary (RA-GRS) endpoint.
        GeoRedundantSecondaryUri = new Uri($"https://{accountName}-secondary.blob.core.windows.net/")
    });
How can I end-to-end test that I am doing this correctly?
Can I tell the storage account to go offline so that my application should fall back to the read-only backup? I know I can stop an app service using az webapp stop from the command line. Can I do anything similar for storage accounts? Otherwise, how can I test my fallback logic?
(I am asking not only because I want to test the built-in geo-redundancy functionality. I have other failover-related code I want to test.)
EDIT 1: I do not want to use Azure's built-in "initiate account failover" functionality. At least not at this time. I want to test that my application can continue in a kind of "read-only mode" while the primary storage is down.
EDIT 2-3: I have tried to block the primary address ({accountName}.blob.core.windows.net/) using C:\Windows\System32\drivers\etc\hosts. My blob client does not seem to fall back.
If I redirect to IP 0.0.0.0, the blob client throws this: "The requested name is valid, but no data of the requested type was found."
If I redirect to IP 127.0.0.1, the blob client throws this: "No connection could be made because the target machine actively refused it."
If you just want to test the geo-redundancy of your Storage Account, you can initiate an account failover manually from the Azure portal and see whether your application resumes functioning via the secondary endpoints.
This documentation might help you out in this scenario: Initiate Azure Storage Account Failover
Note that there is likely to be a small amount of data loss if you perform a failover, so I'd recommend going through the implications of a failover before performing it.
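Separately from a full failover, one way to confirm that the RA-GRS secondary endpoint is live and serving data is to read from it directly. A minimal sketch, reusing accountName and sharedKeyCredential from the question; the container and blob names are placeholders:

// Requires: using Azure.Storage.Blobs;
// Sketch only: reads one blob straight from the -secondary endpoint to verify
// that RA-GRS replication is working. "my-container" / "my-blob.txt" are placeholders.
var secondaryClient = new BlobServiceClient(
    new Uri($"https://{accountName}-secondary.blob.core.windows.net/"),
    sharedKeyCredential);

var blob = secondaryClient
    .GetBlobContainerClient("my-container")
    .GetBlobClient("my-blob.txt");

var content = (await blob.DownloadContentAsync()).Value.Content;
Console.WriteLine(content.ToString());

Note this only proves the secondary can be read; it does not exercise the automatic fallback configured through BlobClientOptions.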

Is it possible to create a custom app for Microsoft Teams that doesn't use a central service provider?

I am working on adding support for our cloud storage solution to MS Teams, but there is no central server you can send HTTP messages to and get meaningful replies back from. I have no experience with creating Teams apps, so I was hoping someone with Teams app experience could tell me whether this is even possible. At this point I only need my app to work on Windows and OS X.
This is how I would like my Teams App to work:
Each member of the team already has our cloud storage app running locally on their machine which provides access to the files.
Within MS Teams, the user adds a file reference to a message via a message extension, which results in a link unfurl creating a card that contains an 'Open' button. The URL in the card would be one generated by our locally running cloud storage app. Other members of the team could then open this file by clicking the 'Open' button. The action of the Open button would be to send the URL to our cloud storage app, which would then open the local copy of the file on that team member's machine.
Is it possible to do something like this within a Teams app? The communication between the Teams app and our cloud storage app would be done over our own protocol.
If it weren't for the fact that all bot communication must be done over HTTPS rather than HTTP, the local cloud storage app could act as the server.
All communication between Teams and third-party (3P) apps needs to happen over a public HTTPS endpoint. You could use ngrok to tunnel to your local machine.

Deploying RDL's through Visual Studio to an SSRS instance on Azure

I am running into an issue with my deployment of SSRS RDL report files to my report server instance which is running on Azure.
The error I get when I deploy my reports is:
I have confirmed that I am able to access the report service URL from my web browser (it brings up the FTP-style directory listing of reports), but I still receive this error.
This leads me to believe that I am unable to deploy to this server because of a permissions issue, but I am unsure how to work around it. I tried going onto the report server and creating permissions for my username, but since the SSRS instance is not on my work server's domain (it's hosted on Azure), how would I go about creating permissions for my Windows workstation user account (on my corporate domain) on the Azure SSRS instance?
This is quite frustrating, as every time I wish to deploy report changes I must manually copy the RDLs to the report server and upload them through the SSRS web interface one by one.
Any help with this issue would be greatly appreciated!
According to the documentation, you'll have to follow the steps listed below. This is called the classic deployment model, but Microsoft recommends using the Resource Manager model instead:
Quoted from here:
SQL Server Data Tools: Remote: On your local computer, create a Reporting Services project in SQL Server Data Tools that contains Reporting Services reports. Configure the project to connect to the web service URL.
Create a .VHD hard drive that contains reports and then upload and attach the drive.
Create a .VHD hard drive on your local computer that contains your reports.
Create and install a management certificate.
Upload the VHD file to Azure using the Add-AzureVHD cmdlet (see 'Create and upload a Windows Server VHD to Azure').
Attach the disk to the virtual machine.

Service Fabric hosted Web API

I've created a simple Stateful Actor and a Web API (self hosted) and deployed it to Azure. It has worked and I can browse the nodes in the Service Fabric Explorer.
Azure gives me a URL, but when I add /api/values to the end (which works fine locally), it downloads a file called 'values' that I can't open, as it is a binary file.
I want to call the Web API from a Xamarin app (i.e. a normal REST API call), but if I can't call it via a browser I'm a bit stuck.
I would comment this on Stephen's answer, but I lack sufficient reputation.
To add a custom port to the Load Balancer after the Service Fabric cluster has been created, you can (in the newer Azure portal):
Navigate to the load balancer resource for your service fabric cluster.
Under "Settings" find the "Load balancing rules" option.
This will have at least two rules, more if you did setup custom rules during the setup of the cluster.
Add a new rule.
Give it a name
'Port' is the external port you'd like to hit.
'BackendPort' is the port your service is configured to listen on.
The defaults on the other settings work in a pinch.
Note if you have multiple ports to enable, they each need their own rule.
I do know the above worked in my 'hello world' sandbox project.
I'm climbing the service fabric learning curve myself so I can't comment with authority on the other settings.
I have discovered what was missing.
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-creation-via-portal/
This link walks through creating the Service Fabric app on Azure; in particular, the "Application input endpoints" field needs to contain the port you want to use. For the samples, this is mostly port 80 or 8081.
There is supposed to be a way to add these ports afterwards, which I tried (and so did a Microsoft support engineer), and it did not seem to work. You are supposed to be able to add these ports to the Load Balancer associated with the Service Fabric app.
I recreated my Service Fabric app exactly as I did before, but this time filled in the ports I want to use in the Node Type section, and now I can hit the Web API services I've deployed. This field can be left blank, which is what I did the first time round and was why I had issues.
This is not really related to Service Fabric; it's just how you set up your HTTP response headers in Web API. I recommend tagging this with asp.net or asp.net-web-api for a more thorough answer.
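To illustrate the response-header point, here is a minimal sketch of a self-hosted Web API 2 controller that returns explicit JSON so a browser renders the result instead of offering a download. This is an illustrative guess, not the poster's actual controller:

using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class ValuesController : ApiController
{
    // Returning the payload with an explicit application/json content type
    // lets browsers display it inline rather than saving it as a 'values' file.
    public HttpResponseMessage Get()
    {
        var response = Request.CreateResponse(HttpStatusCode.OK, new[] { "value1", "value2" });
        response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
        return response;
    }
}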
Tutorials and technical resources around Azure Service Fabric Stateless Web API tend to be slightly disjointed, given that the platform and resources are still quite immature.
This Stateless Web API tutorial, at the time of writing, is very effective.
As prerequisite to the tutorial:
Update Visual Studio to the latest version (Extensions and Updates)
Update the Service Fabric SDK to the latest version (Web Platform Installer)
Explicitly specify the EndPoint Port attribute (defined in ServiceManifest.xml) when setting up your Azure Service Fabric Cluster Node Type parameters (a sketch of the manifest entry follows below)
Following these steps will successfully allow deployment to both local and remote clusters, and will expose your Web API endpoints for consumption.
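As a rough illustration of the last prerequisite, the endpoint declaration in ServiceManifest.xml generally looks like the fragment below; the endpoint name and port 8081 are example values, and the port must match what you open on the cluster's Node Type / Load Balancer:

<!-- Illustrative ServiceManifest.xml fragment; "ServiceEndpoint" and 8081 are example values. -->
<Resources>
  <Endpoints>
    <Endpoint Name="ServiceEndpoint" Protocol="http" Port="8081" />
  </Endpoints>
</Resources>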

Accessing local Azure blob storage via a simple REST 'GET'

I am working with Windows Azure and am just using Blob Storage. I have set up my Blob Storage to run in its own solution file with a dummy web role. I run it first on my development machine so the Azure services start. I have configured the service to use the development shared key and account name.
I am running into an issue when I point my web application (in another solution) at the local Blob Storage service. I can upload a file to Blob Storage, and I can see the records in my local database, so I have entered the correct settings in the web.config. However, I cannot access the file via a simple GET request. I have verified that the container is public.
The URI I am using is:
http://127.0.0.1:10000/{container-name}/{filename}.{extension}
My code works when I use my production Azure Services, so is there something different about the Development Environment that I am missing? Does the local environment allow REST access?
UPDATE: I recently found this MSDN Article that describes the differences between production and development Storage URIs. I also documented my environment here on my blog.
The URI looks to be slightly incorrect; the format for the development storage URI is:
http://<local-machine-address>:<port>/<account-name>/<resource-path>
Given that the account name is always devstoreaccount1, your URI should be:
http://127.0.0.1:10000/devstoreaccount1/{container-name}/{filename}.{extension}
What kind of response or error code are you getting?
If you are using IE, you can install Fiddler or use the built-in developer tools in IE8 to help debug a communication problem.
Sure, the development fabric works with REST!
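As a quick sanity check of that URI format, a plain HTTP GET against the emulator should return the blob bytes, provided the container really is public. A minimal sketch with placeholder container and file names:

using System;
using System.Net.Http;

// Sketch only: fetches a blob from the local development storage over plain REST.
// "mycontainer" and "image.png" are placeholder names; the container must allow public read.
using var http = new HttpClient();
var uri = "http://127.0.0.1:10000/devstoreaccount1/mycontainer/image.png";
using var response = await http.GetAsync(uri);
Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
byte[] bytes = await response.Content.ReadAsByteArrayAsync();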
