Can't see Azure Internal Load Balancer on Azure Load Balancer Panel - azure-cloud-services

I am working on an existing project that was created by a different developer/vendor. Per the documentation, an Azure Internal Load Balancer is configured for the Azure Cloud Service (Protocol Gateway instance). However, I cannot see any load balancer with that name under the Azure Load Balancer panel. If I look at the .cscfg file of the Cloud Service, I can see the load balancer configuration as well, shown below.
<NetworkConfiguration>
  <!--VNet and subnet must be classic virtual network resources, not Azure Resource Manager resources.-->
  <VirtualNetworkSite name="Group resource-group-name vnet-name" />
  <AddressAssignments>
    <InstanceAddress roleName="cloud-service-instance-name">
      <Subnets>
        <Subnet name="subnet-name" />
      </Subnets>
    </InstanceAddress>
  </AddressAssignments>
  <!--VNet settings-->
  <LoadBalancers>
    <LoadBalancer name="load-balancer-name">
      <FrontendIPConfiguration type="private" subnet="subnet-name" staticVirtualNetworkIPAddress="cloud-service-instance-private-vnet-ip" />
    </LoadBalancer>
  </LoadBalancers>
</NetworkConfiguration>
Can anyone help me understand this configuration? The Cloud Service is in a production environment and working perfectly fine.
Is there a different Internal Load Balancer configuration for a classic VNet?

Since Azure Cloud Services use the classic deployment model and the Azure Load Balancer panel only shows Azure Resource Manager resources, you cannot see it in that panel.
Make sure you understand deployment models and tools - Azure Resource Manager vs. classic deployment. You can also learn how to create an Internet-facing load balancer using Azure Resource Manager.
The Resource Manager and classic deployment models represent two different ways of deploying and managing your Azure solutions. You work with them through two different API sets, and the deployed resources can contain important differences. The two models are not compatible with each other. This article describes those differences. To simplify the deployment and management of resources, Microsoft recommends that you use Resource Manager for all new resources. If possible, Microsoft recommends that you redeploy existing resources through Resource Manager.
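As a small illustration of those two API sets (a minimal sketch, assuming the azure-identity and azure-mgmt-network Python packages and reader access to the subscription; this is an illustration, not part of the original question), listing load balancers through the Resource Manager API only returns ARM load balancers, so the classic internal load balancer declared in the .cscfg above will not appear:

# Sketch: the ARM API surface only knows about Resource Manager load
# balancers, so a classic ILB defined in a .cscfg is not returned here.
# Assumes azure-identity and azure-mgmt-network are installed and that
# AZURE_SUBSCRIPTION_ID is set; this is illustrative only.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"],
)

# Lists only ARM load balancers; the classic "load-balancer-name" from the
# cloud service configuration is managed by the classic (ASM) API instead.
for lb in network_client.load_balancers.list_all():
    print(lb.name, lb.location)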

Related

Connect to Oracle OnPremises using Azure Data Factory - via S2sVPN

Is it possible to connect to an on-premises Oracle DB using Azure Data Factory via a Site-to-Site VPN? Or am I obliged to use a Self-Hosted Integration Runtime?
Thanks
If your data store is located inside an on-premises network, an Azure virtual network, or Amazon Virtual Private Cloud, you need to configure a self-hosted integration runtime to connect to it.
If your data store is a managed cloud data service, you can use the Azure Integration Runtime. If the access is restricted to IPs that are approved in the firewall rules, you can add Azure Integration Runtime IPs to the allow list.
You can also use the managed virtual network integration runtime feature in Azure Data Factory to access the on-premises network without installing and configuring a self-hosted integration runtime.
For more information about the network security mechanisms and options supported by Data Factory, see Data access strategies.
The integration runtime provides a built-in Oracle driver. Therefore, you don't need to manually install a driver when you copy data from and to Oracle.
Refer - https://learn.microsoft.com/en-us/azure/data-factory/connector-oracle?tabs=data-factory
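For illustration only, the following is a minimal sketch (not taken from the question) of how a self-hosted integration runtime and an Oracle linked service routed through it could be created with the azure-mgmt-datafactory Python SDK. The resource group, factory, IR and linked service names, and the connection string are placeholders, and exact model names can vary slightly between SDK versions:

# Sketch: create a self-hosted integration runtime and an Oracle linked
# service that connects through it. All names and the connection string are
# placeholders; assumes azure-identity and azure-mgmt-datafactory.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    IntegrationRuntimeReference,
    IntegrationRuntimeResource,
    LinkedServiceResource,
    OracleLinkedService,
    SecureString,
    SelfHostedIntegrationRuntime,
)

adf_client = DataFactoryManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"],
)

rg, factory = "my-resource-group", "my-data-factory"  # placeholders

# 1. Register a self-hosted IR. The IR node itself is then installed on a
#    machine that can reach the on-premises Oracle server (e.g. over the VPN).
adf_client.integration_runtimes.create_or_update(
    rg, factory, "OnPremIR",
    IntegrationRuntimeResource(properties=SelfHostedIntegrationRuntime()),
)

# 2. Point the Oracle linked service at that IR via connect_via.
oracle_ls = OracleLinkedService(
    connection_string=SecureString(value="host=onprem-db;port=1521;..."),  # placeholder
    connect_via=IntegrationRuntimeReference(reference_name="OnPremIR"),
)
adf_client.linked_services.create_or_update(
    rg, factory, "OnPremOracle", LinkedServiceResource(properties=oracle_ls)
)

Copy activities that use this linked service then run on the self-hosted IR machine, which is why that machine, rather than the Azure Integration Runtime, needs network reach to the Oracle server.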

Service Fabric hosted Web API

I've created a simple Stateful Actor and a Web API (self-hosted) and deployed it to Azure. It has worked, and I can browse the nodes in the Service Fabric Explorer.
Azure gives me a URL, but when I add /api/values to the end (which works fine locally) it downloads a file called values that I can't open, as it is a binary file.
I want to call the Web API from a Xamarin app (i.e. a normal REST API call), but if I can't call it via a browser I'm a bit stuck.
I would comment this on Stephen's answer, but I lack sufficient reputation.
To add a custom port to the load balancer after the Service Fabric cluster has been created, you can (in the newer Azure portal; a scripted equivalent is sketched after these steps):
Navigate to the load balancer resource for your Service Fabric cluster.
Under "Settings" find the "Load balancing rules" option.
This will have at least two rules, and more if you set up custom rules during creation of the cluster.
Add a new rule.
Give it a name
'Port' is the external port you'd like to hit.
'BackendPort' is the port your service is configured to listen on.
The defaults on the other settings work in a pinch.
Note if you have multiple ports to enable, they each need their own rule.
I do know the above worked in my 'hello world' sandbox project.
I'm climbing the Service Fabric learning curve myself, so I can't comment with authority on the other settings.
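If you would rather script the same change than click through the portal, here is a minimal sketch using the azure-mgmt-network Python SDK (the resource group, load balancer name, and port 8081 are placeholders; the rule reuses the cluster's existing frontend IP configuration and backend pool, and method names assume a recent SDK version):

# Sketch: add a probe and a load-balancing rule for an extra service port to
# the Service Fabric cluster's load balancer. Names and ports are placeholders;
# assumes azure-identity and a recent azure-mgmt-network.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import LoadBalancingRule, Probe, SubResource

network_client = NetworkManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"],
)

rg, lb_name = "my-sf-resource-group", "LB-mycluster"  # placeholders

lb = network_client.load_balancers.get(rg, lb_name)

# A probe so the load balancer knows which nodes are serving the port.
lb.probes.append(Probe(
    name="AppPortProbe8081", protocol="Tcp", port=8081,
    interval_in_seconds=15, number_of_probes=2,
))

# 'frontend_port' is the external port you hit; 'backend_port' is the port
# your service is configured to listen on.
lb.load_balancing_rules.append(LoadBalancingRule(
    name="AppPortRule8081",
    protocol="Tcp",
    frontend_port=8081,
    backend_port=8081,
    frontend_ip_configuration=SubResource(id=lb.frontend_ip_configurations[0].id),
    backend_address_pool=SubResource(id=lb.backend_address_pools[0].id),
    probe=SubResource(id=f"{lb.id}/probes/AppPortProbe8081"),
))

network_client.load_balancers.begin_create_or_update(rg, lb_name, lb).result()

As with the portal steps, each additional port needs its own rule (and probe).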
I have discovered what was missing.
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-creation-via-portal/
This link walks through creating the Service Fabric app on Azure; in particular, the field "Application input endpoints" needs to contain the port you want to use. For the samples, this is mostly port 80 or 8081.
There is supposed to be a way to add these ports afterwards, which I tried (and so did a Microsoft support engineer), and it did not seem to work. You are supposed to be able to add these ports to the load balancer associated with the Service Fabric app.
I recreated my Service Fabric app exactly as I did before, but this time filled in the ports I want to use in the Node Type section, and now I can hit the Web API services I've deployed. This field can be left blank, which is what I did the first time around and was why I had issues.
This is not really related to Service Fabric; it's just how you set up your HTTP response headers in Web API. I recommend tagging this with asp.net or asp.net-web-api for a more thorough answer.
Tutorials and technical resources around Azure Service Fabric Stateless Web API tend to be slightly disjointed, given that the platform and resources are still quite immature.
This Stateless Web API tutorial, at the time of writing, is very effective.
As prerequisite to the tutorial:
Update Visual Studio to the latest version (Extensions and Updates)
Update the Service Fabric SDK to the latest version (Web Platform Installer)
Explicitly specify the EndPoint Port attribute (defined in ServiceManifest.xml) when setting up your Azure Service Fabric Cluster Node Type parameters
Following these steps will successfully allow deployment to both local and remote clusters, and will expose your Web API endpoints for consumption.
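Once the port is open on the cluster's load balancer and the service is deployed, a quick external sanity check of the endpoint takes only a couple of lines of Python (a minimal sketch using only the standard library; the cluster address, port, and route are placeholders for your own values):

# Sketch: confirm that the cluster's public endpoint answers on the port you
# opened. The address, port, and route below are placeholders.
import urllib.request

url = "http://mycluster.westeurope.cloudapp.azure.com:8081/api/values"
with urllib.request.urlopen(url, timeout=10) as resp:
    print(resp.status, resp.read()[:200])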

What could cause the web application to slow down after migrating to an Azure cloud service?

We have migrated the existing web application to an Azure cloud service, using an Azure database and the Azure Redis cache provider.
We visit the same page on the Azure platform and on a local machine:
On the Azure platform, it takes 2 seconds to execute this page.
On the local machine, it only takes 500 ms.
Both environments use the same Azure database and the same Azure Redis cache.
The Azure database, Azure Redis cache, and cloud service are all located in West Europe.
We use a Large cloud service instance (4 cores, 8 GB memory).
We also executed the page on the cloud service machine via Remote Desktop, to rule out the network, and it is still very slow.
Does anybody have experience with this? Why is it so slow to execute the same page on Azure (using the same database and the same cache provider)?

Configuring an Azure Website with application warmup

I have developed an Azure Website for which I would like to reduce the initial loading time. On a regular ASP.NET site I would configure the Application Initialization IIS module, but with Azure Websites direct IIS configuration is not possible.
The website is running in reserved mode if that makes any difference.
Actually, the Application Initialization module is installed by default for Azure Web Apps. You can configure it directly from either your web.config file or through an applicationHost.xdt transform. Just put something like the following in a web.config in the root of your web app.
<system.webServer>
  <applicationInitialization
      doAppInitAfterRestart="true"
      skipManagedModules="true">
    <add initializationPage="/default.aspx" hostName="myhost"/>
  </applicationInitialization>
</system.webServer>
Application Initialization is not supported with Windows Azure Websites, because it is a native module and Windows Azure Websites does not allow configuring native modules via web.config.
Also, the content for Windows Azure Websites is physically stored in a centralized location, from which it is loaded and executed on the web servers. While a shared instance gets a slice of a host VM and a reserved instance gets a full host VM to run your web applications, in both cases the website content comes from the same centralized location, so a reserved instance does not help in getting Application Initialization working.
If Application Initialization is necessary for your application even though your website is running in reserved mode, you can use an Azure VM or a Windows Azure Web Role to get it working.
Currently there is an "Always On" setting for Azure Websites which does pretty much the same thing.
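If you prefer to enable that setting outside the portal, a minimal sketch with the azure-mgmt-web Python SDK could look like the following (the resource group and app name are placeholders, and the call names assume a recent SDK version):

# Sketch: enable the "Always On" site setting for an Azure Web App so the
# app stays loaded and avoids cold starts. Names are placeholders; assumes
# azure-identity and azure-mgmt-web.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import SiteConfigResource

web_client = WebSiteManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"],
)

web_client.web_apps.update_configuration(
    "my-resource-group", "my-web-app",
    SiteConfigResource(always_on=True),
)

Note that Always On is only available on paid tiers (Basic and above), not on the Free or Shared tiers.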

Create CRM Organizations on Load Balancing network

I'm trying to understand how to create a CRM organization on a load-balanced network.
I have three web servers (Web01, Web02, Web03), three application servers (App01, App02, App03), and a SQL Server (SQL01). I already have the load balancer set up, and there is already one organization set up by someone on all the web servers. This organization is Internet-facing. Now I want to create one more organization on the same set of web servers. Can anyone please help me understand how to set up the new organization on the load balancer in this scenario?
An important point to know is that there is a difference between a CRM deployment and the organizations deployed.
The deployment consists of one or more CRM servers and a SQL Server that can be clustered. A separate server can be used as the Report Server, and an Exchange Server can be configured as the email router.
Once the servers are deployed, one or many organizations can be configured using the CRM Deployment Manager on one of the front-end CRM servers. Any of the front-end (load-balanced) servers can be used to access any organization configured in the deployment, based on the current user's credentials. When configuring the organization, the Report Server that should be used is assigned to it.
Also, if you are using IFD (Internet-Facing Deployment), every organization should have its own DNS entry (orgId.theprefixchosen.mydomain.intra) pointing to the load balancer's IP address, so that CRM can route users to the appropriate organization. More can be found in Microsoft's documentation on configuring IFD.
Creating the new org is no different than normal. You would do this via Deployment Manager on one of the servers. This basically creates the needed SQL database and the associated config entries. To get the IFD portion working, you will also potentially need to add DNS entries to route traffic for the new org name to your servers.
