Visual Studio diagnostics configuration error in Event Hub setup

I am trying to set up streaming from an Azure VM scale set to an event hub via Diagnostics configuration.
I have my public config which includes the SinksConfig as follows (I have omitted the rest of the config for the sake of brevity):
{
  "WadCfg": {
    "DiagnosticMonitorConfiguration": {
      *** config for performance counters and ETW ***
      "SinksConfig": {
        "Sink": [
          {
            "name": "eventhub",
            "EventHub": {
              "Url": "sb://myhub.servicebus.windows.net/mycompanyapplication",
              "SharedAccessKeyName": "RootManageSharedAccessKey"
            }
          }
        ]
      }
    }
  },
  "StorageAccount": "<storageaccount>"
}
and the private config:
{
  "storageAccountName": "<storageaccountname>",
  "storageAccountKey": "<storageaccountkey>",
  "storageAccountEndPoint": "https://core.windows.net",
  "EventHub": {
    "Url": "sb://myhub.servicebus.windows.net/mycompanyapplication",
    "SharedAccessKeyName": "RootManageSharedAccessKey",
    "SharedAccessKey": "<sharedaccesskey>"
  }
}
However, nothing is being received by the event hub. I can see in the storage account logs that the Diagnostics extension is running, but in the substatus there are many errors around the SAS key and the event hub. When I check back in the Visual Studio Diagnostics configuration on the scale set, it shows a warning about not being able to read the SAS key.
I have checked the naming convention on the SharedAccessKeyName (which is the default provided when the event hub was set up) and I know that the SAS key works, as I wrote a console app to send messages to the same event hub with the same credentials and it worked fine.
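For reference, a console-app check along those lines can be as small as the following sketch (not the exact code I used; this version assumes the Azure.Messaging.EventHubs package, with a placeholder connection string and hub name):

using System;
using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

class EventHubSendTest
{
    static async Task Main()
    {
        // Namespace connection string using the same SAS credentials as the diagnostics config
        // (placeholder values).
        const string connectionString =
            "Endpoint=sb://myhub.servicebus.windows.net/;" +
            "SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<sharedaccesskey>";

        await using var producer = new EventHubProducerClient(connectionString, "mycompanyapplication");

        // Send a single test event to verify the credentials work end-to-end.
        using EventDataBatch batch = await producer.CreateBatchAsync();
        batch.TryAdd(new EventData(Encoding.UTF8.GetBytes("test event")));
        await producer.SendAsync(batch);

        Console.WriteLine("Sent.");
    }
}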
So there is obviously a problem with the authentication to the event hub as it can't read the access key from the config file. However, I can't see any other way of providing it.
Am I missing something obvious here in my config?

It turns out the problem was quite simple: I had grabbed the URL from the connection string in the portal, which was
sb://myhub.servicebus.windows.net/mycompanyapplication
when it should have been
https://myhub.servicebus.windows.net/mycompanyapplication
Now the data is flowing freely into the event hub.
However, the diagnostics config in VS still shows the warning about not being able to read the SAS key, which now looks like a "red herring" that ended up costing me a lot of time :(

Related

Where can I turn off request trace logs in App Insights

I have App Insights set up in my .NET Core app and in the logs I see these request traces:
Request starting HTTP/1.1 GET
Request finished in 1.0838ms 404
These are being logged quite a bit and are pushing up the billing. How can I switch these off?
If you want to disable telemetry conditionally and dynamically, you can resolve the TelemetryConfiguration instance with an ASP.NET Core dependency injection container anywhere in your code and set the DisableTelemetry flag on it.
public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env, TelemetryConfiguration configuration)
{
    configuration.DisableTelemetry = true;
    ...
}
The preceding code sample prevents the sending of telemetry to Application Insights. It doesn't prevent the automatic collection modules from collecting telemetry. If you want to remove a particular auto-collection module, see remove the telemetry module.
Quite likely these are coming from ILogger logs. You can use a filter to stop collecting ILogger logs based on their category: find the category these request traces are logged under and apply a filter so that those logs don't get sent to Application Insights.
The following completely disables ILogger logs from all categories. You'd want something more targeted, and you can adjust the filter accordingly.
builder.AddFilter<ApplicationInsightsLoggerProvider>("", LogLevel.None);
https://learn.microsoft.com/azure/azure-monitor/app/ilogger#create-filter-rules-in-configuration-with-appsettingsjson
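For context, here is a minimal sketch of where such a filter can be wired up in a .NET Core 3.x-style Program.cs (it assumes the Microsoft.ApplicationInsights.AspNetCore package and a standard Startup class; the filter here is narrowed to the "Microsoft.AspNetCore" category at Warning, rather than disabling everything):

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.ApplicationInsights;

public class Program
{
    public static void Main(string[] args)
    {
        Host.CreateDefaultBuilder(args)
            .ConfigureLogging(logging =>
                // Forward "Microsoft.AspNetCore.*" ILogger entries to Application Insights
                // only at Warning and above, so per-request traces are not collected.
                logging.AddFilter<ApplicationInsightsLoggerProvider>(
                    "Microsoft.AspNetCore", LogLevel.Warning))
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>())
            .Build()
            .Run();
    }
}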
I fixed this issue by applying the filter for ApplicationInsights in appsettings.json. This will log categories that start with "Microsoft.AspNetCore" at level "LogLevel.Warning" and higher only.
"Logging": {
"ApplicationInsights": {
"LogLevel": {
"Default": "Information",`
"Microsoft.AspNetCore": "Warning"
}
}
},
See: https://learn.microsoft.com/en-us/azure/azure-monitor/app/ilogger#create-filter-rules-in-configuration-with-appsettingsjson

Unable to save configuration of Teams connector

For the past two days, I have been trying to configure an MS Teams connector following this tutorial:
https://learn.microsoft.com/en-us/learn/modules/msteams-webhooks-connectors/7-exercise-o365-connectors
I configured the connector via the Connectors Developer Dashboard.
Then I tried both cloning and reconfiguring this sample:
https://github.com/OfficeDev/TrainingContent/tree/master/Teams/60%20Webhooks%20O365%20Connectors/Demos/03-o365-connector
and also bootstrapping the project via yo teams, following the tutorial step-by-step.
After building the project and serving it via ngrok, I can sideload the connector into Teams (tried both the desktop app and the web client). It successfully brings me to the configuration page but never allows me to save the connector settings. I always get this error:
Unable to save “My First Teams Connector” connector configuration. Please try again.
I adapted the code and debugged it to see that the call to /api/connector/connect succeeds and saveEvent.notifySuccess() is called.
Then I noticed that right after saving the connector via the browser, this error appears in the console:
{
  "seq": 1597590187271,
  "timestamp": 1597593891957,
  "flightSettings": {
    "Name": "ConnectorFrontEndSettings",
    "AriaSDKToken": "d127f72a3abd41c9b9dd94faca947689-d58285e6-3a68-4cab-a458-37b9d9761d35-7033",
    "SPAEnabled": true,
    "ClassificationFilterEnabled": true,
    "ClientRoutingEnabled": true,
    "EnableYammerGroupOption": true,
    "EnableFadeMessage": false,
    "EnableDomainBasedOwaConnectorList": false,
    "EnableDomainBasedTeamsConnectorList": false,
    "DevPortalSPAEnabled": true,
    "ShowHomeNavigationButtonOnConfigurationPage": false,
    "DisableConnectToO365InlineDeleteFeedbackPage": true
  },
  "status": 500,
  "clientType": "SkypeSpaces",
  "connectorType": "f39fe17c-6452-4879-b692-a93d73684348",
  "name": "handleMessageError"
}
Any idea what could be incorrectly configured, or whether there is a place to check for a more descriptive error? The desktop Teams log was not helpful either.
ConnectorID: f39fe17c-6452-4879-b692-a93d73684348
So, what really helped me in the end for that particular tutorial:
1. run gulp ngrok-serve
2. configure the connector following the tutorial (with valid domains, excluding the protocol) on the Connectors Developer Dashboard
3. extract the content of the packaged connector
4. adapt the extracted manifest.json with the newly created connector ID (both occurrences)
5. repackage the connector as a zip
6. upload it to Teams and configure it
I filled the valid domains field with xxxxxxxx.ngrok.io, which is the domain of my configuration page.
Be careful: if you update an existing connector, these changes apparently need time to be taken into account. To be sure, you can create a fresh new connector.

Azure IoT Edge module logs - Task upload logs failed because of error

I was following the experimental built-in log pull feature:
https://github.com/Azure/iotedge/blob/master/doc/built-in-logs-pull.md
When I try to upload logs using the following payload from the Azure portal (using a direct method on each module):
PAYLOAD:
{
  "schemaVersion": "1.0",
  "sasUrl": "https://veeaiotcentralstorage.blob.core.windows.net/iotedgeruntimelogs/iotedgeruntimelogs.txt?sv=2019-02-02&st=2020-08-08T08%3A56%3A00Z&se=2020-08-14T08%3A56%3A00Z&sr=b&sp=rw&sig=xyz",
  "items": [
    {
      "id": "zigbee_template-arm64v8",
      "filter": {
        "tail": 10
      }
    }
  ],
  "encoding": "none",
  "contentType": "text"
}
I am getting the error mentioned below after checking the task status
ERROR:
{"status":200,"payload":{"status":"Failed",
"message":"Task upload logs failed because of error Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.",
"correlationId":"b85002d8-d8f9-49d5-851d-9123a8d7d740"}}
Please let me know where I am having an issue.
Digging into the code some more, I noticed that the UploadLogs implementation doesn't create a container, but rather a folder structure within the container that you supply. As far as I can tell, the casing restriction applies when creating a blob container, but there is no such restriction on creating folders within the container.
So please check the SAS URL that you supplied, or something on the storage end. Double check that your SAS URL is generated for a pre-existing blob container (the sasUrl in the question is a blob-level SAS, sr=b, rather than a container-level one).
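For illustration, a container-scoped SAS for an existing container can be generated with a sketch like this (assuming the Azure.Storage.Blobs SDK; the account key is a placeholder, the account and container names are taken from the question):

using System;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

class ContainerSasExample
{
    static void Main()
    {
        // Placeholder account key; account and container names match the question's payload.
        var credential = new StorageSharedKeyCredential("veeaiotcentralstorage", "<account-key>");

        var container = new BlobContainerClient(
            new Uri("https://veeaiotcentralstorage.blob.core.windows.net/iotedgeruntimelogs"),
            credential);

        // The SAS is scoped to the (pre-existing) container itself, not to a blob inside it.
        Uri sasUrl = container.GenerateSasUri(
            BlobContainerSasPermissions.Read | BlobContainerSasPermissions.Write,
            DateTimeOffset.UtcNow.AddDays(7));

        Console.WriteLine(sasUrl);
    }
}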

Logging to an external service with aws-flow

I'm using aws-flow to interact with Amazon's Simple Workflow Server and I want to get logging set up to go to an external source (PaperTrail).
I've set my $logger to use PaperTrail and I pass this into the client I use to start the execution:
client = Aws::SWF::Client.new(region: 'eu-west-1', logger: $logger)

client.start_workflow_execution({
  domain: domain,
  workflow_id: ...,
  workflow_type: {
    name: "...",
    version: ...
  },
  task_list: {
    name: "..."
  }
})
This successfully logs that the client has started, but no action from inside a Workflow or Activity gets logged.
From reading the documentation and this SO answer it seems like you need to specify a logger when creating new Activities, but I can't see how to do that.
The main workflow uses activity_client to select the activity it needs, and the activity being called looks like this:
class MyActivity
  extend AWS::Flow::Activities

  activity :my_activity do
    {
      default_task_list: '...',
      version: ...,
      default_task_schedule_to_start_timeout: 60,
      default_task_start_to_close_timeout: 60,
      exponential_retry: { maximum_attempts: 2 }
    }
  end
end
I can't see anywhere in this setup where you can add a logger.
Any help would be greatly appreciated.
So it turns out that aws-flow doesn't support logging.
There is some discussion about it in the project's issues, and there is a PR that appears to fix it.
For my needs I just forked the project (which, at the time of writing, hasn't been updated in two years) and made the changes I needed.

How to speed up Azure deployment from Visual Studio 2010

I have Visual Studio 2010 solution with an Azure Service and an ASP.NET MVC 3 solution that serves as a Web Role for the Azure service. No other roles attached to the service other than that.
Every deployment to the Azure staging (or production, for that matter) environment takes up to 20 minutes to complete, from the moment I click Publish in Visual Studio until all instances (2) are started.
As you can imagine, this makes it a PITA to publish often or to quick-fix some bugs. Is there a way to speed the process up? Would it be faster to upload the package to Blob storage and upgrade from there? How would I go about achieving that?
I feel the online docs on Azure leave a lot to be desired, particularly when it comes to troubleshooting, by the way.
Thanks.
One idea for reducing the need (and frequency) for redeploying is to move static content into blob storage, external to the package. For instance, move your css and javascript to blob storage, along with images. Once this is done, you'd only have to recompile / redeploy for .NET code changes. You can upload updated css, at any time, to blob storage. If you want to test this in staging first, you could always have a staging vs. production container name for your static content and store that container name in a config setting.
This doesn't change the deployment time when you do need to redeploy, but at least you can reduce how often you go through that process...
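As a rough sketch of the container-name-in-config idea (the setting name, helper class, and account name below are made up for illustration):

using Microsoft.WindowsAzure.ServiceRuntime;

public static class StaticContentHelper
{
    // "StaticContentContainer" is a hypothetical cloud config setting, e.g. set to
    // "static-staging" in the staging configuration and "static-prod" in production.
    public static string UrlFor(string fileName)
    {
        string container = RoleEnvironment.GetConfigurationSettingValue("StaticContentContainer");
        return "https://<youraccount>.blob.core.windows.net/" + container + "/" + fileName;
    }
}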
You should enable Web Deploy in your Azure project. It works this way:
1/ Create an RDP account (don't forget, you need to upload a certificate with its private key so that Azure can decrypt the password). That is hidden in the Deploy dialog box for your Azure deployment project.
2/ Enable Web Deployment - same place.
Once you've published the app that way, right-click the web application (not the Azure deployment project) and select Publish. The pop-up has everything defined except the password; enter that as well and you'll upload your changes to Azure in a matter of seconds.
CAVEAT: this is meant for single-instance web apps; it is definitely not the way to go for a production upgrade strategy, and the blob storage answer already mentioned is the best option in that case.
Pierre
My solution to this problem is only to push a new package when I am changing code in the RoleEntryPoint or with the Service Definition. In Azure 1.3 you now have the ability to use Remote Desktop Connection. Using RDC, I will compile my code locally and use copy/paste to place it on the Azure server in the appropriate directory. Once the production code is running correctly, I can then push the fully tested version to staging and then do a VIP swap. This limits the number of times I actually have to deploy a package.
You actually have quite a long window in which you can keep modifying your code in Azure before you have to publish a new package. The new package is only really needed for those cases where Azure has to shutdown/restart your role instance.
It's a nice idea to try uploading your project to blob storage first, but unfortunately that is what Visual Studio is doing for you behind the scenes anyway. As has been pointed out elsewhere, most of the time spent doing the deploy is not the upload itself but the stopping and starting of all of your update domains.
If you're just running this site in a development environment, then the only way I know to speed it up is to run just one instance. If this is the live environment, then... sorry, I think you're out of luck.
So that I don't have to deploy to the cloud to test minor changes, what I've found works quite well is to engineer the site so that it works when running in local IIS just like any other MVC site.
The biggest barrier to this working is the settings that you have in the cloud config. The way we get around this is to make a copy of all of the settings from your cloud config and put them in the appSettings of your web.config. Then, rather than using RoleEnvironment.GetConfigurationSettingValue(), create a wrapper class that you call instead. This wrapper class checks RoleEnvironment.IsAvailable to see if it is running in the Azure fabric; if it is, it calls the usual config function above, and if not, it calls WebConfigurationManager.AppSettings[].
There are a few other things that you'll want to do around getting the config setting change events which hopefully you can figure out from the code below:
public class SmartConfigurationManager
{
    private static bool _addConfigChangeEvents;
    private static string _configName;
    private static Func<string, bool> _configSetter;

    public static bool AddConfigChangeEvents
    {
        get { return _addConfigChangeEvents; }
        set
        {
            _addConfigChangeEvents = value;
            if (value)
            {
                RoleEnvironment.Changing += RoleEnvironmentChanging;
            }
            else
            {
                RoleEnvironment.Changing -= RoleEnvironmentChanging;
            }
        }
    }

    public static string Setting(string configName)
    {
        if (RoleEnvironment.IsAvailable)
        {
            return RoleEnvironment.GetConfigurationSettingValue(configName);
        }
        return WebConfigurationManager.AppSettings[configName];
    }

    public static Action<string, Func<string, bool>> GetConfigurationSettingPublisher()
    {
        if (RoleEnvironment.IsAvailable)
        {
            return AzureSettingsGet;
        }
        return WebAppSettingsGet;
    }

    public static void WebAppSettingsGet(string configName, Func<string, bool> configSetter)
    {
        configSetter(WebConfigurationManager.AppSettings[configName]);
    }

    public static void AzureSettingsGet(string configName, Func<string, bool> configSetter)
    {
        // We have to store these to be used in the RoleEnvironment Changed handler
        _configName = configName;
        _configSetter = configSetter;

        // Provide the configSetter with the initial value
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));

        if (AddConfigChangeEvents)
        {
            RoleEnvironment.Changed += RoleEnvironmentChanged;
        }
    }

    private static void RoleEnvironmentChanged(object anotherSender, RoleEnvironmentChangedEventArgs arg)
    {
        if (arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>().Any(change => change.ConfigurationSettingName == _configName))
        {
            if (_configSetter(RoleEnvironment.GetConfigurationSettingValue(_configName)))
            {
                RoleEnvironment.RequestRecycle();
            }
        }
    }

    private static void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
    {
        // If a configuration setting is changing
        if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
        {
            // Set e.Cancel to true to restart this role instance
            e.Cancel = true;
        }
    }
}
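A quick usage sketch (the setting name is made up):

// In the Azure fabric this reads the cloud config; in local IIS it falls back to the
// copied value in web.config appSettings. "MyConnectionString" is a placeholder setting name.
string connectionString = SmartConfigurationManager.Setting("MyConnectionString");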
The uploading itself takes a bit more than a minute most of the time; it's the starting up of the instances that takes up most of the time.
What you can do is deploy your fixes to staging first (note that it costs money, so don't leave it there for too long). Swapping from staging to production only takes a couple of seconds. So while your application is still running you can upload the patched version, let your testers test it on staging, and when they give the go-ahead simply swap it to production.
I haven't tested your possible alternative approach of uploading to blob storage first, but I think that's just overhead, as it doesn't speed up starting the instances.
