What's the right BlobStorageService configuration format? - botframework

When creating a Microsoft Bot Framework 4 project, the Startup.cs contains the following code, which can be uncommented:
const string StorageConfigurationId = "<NAME OR ID>";
var blobConfig = botConfig.FindServiceByNameOrId(StorageConfigurationId);
if (!(blobConfig is BlobStorageService blobStorageConfig))
{
    throw new InvalidOperationException($"The .bot file does not contain a blob storage service with name '{StorageConfigurationId}'.");
}
This code provides a way to configure an Azure Storage account via JSON configuration.
However, the project lacks an example of what the configuration JSON should look like for the "is BlobStorageService" check to pass.
I have tried various formats and searched for examples, but cannot make it work.
Has anyone nailed this?

Got it working using this JSON:
{
    "type": "blob", // Must be 'blob'
    "name": "<NAME OF CONFIG - MUST BE UNIQUE (CAN BE ID)>",
    "connectionString": "<COPY FROM AZURE DASHBOARD>",
    "container": "<NAME OF CONTAINER IN STORAGE>"
}

Related

ReferenceError: {{variable}} is not defined at global in Rest Client in Visual Studio Code

I'm trying to use the Rest Client VS Code extension by Huachao Mao. I created a new profile in my workspace settings file, vs-code-workspace.code-workspace:
{
    "settings": {
        "git.ignoreLimitWarning": true,
        "rest-client.enableTelemetry": false,
        "rest-client.environmentVariables": {
            "xyz": {
                "host": "http://localhost:8080/"
            }
        }
    }
}
and I'm trying to send the following request (request.http):
GET {{restBasePath}}/api/employee
but I'm getting
ReferenceError: host is not defined at global.<anonymous> (c:\Users\dev\request.http:3:20)
at N$ (c:\Users\user\.vscode\extensions\anweber.vscode-httpyac-5.8.3\dist\extension.js:191:43412)
at zCe (c:\Users\user\.vscode\extensions\anweber.vscode-httpyac-5.8.3\dist\extension.js:191:43961)
at c:\Users\user\.vscode\extensions\anweber.vscode-httpyac-5.8.3\dist\extension.js:191:47073
at Jh (c:\Users\user\.vscode\extensions\anweber.vscode-httpyac-5.8.3\dist\extension.js:190:30551)
at Object.V3t [as action] (c:\Users\user\.vscode\extensions\anweber.vscode-httpyac-5.8.3\dist\extension.js:191:47038)
at I2e.<anonymous> (c:\Users\user\.vscode\extensions\anweber.vscode-httpyac-5.8.3\dist\extension.js:1:5703)
at Generator.next (<anonymous>)
at s (c:\Users\user\.vscode\extensions\anweber.vscode-httpyac-5.8.3\dist\extensi...
after switching the environment to xyz and sending the request. How do I get rid of the error?
You have mixed up vscode-rest-client with vscode-httpyac. httpyac is an alternative to vscode-rest-client, developed by me (see httpyac.github.io). To configure my extension, use the setting httpyac.environmentVariables. To use vscode-rest-client, use the Send Request code lens. I would be happy for you to try httpyac, but I recommend using only one of the two extensions and disabling the other.
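For reference, a minimal sketch of the equivalent workspace settings for httpyac, assuming you keep the xyz environment and the host variable from the question:

{
    "settings": {
        "httpyac.environmentVariables": {
            "xyz": {
                "host": "http://localhost:8080"
            }
        }
    }
}

With the xyz environment selected, a request such as GET {{host}}/api/employee then resolves. Note that the variable referenced in the request ({{restBasePath}} in the question) must match a variable name that is actually defined in the selected environment.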

CDK/CloudFormation Batch Setup NotStabilized Error

I'm trying to set up a simple Batch Compute Environment using a LaunchTemplate, so that I can specify a larger-than-default volume size:
const templateName = 'my-template'
const jobLaunchTemplate = new ec2.LaunchTemplate(stack, 'Template', {
    launchTemplateName: templateName,
    blockDevices: [ /* ..vol config.. */ ]
})
const computeEnv = new batch.CfnComputeEnvironment(stack, 'CompEnvironment', {
    type: 'managed',
    computeResources: {
        instanceRole: jobRole.roleName,
        instanceTypes: [
            InstanceType.of(InstanceClass.C4, InstanceSize.LARGE).toString()
        ],
        maxvCpus: 64,
        minvCpus: 0,
        desiredvCpus: 0,
        subnets: vpc.publicSubnets.map(sn => sn.subnetId),
        securityGroupIds: [vpc.vpcDefaultSecurityGroup],
        type: 'EC2',
        launchTemplate: {
            launchTemplateName: templateName,
        }
    },
})
Both initialize fine when not linked; however, as soon as the launchTemplate block is added to the compute environment, I get the following error:
Error: Resource handler returned message: "Resource of type 'AWS::Batch::ComputeEnvironment' with identifier 'compute-env-arn' did not stabilize." (RequestToken: token, HandlerErrorCode: NotStabilized)
Any suggestions are greatly appreciated, thanks in advance!
For anyone running into this: check the resource being created in the AWS Console, i.e. go to aws.amazon.com and refresh the page over and over until you see it created by CloudFormation. This gave me a different error message, about the instance profile not existing (a bit more helpful than the terminal error...).
A simple CfnInstanceProfile did the trick:
new iam.CfnInstanceProfile(stack, "batchInstanceProfile", {
    instanceProfileName: jobRole.roleName,
    roles: [jobRole.roleName],
});
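For context, a sketch of how the pieces can be wired together explicitly (the instanceProfile variable is mine; ComputeResources.InstanceRole expects an instance profile name or ARN rather than a bare IAM role, which is why creating the profile fixes the stabilization failure):

const instanceProfile = new iam.CfnInstanceProfile(stack, 'batchInstanceProfile', {
    instanceProfileName: jobRole.roleName,
    roles: [jobRole.roleName],
});

const computeEnv = new batch.CfnComputeEnvironment(stack, 'CompEnvironment', {
    type: 'managed',
    computeResources: {
        // Reference the instance profile's ARN instead of the bare role name.
        instanceRole: instanceProfile.attrArn,
        instanceTypes: [InstanceType.of(InstanceClass.C4, InstanceSize.LARGE).toString()],
        maxvCpus: 64,
        minvCpus: 0,
        desiredvCpus: 0,
        subnets: vpc.publicSubnets.map(sn => sn.subnetId),
        securityGroupIds: [vpc.vpcDefaultSecurityGroup],
        type: 'EC2',
        launchTemplate: { launchTemplateName: templateName },
    },
});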
I faced a similar error.
In my case, the CDK had cached a subnetGroups list in cdk.context.json and was using it in the CfnComputeEnvironment definition.
The problem was that I was using the default VPC and had manually modified a few subnets, and cdk.context.json had not been updated.
Solved by deleting cdk.context.json; the file was recreated with the correct values on the next synth.
Tip for others facing a similar problem:
Don't rely only on the error message; look closely at the CloudFormation template that the CDK generates for the resource.

AWS Lambda Code in S3 Bucket not updating

I am using CloudFormation to create my Lambda function, with the code in an S3 bucket with versioning enabled.
"MYLAMBDA": {
"Type": "AWS::Lambda::Function",
"Properties": {
"FunctionName": {
"Fn::Sub": "My-Lambda-${StageName}"
},
"Code": {
"S3Bucket": {
"Fn::Sub": "${S3BucketName}"
},
"S3Key": {
"Fn::Sub": "${artifact}.zip"
},
"S3ObjectVersion": "1e8Oasedk6sDZu6y01tioj8X._tAl3N"
},
"Handler": "streams.lambda_handler",
"Runtime": "python3.6",
"Timeout": "300",
"MemorySize": "512",
"Role": {
"Fn::GetAtt": [
"LambdaExecutionRole",
"Arn"
]
}
}
}
The Lambda function gets created successfully. When I copy a new artifact zip file to the S3 bucket, a new version of the file gets created with a new S3ObjectVersion string, but the Lambda function code is still using the older version.
The AWS CloudFormation documentation clearly says the following:
To update a Lambda function whose source code is in an Amazon S3
bucket, you must trigger an update by updating the S3Bucket, S3Key, or
S3ObjectVersion property. Updating the source code alone doesn't
update the function.
Is there an additional trigger event I need to create to get the code updated?
In case anyone runs into a similar issue: I figured out a way in my case. I use Terraform + Jenkins to create my Lambda functions through an S3 bucket. In the beginning I could create the functions, but they wouldn't update once created, even though I verified the zip files in S3 were updated. It took me some time to figure out that I needed to make one of the following two changes.
Solution 1: give the object a new key when uploading the new zip file. In my Terraform I add the git commit ID as part of the S3 key.
resource "aws_s3_bucket_object" "lambda-abc-package" {
bucket = "${aws_s3_bucket.abc-bucket.id}"
key = "${var.lambda_ecs_task_runner_bucket_key}_${var.git_commit_id}.zip"
source = "../${var.lambda_ecs_task_runner_bucket_key}.zip"
}
Solution 2: add source_code_hash to the Lambda resource.
resource "aws_lambda_function" "abc-ecs-task-runner" {
s3_bucket = "${var.bucket_name}"
s3_key = "${aws_s3_bucket_object.lambda-ecstaskrunner-package.key}"
function_name = "abc-ecs-task-runner"
role = "${aws_iam_role.AbcEcsTaskRunnerRole.arn}"
handler = "index.handler"
memory_size = "128"
runtime = "nodejs6.10"
timeout = "300"
source_code_hash = "${base64sha256(file("../${var.lambda_ecs_task_runner_bucket_key}.zip"))}"
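On Terraform 0.12 and later the same hash is typically written with the filebase64sha256 function (a syntax note of mine, not part of the original answer):

  source_code_hash = filebase64sha256("../${var.lambda_ecs_task_runner_bucket_key}.zip")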
Either one should work. Also, when checking the Lambda code in the console, refreshing the URL in the browser won't work; you need to go back to Functions and open the function again.
Hope this helps.
I also faced the same issue. My code was in Archive.zip in an S3 bucket; when I uploaded a new Archive.zip, Lambda was not responding according to the new code.
The solution was to paste the link to the S3 location of Archive.zip into the Lambda function code section again and save it again.
How did I figure out Lambda was not taking the new code?
Go to your Lambda function --> Actions --> Export Function --> Download Deployment Package, and check whether the code is actually the code that you've recently uploaded to S3.
You have to update the S3ObjectVersion value to the new version ID in your CloudFormation template itself.
Then you have to update your CloudFormation stack with the new template.
You can do this either in the CloudFormation console or via the AWS CLI.
From the AWS CLI you can also do an update-function-code call, as this post mentions: https://nono.ma/update-aws-lambda-function-code
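Another option (a sketch of mine, not from the answers above) is to lift the version ID into a template parameter, so each deployment passes the new S3 object version instead of editing the template by hand:

"Parameters": {
    "CodeVersionId": {
        "Type": "String",
        "Description": "S3 object version ID of the latest artifact zip"
    }
}

and then, in the function's Code block:

"S3ObjectVersion": { "Ref": "CodeVersionId" }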

Web API add openid scope to auth url for swagger/swashbuckle UI

We have an ASP.NET Web API application which uses Swagger/Swashbuckle for its API documentation. The API is secured by Azure AD using OAuth/OpenID Connect. The configuration for Swagger is done in code:
var oauthParams = new Dictionary<string, string>
{
    { "resource", "https://blahblahblah/someId" }
};

GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        c.SingleApiVersion(Version, Name);
        c.UseFullTypeNameInSchemaIds();
        c.OAuth2("oauth2")
            .Description("OAuth2 Implicit Grant")
            .Flow("implicit")
            .AuthorizationUrl("https://login.microsoftonline.com/te/ourtenant/ourcustompolicy/oauth2/authorize")
            .TokenUrl("https://login.microsoftonline.com/te/ourtenant/ourcustompolicy/oauth2/token");
        c.OperationFilter<AssignOAuth2SecurityRequirements>();
    })
    .EnableSwaggerUi(c =>
    {
        c.EnableOAuth2Support(_applicationId, null, "http://localhost:49919/swagger/ui/o2c-html", "Swagger", " ", oauthParams);
        c.BooleanValues(new[] { "0", "1" });
        c.DisableValidator();
        c.DocExpansion(DocExpansion.List);
    });
When Swashbuckle constructs the auth URL for login, it automatically appends:
&scope=
However I need this to be:
&scope=openid
I have tried adding this:
var oauthParams = new Dictionary<string, string>
{
    { "resource", "https://blahblahblah/someId" },
    { "scope", "openid" }
};
But this then adds:
&scope=&someotherparam=someothervalue&scope=openid
Any ideas how to add
&scope=openid
to the auth URL that Swashbuckle constructs?
Many thanks.
So, I found out what the issue was; the offending code can be found here:
https://github.com/swagger-api/swagger-ui/blob/2.x/dist/lib/swagger-oauth.js
These JS files are from a git submodule that references the old version of the UI.
On lines 154-158 we have this code:
url += '&redirect_uri=' + encodeURIComponent(redirectUrl);
url += '&realm=' + encodeURIComponent(realm);
url += '&client_id=' + encodeURIComponent(clientId);
url += '&scope=' + encodeURIComponent(scopes.join(scopeSeparator));
url += '&state=' + encodeURIComponent(state);
It appends the scope parameter regardless of whether there are any scopes. This means you cannot add scopes via the additionalQueryParams dictionary that gets passed into EnableOAuth2Support, as you will get a URL that contains two scope query params, i.e.
&scope=&otherparam=otherparamvalue&scope=openid
A simple length check around the scopes would fix it, for example:
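Something like this (my sketch of the guard, not an official patch):

if (scopes && scopes.length > 0) {
    url += '&scope=' + encodeURIComponent(scopes.join(scopeSeparator));
}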
I ended up removing Swashbuckle from the Web API project and adding a different NuGet package called Swagger-Net, found here:
https://www.nuget.org/packages/Swagger-Net/
This is actively maintained; it resolved the issue and uses a newer version of the Swagger UI. The configuration remained exactly the same; the only thing you need to change is your reply URL, which is now:
http://your-url/swagger/ui/oauth2-redirect-html

Vagrant Meta Data is Corrupt

Hope someone can help out here. I'm trying to version self-hosted Vagrant boxes, i.e. doing this without using Vagrant Cloud.
I've created the following metadata file:
{
    "description": "How about this",
    "name": "Graphite",
    "versions": [
        {
            "version": "1.8",
            "providers": [
                {
                    "name": "virtualbox",
                    "url": "http://desktopenvironments/Graphite/Graphite_1.8.box"
                }
            ]
        }
    ]
}
This is taken directly from the (somewhat lacking) Vagrant documentation found at http://docs.vagrantup.com/v2/boxes/format.html.
When running vagrant box add (taking the box file that contains this file directly from disk) I get:
The metadata associated with the box 'graphite' appears corrupted.
This is most often caused by a disk issue or system crash. Please
remove the box, re-add it, and try again.
Any assistance as to why this is happening would be greatly appreciated.
I was generating my metadata file from a C# app I wrote, using UTF-8 for text encoding. This is not enough: you need to use UTF-8 without a BOM.
Once the byte order mark was removed, it all works 100%.
// LowercaseContractResolver is a custom resolver from my app (not shown here).
var settings = new JsonSerializerSettings() { ContractResolver = new LowercaseContractResolver() };
string json = JsonConvert.SerializeObject(metadata, Formatting.None, settings);

// The crucial part: UTF8Encoding(false) writes UTF-8 without a byte order mark.
var utf8WithoutBom = new System.Text.UTF8Encoding(false);
using (var sink = new StreamWriter(outputFilePath, false, utf8WithoutBom))
{
    sink.Write(json);
}
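To verify the output really has no BOM, you can check the first three bytes of the file (a UTF-8 BOM is EF BB BF); a quick sketch using System.IO:

var bytes = File.ReadAllBytes(outputFilePath);
bool hasBom = bytes.Length >= 3 && bytes[0] == 0xEF && bytes[1] == 0xBB && bytes[2] == 0xBF;
Console.WriteLine(hasBom ? "BOM present" : "no BOM");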
