My Azure Storage output blob never shows up

I'm trying to build a simple Azure Function based on the ImageResizer example, but one that uses the Microsoft Cognitive Services Computer Vision API to do the resize.
I have working code for the Computer Vision API which I have ported into the Azure Function.
It all seems to work OK (no errors), but my output blob never gets saved or shows up in the storage container. I'm not sure what I'm doing wrong, as there are no errors to work with.
My CSX (C# script function code) is as follows:

using System;
using System.Text;
using System.Net.Http;
using System.Net.Http.Headers;

public static void Run(Stream original, Stream thumb, TraceWriter log)
{
    //log.Verbose($"C# Blob trigger function processed: {myBlob}. Dimensions");
    string _apiKey = "PutYourComputerVisionApiKeyHere";
    string _apiUrlBase = "https://api.projectoxford.ai/vision/v1.0/generateThumbnail";
    string width = "100";
    string height = "100";
    bool smartcropping = true;
    using (var httpClient = new HttpClient())
    {
        // set up HttpClient
        httpClient.BaseAddress = new Uri(_apiUrlBase);
        httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", _apiKey);
        // set up the request content
        HttpContent content = new StreamContent(original);
        content.Headers.ContentType = new MediaTypeWithQualityHeaderValue("application/octet-stream");
        // request parameters
        var uri = $"{_apiUrlBase}?width={width}&height={height}&smartCropping={smartcropping}";
        // make the request
        var response = httpClient.PostAsync(uri, content).Result;
        // log the result
        log.Verbose($"Response: IsSuccess={response.IsSuccessStatusCode}, Status={response.ReasonPhrase}");
        // read the response and write to the output stream
        thumb = new MemoryStream(response.Content.ReadAsByteArrayAsync().Result);
    }
}
My function.json is as follows:
{
  "bindings": [
    {
      "path": "originals/{name}",
      "connection": "thumbnailgenstorage_STORAGE",
      "name": "original",
      "type": "blobTrigger",
      "direction": "in"
    },
    {
      "path": "thumbs/%rand-guid%",
      "connection": "thumbnailgenstorage_STORAGE",
      "type": "blob",
      "name": "thumb",
      "direction": "out"
    }
  ],
  "disabled": false
}
My Azure storage account is called 'thumbnailgenstorage' and it has two containers named 'originals' and 'thumbs'. The storage account key is KGdcO+hjvARQvSwd2rfmdc+rrAsK0tA5xpE4RVNmXZgExCE+Cyk4q0nSiulDwvRHrSAkYjyjVezwdaeLCIb53g==.
I'm perfectly happy for people to use my keys to help me figure this out! :)

I got this working now. I was writing the output stream incorrectly.
This solution is an Azure Function which triggers on the arrival of a blob in an Azure Blob Storage container called 'originals', then uses the Computer Vision API to smartly resize the image and store it in a different blob container called 'thumbs'.
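For context, the reason the original code failed: assigning a new MemoryStream to the thumb parameter only rebinds the local variable, while the Functions runtime still uploads the (now empty) stream it originally passed in. The fix is to write the response bytes into that stream. A minimal contrast (responseBytes stands in for the bytes returned by the API):

// Wrong: rebinds the local parameter; the runtime's output stream stays empty
thumb = new MemoryStream(responseBytes);

// Right: writes into the stream the runtime will upload to the 'thumbs' container
thumb.Write(responseBytes, 0, responseBytes.Length);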
Here is the working CSX (C# script):

using System;
using System.Text;
using System.Net.Http;
using System.Net.Http.Headers;

public static void Run(Stream original, Stream thumb, TraceWriter log)
{
    int width = 320;
    int height = 320;
    bool smartCropping = true;
    string _apiKey = "PutYourComputerVisionApiKeyHere";
    string _apiUrlBase = "https://api.projectoxford.ai/vision/v1.0/generateThumbnail";
    using (var httpClient = new HttpClient())
    {
        httpClient.BaseAddress = new Uri(_apiUrlBase);
        httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", _apiKey);
        using (HttpContent content = new StreamContent(original))
        {
            // build and send the request
            content.Headers.ContentType = new MediaTypeWithQualityHeaderValue("application/octet-stream");
            var uri = $"{_apiUrlBase}?width={width}&height={height}&smartCropping={smartCropping}";
            var response = httpClient.PostAsync(uri, content).Result;
            var responseBytes = response.Content.ReadAsByteArrayAsync().Result;
            // write to the output thumb stream (not to a new stream)
            thumb.Write(responseBytes, 0, responseBytes.Length);
        }
    }
}
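As a side note, C# script functions can also be declared async, which avoids blocking on .Result; a sketch of the same body under that assumption (only the changed lines shown):

public static async Task Run(Stream original, Stream thumb, TraceWriter log)
{
    // ... same setup as above ...
    var response = await httpClient.PostAsync(uri, content);
    var responseBytes = await response.Content.ReadAsByteArrayAsync();
    await thumb.WriteAsync(responseBytes, 0, responseBytes.Length);
}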
Here is the integration JSON (note the output path now reuses the {name} token from the trigger, so the thumbnail keeps the original blob name):
{
  "bindings": [
    {
      "path": "originals/{name}",
      "connection": "thumbnailgenstorage_STORAGE",
      "name": "original",
      "type": "blobTrigger",
      "direction": "in"
    },
    {
      "path": "thumbs/{name}",
      "connection": "thumbnailgenstorage_STORAGE",
      "name": "thumb",
      "type": "blob",
      "direction": "out"
    }
  ],
  "disabled": false
}

Related

Converting _bn back into PublicKey with Solana

When creating a Solana Transaction I set the feePayer with a public key. When I send this transaction between various endpoints, the feePayer gets converted to something like below:
"feePayer": {
"_bn": {
"negative": 0,
"words": [
37883239,
7439402,
52491380,
11153292,
7903486,
65863299,
41062795,
11403443,
13257012,
320410,
0
],
"length": 10,
"red": null
}
}
My question is: how can I convert this feePayer JSON object back into a PublicKey?
I've tried
new solanaWeb3.PublicKey(feePayer) or
new solanaWeb3.PublicKey(feePayer._bn)
However, neither seems to work. Any ideas how to get this JSON form back into PublicKey: BN<....>?
A BN is a BigNumber. I had a similar case:
"feePayer": {
"_bn": "xcdcasaldkjalsd...."
}
My solution to it:

import BN from "bn.js";
import { PublicKey } from "@solana/web3.js";

const publicKeyFromBn = (feePayer) => {
  const bigNumber = new BN(feePayer._bn, 16);
  const decoded = { _bn: bigNumber };
  return new PublicKey(decoded);
};
You should play with the new BN(feePayer._bn, 16) parameters to make it work for your specific case.

How do I create a JWK from PFX certificate?

Background: I'm trying to create a JWK from a PFX file so that I'm able to use the Okta SDK.
The OktaClient expects the private key in the form of a JWK. An example I stole from their unit tests looks like this:
{
  "p": "{{lots_of_characters}}",
  "kty": "RSA",
  "q": "{{lots_of_characters}}",
  "d": "{{lots_of_characters}}",
  "e": "AQAB",
  "kid": "3d3062f5-16a4-42b5-837b-19b6ef1a0edc",
  "qi": "{{lots_of_characters}}",
  "dp": "{{lots_of_characters}}",
  "dq": "{{lots_of_characters}}",
  "n": "{{lots_of_characters}}"
}
Everything I've tried results in the exception "Something went wrong when creating the signed JWT. Verify your private key." I believe this is because I'm losing the private key part of the cert when I use the IdentityModel convert method (noted below).
var signingCert = new X509Certificate2("{{my_cert}}.pfx", "{{my_passphrase}}");
var privateKey = signingCert.GetRSAPrivateKey();
var rsaSecurityKey = new RsaSecurityKey(privateKey);
// The "HasPrivateKey" flag is suddenly false on the resulting object from this method
var rsaJwk = JsonWebKeyConverter.ConvertFromRSASecurityKey(rsaSecurityKey);
var rsaJwkSerialized = JsonSerializer.Serialize(rsaJwk);

var oktaClientConfig = new OktaClientConfiguration
{
    OktaDomain = "{{my_okta_domain}}",
    ClientId = "{{my_client_id}}",
    AuthorizationMode = AuthorizationMode.PrivateKey,
    PrivateKey = new JsonWebKeyConfiguration(rsaJwkSerialized),
    Scopes = new List<string> { "okta.users.manage" }
};
var oktaClient = new OktaClient(oktaClientConfig);

// This throws when trying to self-sign the JWT using my private key
var oktaUsers = await oktaClient.Users.ListUsers().ToArrayAsync();
Well, after days of trying to figure this out, I found it mere hours after finally posting on SO.
It turns out there are flags you can set when creating the X509Certificate2 that mark the key as exportable, and this is required for the JsonWebKeyConverter to properly create the JWK.
var signingCert = new X509Certificate2("{{my_cert}}.pfx", "{{my_passphrase}}", X509KeyStorageFlags.Exportable);
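With the exportable key in place, a quick sanity check (a sketch, assuming the Microsoft.IdentityModel.Tokens JsonWebKey shape) can confirm the private material survived the conversion before handing the JWK to Okta:

var rsaSecurityKey = new RsaSecurityKey(signingCert.GetRSAPrivateKey());
var rsaJwk = JsonWebKeyConverter.ConvertFromRSASecurityKey(rsaSecurityKey);
// A usable private JWK must carry the private exponent ("d")
if (string.IsNullOrEmpty(rsaJwk.D))
    throw new InvalidOperationException("Private key material was not exported.");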

AWS Lambda logging through Serilog UDP sink and logstash silently fails

We have a .NET Core 2.1 AWS Lambda that I'm trying to hook into our existing logging system.
I'm trying to log through Serilog, using a UDP sink to our logstash instance, for ingestion into our ElasticSearch logging database that is hosted on a private VPC. Running locally through a console, it logs fine, both to the console itself and through UDP into Elastic. However, when it runs as a Lambda, it only logs to the console (i.e. CloudWatch) and doesn't output anything indicating that anything is wrong. Possibly because UDP is stateless?
NuGet packages and versions:
Serilog 2.7.1
Serilog.Sinks.Udp 5.0.1
Here is the logging code we're using:
public static void Configure(string udpHost, int udpPort, string environment)
{
    var udpFormatter = new JsonFormatter(renderMessage: true);
    var loggerConfig = new LoggerConfiguration()
        .Enrich.FromLogContext()
        .MinimumLevel.Information()
        .Enrich.WithProperty("applicationName", Assembly.GetExecutingAssembly().GetName().Name)
        .Enrich.WithProperty("applicationVersion", Assembly.GetExecutingAssembly().GetName().Version.ToString())
        .Enrich.WithProperty("tags", environment);
    loggerConfig
        .WriteTo.Console(outputTemplate: "[{Level:u}]: {Message}{NewLine}{Exception}")
        .WriteTo.Udp(udpHost, udpPort, udpFormatter);
    var logger = loggerConfig.CreateLogger();
    Serilog.Log.Logger = logger;
    Serilog.Debugging.SelfLog.Enable(Console.Error);
}

// this is output in the console from the lambda, but doesn't appear in the database from the lambda
// when run locally, it appears in both
Serilog.Log.Logger.Information("Hello from Serilog!");
...
// at the end of the lambda
Serilog.Log.CloseAndFlush();
And here is our UDP input on logstash:
udp {
  port => 5000
  tags => [ 'systest', 'serilog-nested' ]
  codec => json
}
Does anyone know how I might go about resolving this, or even just how to see what specifically is wrong so that I can start to find a solution?
Things tried so far include:
Pinging logstash from the lambda - impossible, lambdas don't have ICMP
Various attempts to get the UDP sink to output errors, as seen above; even putting in a completely fake address yields no error, though
Adding the lambda to a VPC where I know logging is possible from
Sleeping at the end of the lambda, so that the logs have time to go through before the lambda exits
Checking the logstash logs to see if anything looks odd. It doesn't, really. And the fact that local runs get through fine makes me think it's not that.
Using UDP directly. It doesn't seem to reach the server. I'm not sure if that's connectivity issues or just UDP itself from a lambda.
Lots of cursing and swearing
In line with my comment above, you can create a log subscription and stream to ES like so. I'm aware that this is Node.js, so it's not quite the right answer, but you might be able to figure it out from here:
/* eslint-disable */
// Eslint disabled as this is adapted AWS code.
const zlib = require('zlib')
const { Client } = require('@elastic/elasticsearch')
const esClient = new Client({ ES_CLUSTER_DETAILS })
/**
 * This is an example function to stream CloudWatch logs to ElasticSearch.
 * @param event
 * @param context
 * @param callback
 */
export default (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = true
  const payload = Buffer.from(event.awslogs.data, 'base64')
  zlib.gunzip(payload, (err, result) => {
    if (err) {
      return callback(err)
    }
    const logObject = JSON.parse(result.toString('utf8'))
    const elasticsearchBulkData = transform(logObject)
    const params = { body: [] }
    params.body.push(elasticsearchBulkData)
    esClient.bulk(params, (err, resp) => {
      if (err) {
        return callback(err)
      }
      callback(null, 'success')
    })
  })
}
function transform(payload) {
  if (payload.messageType === 'CONTROL_MESSAGE') {
    return null
  }
  let bulkRequestBody = ''
  payload.logEvents.forEach((logEvent) => {
    const timestamp = new Date(1 * logEvent.timestamp)
    // index name format: cwl-YYYY.MM.DD
    const indexName = [
      `cwl-${process.env.NODE_ENV}-${timestamp.getUTCFullYear()}`, // year
      (`0${timestamp.getUTCMonth() + 1}`).slice(-2), // month
      (`0${timestamp.getUTCDate()}`).slice(-2), // day
    ].join('.')
    const source = buildSource(logEvent.message, logEvent.extractedFields)
    source['@id'] = logEvent.id
    source['@timestamp'] = new Date(1 * logEvent.timestamp).toISOString()
    source['@message'] = logEvent.message
    source['@owner'] = payload.owner
    source['@log_group'] = payload.logGroup
    source['@log_stream'] = payload.logStream
    const action = { index: {} }
    action.index._index = indexName
    action.index._type = 'lambdaLogs'
    action.index._id = logEvent.id
    bulkRequestBody += `${[
      JSON.stringify(action),
      JSON.stringify(source),
    ].join('\n')}\n`
  })
  return bulkRequestBody
}
function buildSource(message, extractedFields) {
  if (extractedFields) {
    const source = {}
    for (const key in extractedFields) {
      if (extractedFields.hasOwnProperty(key) && extractedFields[key]) {
        const value = extractedFields[key]
        if (isNumeric(value)) {
          source[key] = 1 * value
          continue
        }
        const jsonSubString = extractJson(value)
        if (jsonSubString !== null) {
          source[`$${key}`] = JSON.parse(jsonSubString)
        }
        source[key] = value
      }
    }
    return source
  }
  const jsonSubString = extractJson(message)
  if (jsonSubString !== null) {
    return JSON.parse(jsonSubString)
  }
  return {}
}

function extractJson(message) {
  const jsonStart = message.indexOf('{')
  if (jsonStart < 0) return null
  const jsonSubString = message.substring(jsonStart)
  return isValidJson(jsonSubString) ? jsonSubString : null
}

function isValidJson(message) {
  try {
    JSON.parse(message)
  } catch (e) { return false }
  return true
}

function isNumeric(n) {
  return !isNaN(parseFloat(n)) && isFinite(n)
}
One of my colleagues helped me get most of the way there, and then I managed to figure out the last bit.
I updated Serilog.Sinks.Udp to 6.0.0.
I updated the UDP setup code to use the AddressFamily.InterNetwork specifier, which I don't believe was available in 5.0.1.
I removed the "tags" enrichment from our log messages, since I believe its presence on the UDP endpoint somehow caused some kind of clash; I've seen it stop logging without a trace before.
And voila!
Here's the new logging setup code:
loggerConfig
    .WriteTo.Udp(udpHost, udpPort, AddressFamily.InterNetwork, udpFormatter)
    .WriteTo.Console(outputTemplate: "[{Level:u}]: {Message}{NewLine}{Exception}");
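For completeness, here is roughly how the full Configure method looks with the 6.0.0 sink; AddressFamily lives in System.Net.Sockets, and the rest of the names follow the original setup code, with the "tags" enrichment deliberately removed:

using System;
using System.Net.Sockets;
using System.Reflection;
using Serilog;
using Serilog.Formatting.Json;

public static void Configure(string udpHost, int udpPort)
{
    var udpFormatter = new JsonFormatter(renderMessage: true);
    var loggerConfig = new LoggerConfiguration()
        .Enrich.FromLogContext()
        .MinimumLevel.Information()
        .Enrich.WithProperty("applicationName", Assembly.GetExecutingAssembly().GetName().Name)
        .Enrich.WithProperty("applicationVersion", Assembly.GetExecutingAssembly().GetName().Version.ToString());
    loggerConfig
        .WriteTo.Udp(udpHost, udpPort, AddressFamily.InterNetwork, udpFormatter)
        .WriteTo.Console(outputTemplate: "[{Level:u}]: {Message}{NewLine}{Exception}");
    Serilog.Log.Logger = loggerConfig.CreateLogger();
    Serilog.Debugging.SelfLog.Enable(Console.Error);
}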

Audio Timeout Error: .NET Core Google Speech to Text Code Causing Timeout

Problem Description
I am a .NET Core developer and I have recently been asked to transcribe mp3 audio files that are approximately 20 minutes long into text; each file is about 30.5 MB. The issue is that speech is sparse in these files, with anywhere from 2 to 4 minutes of silence between spoken sentences.
I've written a small service based on the Google Speech documentation that sends 32 KB of streaming data at a time from the file to be processed. All was progressing well until I hit a timeout error.
I have searched via google-fu, Google forums, and other sources, and I have not encountered documentation on this error. Suffice it to say, I think this is due to the sparsity of spoken words in my file. I am wondering if there is a programmatic workaround?
Code
I have used some code that is a slight modification of the Google .NET sample for 32 KB streaming. You can find it here.
public async void Run()
{
    var speech = SpeechClient.Create();
    var streamingCall = speech.StreamingRecognize();
    // Write the initial request with the config.
    await streamingCall.WriteAsync(
        new StreamingRecognizeRequest()
        {
            StreamingConfig = new StreamingRecognitionConfig()
            {
                Config = new RecognitionConfig()
                {
                    Encoding =
                        RecognitionConfig.Types.AudioEncoding.Flac,
                    SampleRateHertz = 22050,
                    LanguageCode = "en",
                },
                InterimResults = true,
            }
        });
    // Helper task: print responses as they arrive.
    Task printResponses = Task.Run(async () =>
    {
        while (await streamingCall.ResponseStream.MoveNext(
            default(CancellationToken)))
        {
            foreach (var result in streamingCall.ResponseStream.Current.Results)
            {
                //foreach (var alternative in result.Alternatives)
                //{
                //    Console.WriteLine(alternative.Transcript);
                //}
                if (result.IsFinal)
                {
                    Console.WriteLine(result.Alternatives.ToString());
                }
            }
        }
    });
    string filePath = "mono_1.flac";
    using (FileStream fileStream = new FileStream(filePath, FileMode.Open))
    {
        //var buffer = new byte[32 * 1024];
        var buffer = new byte[64 * 1024]; // Trying a 64 KB buffer
        int bytesRead;
        while ((bytesRead = await fileStream.ReadAsync(
            buffer, 0, buffer.Length)) > 0)
        {
            await streamingCall.WriteAsync(
                new StreamingRecognizeRequest()
                {
                    AudioContent = Google.Protobuf.ByteString
                        .CopyFrom(buffer, 0, bytesRead),
                });
            await Task.Delay(500);
        }
    }
    await streamingCall.WriteCompleteAsync();
    await printResponses;
} // End of Run
Attempts
I've increased the stream to 64 KB of data per request, and then received a further error, which, I believe, means the actual API timed out. That is decidedly a step in the wrong direction. Has anybody encountered a problem such as mine with the Google Speech API when dealing with an audio file with sparse speech? Is there a method by which I can programmatically filter the audio down to only spoken words and then process that? I'm open to suggestions, but my research and attempts have only led me to further breaking my code.
There are two ways to recognize audio in the Google Speech API:
normal recognize
long running recognize
Your sample uses the normal recognize, which has a 15 minute limit.
Try the long running recognize method instead:
{
    var speech = SpeechClient.Create();
    var longOperation = speech.LongRunningRecognize(new RecognitionConfig()
    {
        Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
        SampleRateHertz = 16000,
        LanguageCode = "hu",
    }, RecognitionAudio.FromFile(filePath));
    longOperation = longOperation.PollUntilCompleted();
    var response = longOperation.Result;
    foreach (var result in response.Results)
    {
        foreach (var alternative in result.Alternatives)
        {
            Console.WriteLine(alternative.Transcript);
        }
    }
    return 0;
}
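One caveat for a 20 minute file: with LongRunningRecognize, audio longer than roughly a minute generally has to be referenced from Google Cloud Storage rather than inlined from a local file. A sketch of that variant (bucket and object names are placeholders; config matches the question's FLAC file):

var speech = SpeechClient.Create();
// Reference the audio from a GCS bucket instead of inlining the file contents
var longOperation = speech.LongRunningRecognize(new RecognitionConfig()
{
    Encoding = RecognitionConfig.Types.AudioEncoding.Flac,
    SampleRateHertz = 22050,
    LanguageCode = "en",
}, RecognitionAudio.FromStorageUri("gs://my-bucket/mono_1.flac"));
longOperation = longOperation.PollUntilCompleted();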
I hope it helps.

How to bookmark a message using Bot Framework

I'm using the Microsoft Bot Framework to create a bot, and the direct channel to incorporate it into a web application. During the conversation, I need to bookmark or like a message or response from the bot.
Bot Framework doesn't implement this functionality in its SDKs. You can leverage the middleware feature to implement it yourself.
The general idea is: save every activity (message pair) with your users, and create global message handlers for 'mark' or 'like', or inspect every message in middleware to check whether the user said 'mark' or 'like'. Then you can set the mark tag on the last message you saved previously.
For samples of middleware usage, refer to https://github.com/Microsoft/BotBuilder-Samples/tree/master/CSharp/core-Middleware for C# and https://github.com/Microsoft/BotBuilder-Samples/tree/master/Node/capability-middlewareLogging for Node.js.
If you have any further concerns, please feel free to let me know.
Implement the IActivityLogger interface from the Microsoft.Bot.Builder.History namespace to store/bookmark the IMessageActivity message in a DB or a cache.
IActivityLogger will intercept every message from your dialog which implements the IDialog interface.
This intercepts every message sent to and from the user and the bot.
1) For Dialogs implementing the IDialog interface:
using Microsoft.Bot.Builder.History;
using Microsoft.Bot.Connector;
using MongoDB.Bson;
using MongoDB.Driver;
using System;
using System.Threading.Tasks;

namespace DemoBot.Dialogs
{
    public class Logger : IActivityLogger
    {
        private readonly IMongoClient client;
        private readonly IMongoCollection<BsonDocument> collection;

        public Logger()
        {
            client = new MongoClient();
            collection = client.GetDatabase("test").GetCollection<BsonDocument>("botLog");
        }

        public Task LogAsync(IActivity activity)
        {
            IMessageActivity msgToBeLogged = activity.AsMessageActivity();
            BsonDocument objectToBeLogged = new BsonDocument
            {
                { "messageText", new BsonString(msgToBeLogged.Text) },
                { "timeStamp", new BsonDateTime(Convert.ToDateTime(msgToBeLogged.Timestamp)) },
                { "recipientId", new BsonString(msgToBeLogged.Recipient.Id) },
                { "fromId", new BsonString(msgToBeLogged.From.Id) },
                { "conversationId", new BsonString(msgToBeLogged.Conversation.Id) },
                { "fromName", new BsonString(msgToBeLogged.From.Name) },
                { "toName", new BsonString(msgToBeLogged.Recipient.Name) },
                { "channel", new BsonString(msgToBeLogged.ChannelId) },
                { "serviceUrl", new BsonString(msgToBeLogged.ServiceUrl) },
                { "locale", new BsonString(msgToBeLogged.Locale) }
            };
            return Task.Run(() =>
            {
                LogIntoDB(objectToBeLogged);
            });
        }

        public void LogIntoDB(BsonDocument activityDetails)
        {
            collection.InsertOne(activityDetails);
        }
    }
}
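To get the actual bookmarking behaviour the question asks about, one option (a sketch, not part of the original answer) is to watch for a keyword inside LogAsync and flag the most recently stored message in the same conversation; the "bookmarked" field name is illustrative:

public async Task LogAsync(IActivity activity)
{
    IMessageActivity msg = activity.AsMessageActivity();
    if (msg?.Text != null && msg.Text.Trim().Equals("bookmark", StringComparison.OrdinalIgnoreCase))
    {
        // Flag the most recent message stored for this conversation
        var filter = Builders<BsonDocument>.Filter.Eq("conversationId", msg.Conversation.Id);
        var update = Builders<BsonDocument>.Update.Set("bookmarked", true);
        var options = new FindOneAndUpdateOptions<BsonDocument>
        {
            Sort = Builders<BsonDocument>.Sort.Descending("timeStamp")
        };
        await collection.FindOneAndUpdateAsync(filter, update, options);
        return;
    }
    // otherwise fall through to the normal logging shown above
}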
2) For Dialogs that inherit the LuisDialog class, write the logging code in the DispatchToIntentHandler method, as the incoming message passes through that method on its way to the appropriate intent handler:
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;
using Microsoft.Bot.Connector;
using MongoDB.Bson;
using MongoDB.Driver;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace DemoBot.Dialogs
{
    [Serializable]
    public class RootDialog : LuisDialog<object>
    {
        public Task StartAsync(IDialogContext context)
        {
            return Task.Run(() => { context.Wait(MessageReceived); });
        }

        protected override Task DispatchToIntentHandler(IDialogContext context, IAwaitable<IMessageActivity> item, IntentRecommendation bestIntent, LuisResult result)
        {
            IMessageActivity msgToBeLogged = context.MakeMessage();
            BsonDocument objectToBeLogged = new BsonDocument
            {
                { "messageText", new BsonString(msgToBeLogged.Text) },
                { "timeStamp", new BsonDateTime(Convert.ToDateTime(msgToBeLogged.Timestamp)) },
                { "recipientId", new BsonString(msgToBeLogged.Recipient.Id) },
                { "fromId", new BsonString(msgToBeLogged.From.Id) },
                { "conversationId", new BsonString(msgToBeLogged.Conversation.Id) },
                { "fromName", new BsonString(msgToBeLogged.From.Name) },
                { "toName", new BsonString(msgToBeLogged.Recipient.Name) },
                { "channel", new BsonString(msgToBeLogged.ChannelId) },
                { "serviceUrl", new BsonString(msgToBeLogged.ServiceUrl) },
                { "locale", new BsonString(msgToBeLogged.Locale) }
            };
            Task.Run(() =>
            {
                LogIntoDB(objectToBeLogged);
            });
            return base.DispatchToIntentHandler(context, item, bestIntent, result);
        }

        public void LogIntoDB(BsonDocument activityDetails)
        {
            // Create the Mongo handle here so the [Serializable] dialog doesn't hold a non-serializable field
            var collection = new MongoClient().GetDatabase("test").GetCollection<BsonDocument>("botLog");
            collection.InsertOne(activityDetails);
        }

        public Task MessageReceived(IDialogContext context, IAwaitable<IMessageActivity> item)
        {
            return Task.Run(() =>
            {
                context.Wait(MessageReceived);
            });
        }
    }
}
For logging I'm using MongoDB, but you can use SQL Server as well if you wish.
And lastly, inject the dependencies in your Global.asax.cs file using the Autofac IoC container; once the Logger is registered, the Bot Builder runtime resolves it and calls LogAsync for every activity.
using Autofac;
using DemoBot.Dialogs;
using Microsoft.Bot.Builder.Dialogs;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Http;
using System.Web.Routing;

namespace DemoBot
{
    public class WebApiApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            Conversation.UpdateContainer(builder =>
            {
                builder.RegisterType<Logger>().AsImplementedInterfaces().InstancePerDependency();
            });
            GlobalConfiguration.Configure(WebApiConfig.Register);
        }
    }
}
