When creating a Solana Transaction, I set the feePayer with a public key. When I send this transaction between various endpoints, the feePayer gets converted to something like this:
"feePayer": {
"_bn": {
"negative": 0,
"words": [
37883239,
7439402,
52491380,
11153292,
7903486,
65863299,
41062795,
11403443,
13257012,
320410,
0
],
"length": 10,
"red": null
}
}
My question is: how can I convert this feePayer JSON object back into a PublicKey?
I've tried
new solanaWeb3.PublicKey(feePayer) and
new solanaWeb3.PublicKey(feePayer._bn)
However, neither seems to work. Any ideas how to get this JSON form back into PublicKey: BN<....>?
A BN is a BigNumber. I had a similar case:
"feePayer": {
"_bn": "xcdcasaldkjalsd...."
}
Solution to it:
import BN from "bn.js";
import { PublicKey } from "@solana/web3.js";
const publicKeyFromBn = (feePayer) => {
const bigNumber = new BN(feePayer._bn, 16)
const decoded = { _bn: bigNumber };
return new PublicKey(decoded);
};
You should play with the new BN(feePayer._bn, 16) parameters to make it work for your specific case.
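If what you have is the words-array form from the question rather than a hex string, the value can be rebuilt from bn.js's internal representation: bn.js stores numbers as base-2^26 limbs, least significant word first. A minimal sketch, assuming that layout (publicKeyFromWords is just an illustrative name):
import BN from "bn.js";
import { PublicKey } from "@solana/web3.js";

const publicKeyFromWords = (feePayer) => {
  // Only the first `length` words are meaningful; trailing words may be stale.
  const words = feePayer._bn.words.slice(0, feePayer._bn.length);
  // Horner's scheme over the 26-bit limbs, most significant word first.
  const bn = words.reduceRight(
    (acc, word) => acc.ushln(26).iaddn(word),
    new BN(0)
  );
  // A Solana public key is 32 big-endian bytes.
  return new PublicKey(Uint8Array.from(bn.toArray("be", 32)));
};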
So I'm trying to build a resolver for GraphQL which is supposed to return an array of objects. The values for these objects come from a series of TypeORM select operations. When I asked for a response in the GraphQL playground, I only got empty arrays, so I started debugging the resolver using console.logs. The strange thing is that inside the forEach loop the code seems to have the desired result: an array with objects in it. But when I log the same array right before returning it, it is empty:
@Query(() => [response])
async getTeacherFromSubjectName(
@Arg("subjectName") subjectName: string,
): Promise<response[] | undefined> {
const subject = await Subject.findOne({name: subjectName});
const subjectId = subject?.id;
let responseArray: response[] = [];
const qb = await getConnection()
.createQueryBuilder()
.select("teacher")
.from(Teacher, "teacher")
.where(`teacher.subjectId = ${subjectId}`)
.getMany()
qb.forEach(async (teacher) => {
const qb = await getConnection()
.createQueryBuilder()
.select("lectureTime")
.from(LectureTime, "lectureTime")
.where(`lectureTime.teacherId = ${teacher.id}`)
.getMany()
responseArray.push( {
teacher: teacher.name,
lectures: qb,
} );
console.log(responseArray) // [{ teacher: 'Dirceu', lectures: [ [LectureTime] ] }, { teacher: 'Patrícia', lectures: [ [LectureTime], [LectureTime] ] } ]
})
console.log(responseArray) // []
return responseArray;
}
What I get on the console is the following:
link to image
I actually have no idea what is going on here. You can see in the image that the order of the logs is inverted (the log right before the return is circled in blue).
I am certain that it is a silly problem, and if you guys could point it out for me I would be very thankful.
As xadm said in the comments, you are not awaiting the promises you are creating inside forEach. You need to use map instead, so that you have an array of the promises and can then await them all.
// Changed to map so you get the return values
const promises = qb.map(async (teacher) => {
const qb = await getConnection()
.createQueryBuilder()
.select("lectureTime")
.from(LectureTime, "lectureTime")
.where(`lectureTime.teacherId = ${teacher.id}`)
.getMany()
responseArray.push( {
teacher: teacher.name,
lectures: qb,
} );
console.log(responseArray);
});
// Await for all the promises to be completed, regardless of order.
await Promise.all(promises);
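A slightly cleaner variant of the same fix (a sketch; I've renamed the outer query result to teachers for clarity and parameterized the where clause) is to return the objects from map and let Promise.all assemble the array, so there is no shared mutable array at all:
const teachers = await getConnection()
  .createQueryBuilder()
  .select("teacher")
  .from(Teacher, "teacher")
  .where("teacher.subjectId = :subjectId", { subjectId }) // parameterized instead of interpolated
  .getMany();

const responseArray = await Promise.all(
  teachers.map(async (teacher) => {
    const lectures = await getConnection()
      .createQueryBuilder()
      .select("lectureTime")
      .from(LectureTime, "lectureTime")
      .where("lectureTime.teacherId = :id", { id: teacher.id })
      .getMany();
    return { teacher: teacher.name, lectures };
  })
);

return responseArray;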
My custom v3 CAF receiver app successfully plays the first few live & VOD assets. After that, it gets into a state where media commands are being queued because "Load is in progress". It is still (successfully) fetching manifests, but MEDIA_STATUS remains "buffering". The log then shows:
[ 4.537s] [cast.receiver.MediaManager] Load is in progress, media command is being queued.
[ 5.893s] [cast.receiver.MediaManager] Buffering state changed, isPlayerBuffering: true old time: 0 current time: 0
[ 5.897s] [cast.receiver.MediaManager] Sending broadcast status message
CastContext Core event: {"type":"MEDIA_STATUS","mediaStatus":{"mediaSessionId":1,"playbackRate":1,"playerState":"BUFFERING","currentTime":0,"supportedMediaCommands":12303,"volume":{"level":1,"muted":false},"currentItemId":1,"repeatMode":"REPEAT_OFF","liveSeekableRange":{"start":0,"end":20.000999927520752,"isMovingWindow":true,"isLiveDone":false}}}
CastContext MEDIA_STATUS event: {"type":"MEDIA_STATUS","mediaStatus":{"mediaSessionId":1,"playbackRate":1,"playerState":"BUFFERING","currentTime":0,"supportedMediaCommands":12303,"volume":{"level":1,"muted":false},"currentItemId":1,"repeatMode":"REPEAT_OFF","liveSeekableRange":{"start":0,"end":20.000999927520752,"isMovingWindow":true,"isLiveDone":false}}}
Fetch finished loading: GET "(manifest url)".
No errors are shown.
Even after closing and restarting the cast session, the issue remains. The cast device itself has to be rebooted to resolve it. It looks like data is kept between sessions.
It could be important to note that the cast receiver app is not published yet. It is hosted on a local network.
My questions are:
What could be the cause of this stuck behavior?
Is there any session data kept between sessions?
How can I fully reset the cast receiver app without having to restart the cast device?
The receiver app itself is very basic. Other than the license wrapping, it resembles the vanilla example app:
const { cast } = window;
const TAG = "CastContext";
class CastStore {
static instance = null;
error = observable.box();
framerate = observable.box();
static getInstance() {
if (!CastStore.instance) {
CastStore.instance = new CastStore();
}
return CastStore.instance;
}
get debugLog() {
return this.framerate.get();
}
get errorLog() {
return this.error.get();
}
init() {
const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();
playerManager.addEventListener(
cast.framework.events.category.CORE,
event => {
console.log(TAG, "Core event: " + JSON.stringify(event));
}
);
playerManager.addEventListener(
cast.framework.events.EventType.MEDIA_STATUS,
event => {
console.log(TAG, "MEDIA_STATUS event: " + JSON.stringify(event));
}
);
playerManager.addEventListener(
cast.framework.events.EventType.BITRATE_CHANGED,
event => {
console.log(TAG, "BITRATE_CHANGED event: " + JSON.stringify(event));
runInAction(() => {
this.framerate.set(`bitrate: ${event.totalBitrate}`);
});
}
);
playerManager.addEventListener(
cast.framework.events.EventType.ERROR,
event => {
console.log(TAG, "ERROR event: " + JSON.stringify(event));
runInAction(() => {
this.error.set(`Error detailedErrorCode: ${event.detailedErrorCode}`);
});
}
);
// intercept the LOAD request to be able to read in a contentId and get data.
this.loadHandler = new LoadHandler();
playerManager.setMessageInterceptor(
cast.framework.messages.MessageType.LOAD,
loadRequestData => {
this.framerate.set(null);
this.error.set(null);
console.log(TAG, "LOAD message: " + JSON.stringify(loadRequestData));
if (!loadRequestData.media) {
const error = new cast.framework.messages.ErrorData(
cast.framework.messages.ErrorType.LOAD_CANCELLED
);
error.reason = cast.framework.messages.ErrorReason.INVALID_PARAM;
return error;
}
if (!loadRequestData.media.entity) {
// Copy the value from contentId for legacy reasons if needed
loadRequestData.media.entity = loadRequestData.media.contentId;
}
// notify loadMedia
this.loadHandler.onLoadMedia(loadRequestData, playerManager);
return loadRequestData;
}
);
const playbackConfig = new cast.framework.PlaybackConfig();
// intercept license requests & responses
playbackConfig.licenseRequestHandler = requestInfo => {
const challenge = requestInfo.content;
const { castToken } = this.loadHandler;
const wrappedRequest = DrmLicenseHelper.wrapLicenseRequest(
challenge,
castToken
);
requestInfo.content = wrappedRequest;
return requestInfo;
};
playbackConfig.licenseHandler = license => {
const unwrappedLicense = DrmLicenseHelper.unwrapLicenseResponse(license);
return unwrappedLicense;
};
// Duration of buffered media in seconds to start/resume playback after auto-paused due to buffering; default is 10.
playbackConfig.autoResumeDuration = 4;
// Initial bandwidth in bits per second.
playbackConfig.initialBandwidth = 1200000;
context.start({
touchScreenOptimizedApp: true,
playbackConfig: playbackConfig,
supportedCommands: cast.framework.messages.Command.ALL_BASIC_MEDIA
});
}
}
The LoadHandler optionally adds a proxy (I'm using a cors-anywhere proxy to remove the origin header) and stores the castToken for license requests:
class LoadHandler {
CORS_USE_PROXY = true;
CORS_PROXY = "http://192.168.0.127:8003";
castToken = null;
onLoadMedia(loadRequestData, playerManager) {
if (!loadRequestData) {
return;
}
const { media } = loadRequestData;
// disable cors for local testing
if (this.CORS_USE_PROXY) {
media.contentId = `${this.CORS_PROXY}/${media.contentId}`;
}
const { customData } = media;
if (customData) {
const { licenseUrl, castToken } = customData;
// install cast token
this.castToken = castToken;
// handle license URL
if (licenseUrl) {
const playbackConfig = playerManager.getPlaybackConfig();
playbackConfig.licenseUrl = licenseUrl;
const { contentType } = loadRequestData.media;
// Dash: "application/dash+xml"
playbackConfig.protectionSystem = cast.framework.ContentProtection.WIDEVINE;
// disable cors for local testing
if (this.CORS_USE_PROXY) {
playbackConfig.licenseUrl = `${this.CORS_PROXY}/${licenseUrl}`;
}
}
}
}
}
The DrmLicenseHelper wraps the license request to add the castToken and base64-encodes the whole thing. The license response is base64-decoded and unwrapped later on:
export default class DrmLicenseHelper {
static wrapLicenseRequest(challenge, castToken) {
const wrapped = {};
wrapped.AuthToken = castToken;
wrapped.Payload = fromByteArray(new Uint8Array(challenge));
const wrappedJson = JSON.stringify(wrapped);
const wrappedLicenseRequest = fromByteArray(
new TextEncoder().encode(wrappedJson)
);
return wrappedLicenseRequest;
}
static unwrapLicenseResponse(license) {
try {
const responseString = String.fromCharCode.apply(String, license);
const responseJson = JSON.parse(responseString);
const rawLicenseBase64 = responseJson.license;
const decodedLicense = toByteArray(rawLicenseBase64);
return decodedLicense;
} catch (e) {
return license;
}
}
}
The handler for cast.framework.messages.MessageType.LOAD should always return one of:
the (possibly modified) loadRequestData,
a promise for the (possibly modified) loadRequestData, or
null to discard the load request (I'm not 100% sure this works for load requests).
If you do not do this, the load request stays in the queue and any new request is queued after the initial one.
In your handler, you return an error if !loadRequestData.media, which will get you into that state. Another possibility is an exception thrown inside the load request handler, which will also get you into that state.
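To illustrate, a defensive shape for the interceptor (a sketch of the rules above, not the asker's exact logic) would be:
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.LOAD,
  loadRequestData => {
    try {
      if (!loadRequestData.media) {
        // Discard the request instead of returning an error,
        // so the queue is not left blocked.
        return null;
      }
      // ... modify loadRequestData as needed ...
      return loadRequestData;
    } catch (e) {
      // An uncaught exception here leaves the load "in progress" forever.
      return null;
    }
  }
);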
We took a different approach and send everything possible through sendMessage; when we load something, we create a new cast.framework.messages.LoadRequestData() which we load with playerManager.load(loadRequest).
But I guess you might be testing this on an integrated Chromecast; we see these problems as well!?
I suggest that you do one or more of the following:
Enable gzip compression on all responses!
Stop playback with playerManager.stop() (maybe in the interceptor?)
Change how the licenseUrl is set
How we set licenseUrl:
playerManager.setMediaPlaybackInfoHandler(
  (loadRequestData, playbackConfig) => {
    playbackConfig.licenseUrl = loadRequestData.customData.licenseUrl;
    return playbackConfig;
  }
);
We have a .NET Core 2.1 AWS Lambda that I'm trying to hook into our existing logging system.
I'm trying to log through Serilog using a UDP sink to our logstash instance, for ingestion into our ElasticSearch logging database that is hosted on a private VPC. Running locally through a console, it logs fine, both to the console itself and through UDP into Elastic. However, when it runs as a lambda, it only logs to the console (i.e. CloudWatch) and doesn't output anything indicating that anything is wrong. Possibly because UDP is stateless?
NuGet packages and versions:
Serilog 2.7.1
Serilog.Sinks.Udp 5.0.1
Here is the logging code we're using:
public static void Configure(string udpHost, int udpPort, string environment)
{
var udpFormatter = new JsonFormatter(renderMessage: true);
var loggerConfig = new LoggerConfiguration()
.Enrich.FromLogContext()
.MinimumLevel.Information()
.Enrich.WithProperty("applicationName", Assembly.GetExecutingAssembly().GetName().Name)
.Enrich.WithProperty("applicationVersion", Assembly.GetExecutingAssembly().GetName().Version.ToString())
.Enrich.WithProperty("tags", environment);
loggerConfig
.WriteTo.Console(outputTemplate: "[{Level:u}]: {Message}{NewLine}{Exception}")
.WriteTo.Udp(udpHost, udpPort, udpFormatter);
var logger = loggerConfig.CreateLogger();
Serilog.Log.Logger = logger;
Serilog.Debugging.SelfLog.Enable(Console.Error);
}
// this is output in the console from the lambda, but doesn't appear in the Database from the lambda
// when run locally, appears in both
Serilog.Log.Logger.Information("Hello from Serilog!");
...
// at end of lambda
Serilog.Log.CloseAndFlush();
And here is our UDP input on logstash:
udp {
port => 5000
tags => [ 'systest', 'serilog-nested' ]
codec => json
}
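For reference, with renderMessage: true the Serilog JsonFormatter emits one JSON object per event, shaped roughly like this (a hand-written example, not captured output), which the json codec above should parse:
{
  "Timestamp": "2019-01-01T12:00:00.0000000+00:00",
  "Level": "Information",
  "MessageTemplate": "Hello from Serilog!",
  "RenderedMessage": "Hello from Serilog!",
  "Properties": {
    "applicationName": "MyLambda",
    "applicationVersion": "1.0.0.0",
    "tags": "systest"
  }
}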
Does anyone know how I might go about resolving this, or even just how to see what specifically is wrong, so that I can start to find a solution?
Things tried so far include:
Pinging logstash from the lambda - impossible, lambda doesn't have ICMP
Various attempts to get the UDP sink to output errors, as seen above. Even putting in a completely fake address yields no error, though
Adding the lambda to a VPC from which I know logging is possible
Sleeping at the end of the lambda so that the logs have time to go through before the lambda exits
Checking the logstash logs to see if anything looks odd. It doesn't, really, and the fact that local runs get through fine makes me think it's not that
Using UDP directly (see the sketch after this list). It doesn't seem to reach the server. I'm not sure if that's connectivity issues or just UDP itself from a lambda
Lots of cursing and swearing
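For completeness, the direct UDP test mentioned in the list can be done with a few lines of Node's dgram module (a sketch; the hostname and port are placeholders for your logstash endpoint):
const dgram = require("dgram");

const socket = dgram.createSocket("udp4");
const message = Buffer.from(JSON.stringify({ message: "udp connectivity test" }));

// A successful callback only means the OS accepted the packet;
// UDP gives no delivery guarantee, so also watch the logstash side.
socket.send(message, 5000, "logstash.example.internal", (err) => {
  if (err) console.error("send failed:", err);
  else console.log("datagram handed off to the OS");
  socket.close();
});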
In line with my comment above, you can create a log subscription and stream to ES like so. I'm aware that this is NodeJS so it's not quite the right answer, but you might be able to figure it out from here:
/* eslint-disable */
// Eslint disabled as this is adapted AWS code.
const zlib = require('zlib')
const { Client } = require('@elastic/elasticsearch')
const esClient = new Client({ ES_CLUSTER_DETAILS })
/**
* This is an example function to stream CloudWatch logs to ElasticSearch.
* @param event
* @param context
* @param callback
*/
export default (event, context, callback) => {
context.callbackWaitsForEmptyEventLoop = true
const payload = Buffer.from(event.awslogs.data, 'base64')
zlib.gunzip(payload, (err, result) => {
if (err) {
return callback(null, err)
}
const logObject = JSON.parse(result.toString('utf8'))
const elasticsearchBulkData = transform(logObject)
// transform() returns null for control messages; skip those.
if (!elasticsearchBulkData) {
return callback(null, 'control message skipped')
}
const params = { body: [] }
params.body.push(elasticsearchBulkData)
esClient.bulk(params, (err, resp) => {
if (err) {
return callback(err)
}
callback(null, 'success')
})
})
}
function transform(payload) {
if (payload.messageType === 'CONTROL_MESSAGE') {
return null
}
let bulkRequestBody = ''
payload.logEvents.forEach((logEvent) => {
const timestamp = new Date(1 * logEvent.timestamp)
// index name format: cwl-YYYY.MM.DD
const indexName = [
`cwl-${process.env.NODE_ENV}-${timestamp.getUTCFullYear()}`, // year
(`0${timestamp.getUTCMonth() + 1}`).slice(-2), // month
(`0${timestamp.getUTCDate()}`).slice(-2), // day
].join('.')
const source = buildSource(logEvent.message, logEvent.extractedFields)
source['@id'] = logEvent.id
source['@timestamp'] = new Date(1 * logEvent.timestamp).toISOString()
source['@message'] = logEvent.message
source['@owner'] = payload.owner
source['@log_group'] = payload.logGroup
source['@log_stream'] = payload.logStream
const action = { index: {} }
action.index._index = indexName
action.index._type = 'lambdaLogs'
action.index._id = logEvent.id
bulkRequestBody += `${[
JSON.stringify(action),
JSON.stringify(source),
].join('\n')}\n`
})
return bulkRequestBody
}
function buildSource(message, extractedFields) {
if (extractedFields) {
const source = {}
for (const key in extractedFields) {
if (extractedFields.hasOwnProperty(key) && extractedFields[key]) {
const value = extractedFields[key]
if (isNumeric(value)) {
source[key] = 1 * value
continue
}
const jsonSubString = extractJson(value)
if (jsonSubString !== null) {
source[`$${key}`] = JSON.parse(jsonSubString)
}
source[key] = value
}
}
return source
}
const jsonSubString = extractJson(message)
if (jsonSubString !== null) {
return JSON.parse(jsonSubString)
}
return {}
}
function extractJson(message) {
const jsonStart = message.indexOf('{')
if (jsonStart < 0) return null
const jsonSubString = message.substring(jsonStart)
return isValidJson(jsonSubString) ? jsonSubString : null
}
function isValidJson(message) {
try {
JSON.parse(message)
} catch (e) { return false }
return true
}
function isNumeric(n) {
return !isNaN(parseFloat(n)) && isFinite(n)
}
One of my colleagues helped me get most of the way there, and then I managed to figure out the last bit.
I updated Serilog.Sinks.Udp to 6.0.0
I updated the UDP setup code to use the AddressFamily.InterNetwork specifier, which I don't believe was available in 5.0.1.
I removed enriching our log messages with "tags", since I believe its presence on the UDP endpoint somehow caused some kind of clash; I've seen it stop logging without a trace before.
And voila!
Here's the new logging setup code:
loggerConfig
.WriteTo.Udp(udpHost, udpPort, AddressFamily.InterNetwork, udpFormatter)
.WriteTo.Console(outputTemplate: "[{Level:u}]: {Message}{NewLine}{Exception}");
In react-navigation, I wanted to use TabRouter, but on this.props.navigation.goBack() I wanted it to go to the previous tab.
Does anyone have any idea how to go to the previous tab? I was hoping it would be as simple as setting backBehavior: 'previousTab'.
I have a hacked solution here, but it's a bad hack, as I had to modify the lib files:
I was only able to accomplish this by setting backBehavior to initialRoute, and then on my TabRouter adding a custom getStateForAction like this:
const defaultGetStateForAction = HubNavigator.router.getStateForAction;
HubNavigator.router.getStateForAction = function(action, state) {
switch (action.type) {
case NavigationActions.INIT: {
if (!this.TAB_HISTORY) this.TAB_HISTORY = [];
this.TAB_HISTORY.length = 0;
this.TAB_HISTORY.push({ index:ROUTE_INDEX[INITIAL_ROUTE_NAME], params:undefined }); // I don't think INIT ever has params
break;
}
case NavigationActions.NAVIGATE: {
const { routeName } = action;
this.TAB_HISTORY.push({ index:ROUTE_INDEX[routeName], params:action.params });
break;
}
case NavigationActions.BACK: {
if (this.TAB_HISTORY.length === 1) {
BackHandler.exitApp();
return null;
} else {
const current = this.TAB_HISTORY.pop();
const previous = this.TAB_HISTORY[this.TAB_HISTORY.length - 1];
const default_ = defaultGetStateForAction(action, state, ()=>{
console.log('returning previous index of:', previous.index);
return previous.index
});
default_.index = previous.index;
default_.routes[previous.index].params = previous.params;
return default_;
}
}
}
return defaultGetStateForAction(action, state);
}
What I do is, on NavigationActions.BACK I modify the returned object index to have the previous index, which I hold in the array this.TAB_HISTORY.
However, when I start the app, switch from the initial tab to tab 2, then from tab 2 back to the initial tab, pressing "back" does nothing. This is because activeTabIndex is always set to initialRouteIndex here - https://github.com/react-community/react-navigation/blob/5e075e1c31d5e6192f2532a815b1737fa27ed65b/src/routers/TabRouter.js#L138
So, as you can see in my fix above, I pass a third argument to defaultGetStateForAction which returns the index, but I had to modify react-navigation/src/routers/TabRouter.js for this, which is not what I want to do.
Here is my HubNavigator in case you want to see that:
const ROUTE_INDEX = { Players: 0, Girls: 1, Customers: 2, Assets: 3 };
const INITIAL_ROUTE_NAME = 'Players';
const HubNavigator = TabNavigator(
{
Players: { screen:ScreenPlayers },
Girls: { screen:ScreenGirls },
Customers: { screen:ScreenCustomers },
Assets: { screen:ScreenAssets }
},
{
initialRouteName: INITIAL_ROUTE_NAME,
backBehavior: 'initialRoute',
cardStyle: styles.card,
order: Object.entries(ROUTE_INDEX).sort(([,indexA], [,indexB]) => indexA - indexB).map(([routeName]) => routeName),
}
)
I'm trying to build a simple Azure Function that is based on the ImageResizer example, but uses the Microsoft Cognitive Services Computer Vision API to do the resize.
I have working code for the Computer Vision API which I have ported into the Azure function.
It all seems to work OK (no errors), but my output blob never gets saved and never shows up in the storage container. I'm not sure what I'm doing wrong, as there are no errors to work with.
My CSX (C# function code) is as follows
using System;
using System.Text;
using System.Net.Http;
using System.Net.Http.Headers;
public static void Run(Stream original, Stream thumb, TraceWriter log)
{
//log.Verbose($"C# Blob trigger function processed: {myBlob}. Dimensions");
string _apiKey = "PutYourComputerVisionApiKeyHere";
string _apiUrlBase = "https://api.projectoxford.ai/vision/v1.0/generateThumbnail";
string width = "100";
string height = "100";
bool smartcropping = true;
using (var httpClient = new HttpClient())
{
//setup HttpClient
httpClient.BaseAddress = new Uri(_apiUrlBase);
httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", _apiKey);
//setup data object
HttpContent content = new StreamContent(original);
content.Headers.ContentType = new MediaTypeWithQualityHeaderValue("application/octet-stream");
// Request parameters
var uri = $"{_apiUrlBase}?width={width}&height={height}&smartCropping={smartcropping}";
//make request
var response = httpClient.PostAsync(uri, content).Result;
//log result
log.Verbose($"Response: IsSucess={response.IsSuccessStatusCode}, Status={response.ReasonPhrase}");
//read response and write to output stream
thumb = new MemoryStream(response.Content.ReadAsByteArrayAsync().Result);
}
}
My function JSON is as follows
{
"bindings": [
{
"path": "originals/{name}",
"connection": "thumbnailgenstorage_STORAGE",
"name": "original",
"type": "blobTrigger",
"direction": "in"
},
{
"path": "thumbs/%rand-guid%",
"connection": "thumbnailgenstorage_STORAGE",
"type": "blob",
"name": "thumb",
"direction": "out"
}
],
"disabled": false
}
My Azure storage account is called 'thumbnailgenstorage' and it has two containers named 'originals' and 'thumbs'. The storage account key is KGdcO+hjvARQvSwd2rfmdc+rrAsK0tA5xpE4RVNmXZgExCE+Cyk4q0nSiulDwvRHrSAkYjyjVezwdaeLCIb53g==.
I'm perfectly happy for people to use my keys to help me figure this out! :)
I got this working now. I was writing the output stream incorrectly: assigning a new MemoryStream to the thumb parameter only replaces the local reference, so the runtime never sees those bytes; you have to write to the bound output stream instead.
This solution is an Azure Function which triggers on the arrival of a blob in an Azure Blob Storage container called 'originals', then uses the Computer Vision API to smartly resize the image and store it in a different blob container called 'thumbs'.
Here is the working CSX (c# script):
using System;
using System.Text;
using System.Net.Http;
using System.Net.Http.Headers;
public static void Run(Stream original, Stream thumb, TraceWriter log)
{
int width = 320;
int height = 320;
bool smartCropping = true;
string _apiKey = "PutYourComputerVisionApiKeyHere";
string _apiUrlBase = "https://api.projectoxford.ai/vision/v1.0/generateThumbnail";
using (var httpClient = new HttpClient())
{
httpClient.BaseAddress = new Uri(_apiUrlBase);
httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", _apiKey);
using (HttpContent content = new StreamContent(original))
{
//get response
content.Headers.ContentType = new MediaTypeWithQualityHeaderValue("application/octet-stream");
var uri = $"{_apiUrlBase}?width={width}&height={height}&smartCropping={smartCropping.ToString()}";
var response = httpClient.PostAsync(uri, content).Result;
var responseBytes = response.Content.ReadAsByteArrayAsync().Result;
//write to output thumb
thumb.Write(responseBytes, 0, responseBytes.Length);
}
}
}
Here is the integration JSON
{
"bindings": [
{
"path": "originals/{name}",
"connection": "thumbnailgenstorage_STORAGE",
"name": "original",
"type": "blobTrigger",
"direction": "in"
},
{
"path": "thumbs/{name}",
"connection": "thumbnailgenstorage_STORAGE",
"name": "thumb",
"type": "blob",
"direction": "out"
}
],
"disabled": false
}