Cannot fire BigCommerce webhooks - Ruby

So far I've managed to create two webhooks using their official gem (https://github.com/bigcommerce/bigcommerce-api-ruby) with the following events:
store/order/statusUpdated
store/app/uninstalled
The destination URL is a localhost tunnel managed by ngrok (the https version).
status_update_hook = Bigcommerce::Webhook.create(
  connection: connection,
  headers: { is_active: true },
  scope: 'store/order/statusUpdated',
  destination: 'https://myapp.ngrok.io/bigcommerce/notifications'
)
uninstall_hook = Bigcommerce::Webhook.create(
  connection: connection,
  headers: { is_active: true },
  scope: 'store/app/uninstalled',
  destination: 'https://myapp.ngrok.io/bigcommerce/notifications'
)
The webhooks seem to be active and correctly created, as I can retrieve and list them:
Bigcommerce::Webhook.all(connection: connection)
I manually created an order in my store dashboard, but no matter which status I change it to, or how many times, no notification is fired. Am I missing something?

The exception that I'm seeing in the logs is:
ExceptionMessage: true is not a valid header value
The "is-active" flag should be sent as part of the request body--your headers, if you choose to include them, would be an arbitrary key value pair that you can check at runtime to verify the hook's origin.
Here's an example request body:
{
  "scope": "store/order/*",
  "headers": {
    "X-Custom-Auth-Header": "{secret_auth_password}"
  },
  "destination": "https://app.example.com/orders",
  "is_active": true
}
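Translated back to the Ruby gem, a minimal sketch of the corrected call might look like this (assuming the gem forwards these keyword arguments into the JSON request body; the custom header name and value are illustrative placeholders):
status_update_hook = Bigcommerce::Webhook.create(
  connection: connection,
  scope: 'store/order/statusUpdated',
  destination: 'https://myapp.ngrok.io/bigcommerce/notifications',
  is_active: true,                                  # body field, not a header
  headers: { 'X-Custom-Auth-Header' => 'secret' }   # optional, for origin checks
)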
Hope this helps!

Related

How do I create an HttpOrigin for CloudFront to use a Lambda function URL?

Trying to set up a CloudFront behaviour to use a Lambda function URL with code like this:
this.distribution = new Distribution(this, id + "Distro", {
  comment: id + "Distro",
  defaultBehavior: {
    origin: new S3Origin(s3Site),
    viewerProtocolPolicy: ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
  },
  additionalBehaviors: {
    [`api-prd-v2/*`]: {
      compress: true,
      originRequestPolicy: originRequestPolicy,
      origin: new HttpOrigin(functionUrl.url, {
        protocolPolicy: OriginProtocolPolicy.HTTPS_ONLY,
        originSslProtocols: [OriginSslPolicy.TLS_V1_2],
      }),
      allowedMethods: AllowedMethods.ALLOW_ALL,
      viewerProtocolPolicy: ViewerProtocolPolicy.HTTPS_ONLY,
      cachePolicy: apiCachePolicy,
    },
  },
});
The functionUrl object is created in a different stack and passed into the CloudFormation stack. The definition looks like:
this.functionUrl = new FunctionUrl(this, 'LambdaApiUrl', {
  function: this.lambdaFunction,
  authType: FunctionUrlAuthType.NONE,
  cors: {
    allowedOrigins: ["*"],
    allowedMethods: [HttpMethod.GET, HttpMethod.POST],
    allowCredentials: true,
    maxAge: Duration.minutes(1)
  }
});
The above code fails because "The parameter origin name cannot contain a colon".
Presumably, this is because functionUrl.url evaluates to something like https://xxx.lambda-url.ap-southeast-2.on.aws/ (note the https://) whereas the HttpOrigin parameter should just be the domain name like xxx.lambda-url.ap-southeast-2.on.aws.
I can't just write code to hack the URL up though (i.e. functionUrl.url.replace("https://", "")), because when my code executes, the value of the url property is a token like ${Token[TOKEN.350]}.
The function url is working properly: if I hard-code the HttpOrigin to the function url's value (i.e. like xxx.lambda-url.ap-southeast-2.on.aws) - it works fine.
How do I use CDK code to setup the reference from Cloudfront to the function url?
I'm using aws-cdk version 2.21.1.
There is an open issue to add support: https://github.com/aws/aws-cdk/issues/20090
Use CloudFormation Intrinsic Functions to parse the url string:
cdk.Fn.select(2, cdk.Fn.split('/', functionUrl.url));
// -> 7w3ryzihloepxxxxxxxapzpagi0ojzwo.lambda-url.us-east-1.on.aws
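Applied to the behaviour from the question, a sketch might look like the following. Fn.split and Fn.select defer the parsing to CloudFormation deploy time, so they work on unresolved tokens where string methods like .replace() cannot:
// Splitting "https://xxx.lambda-url.ap-southeast-2.on.aws/" on "/" yields
// ["https:", "", "xxx.lambda-url.ap-southeast-2.on.aws", ""], so index 2
// is the bare domain name that HttpOrigin expects.
const functionUrlDomain = cdk.Fn.select(2, cdk.Fn.split('/', functionUrl.url));

// ...then, inside additionalBehaviors:
origin: new HttpOrigin(functionUrlDomain, {
  protocolPolicy: OriginProtocolPolicy.HTTPS_ONLY,
  originSslProtocols: [OriginSslPolicy.TLS_V1_2],
}),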

YouTube upload API: Videos automatically private

I'm using the YouTube API to upload videos.
The videos are uploaded as public, but after the upload they are automatically changed to private.
I've done the approval from the developer console, so the youtube.upload scope has been approved already. I'm only using that scope to upload and nothing else.
Any ideas? I tried to contact support, but no luck :(
Edit:
My application has been verified and I have the correct scope. The code:
let oauth = Youtube.authenticate({
  type: "oauth",
  client_id: "client",
  client_secret: "secret",
  redirect_url: "redirect",
});
oauth.setCredentials(token);
Youtube.videos.insert(
  {
    resource: {
      snippet: {
        title: `blabla`,
        description: `blabla`,
      },
      status: {
        privacyStatus: "public",
        madeForKids: true,
      },
    },
    part: "snippet,status",
    media: {
      body: fs.createReadStream(`blabla.mp4`),
    },
  },
  (err, data) => {
    console.log("Done.");
  }
);
Videos: insert
When your application is still in the testing phase and has not yet been verified by Google, all videos uploaded will be set to private.
Once you have set your application to published and gone through the verification process, you will be able to upload public videos.
YouTube API Services - Audit and Quota Extension Form
YouTube API Services Terms of Service
July 28, 2020
Update:
An audit is not verification; these are two different things. You need to apply for an audit: YouTube API Services - Audit and Quota Extension Form

Using SocketIo Manager with a default URL

My goal is to add a token in the Socket.IO reconnection from the client (it works fine on the first connection, but the query is null on reconnection if the server restarted while the client stayed on).
The documentation indicates I need to use the Manager to customize the reconnection behavior (and add a query parameter).
However, I'm having trouble finding out how to use this Manager: I can't find a way to connect to the server.
What I was using without Manager (works fine):
this.socket = io({
  query: {
    token: 'abc',
  }
});
Version with the Manager:
const manager = new Manager(window.location, {
  hostname: "localhost",
  path: "/socket.io",
  port: "8080",
  query: {
    auth: "123"
  }
});
So I tried many approaches: passing nothing, '', 'http://localhost:8080', 'http://localhost:8080/socket.io', and adding these lines to the options:
hostname: "localhost",
path: "/socket.io",
port: "8080"
But I couldn't connect.
The documentation indicates the default URL is:
url (String) (defaults to window.location)
For some reason, using window.location as the URL refreshes the page infinitely, no matter whether I pass it to the io() factory or to the new Manager.
I am using socket.io-client 3.0.3.
Could someone explain what I'm doing wrong?
Thanks
Updating to 3.0.4 solved the initial problem, which was being able to send the token in the initial query.
I also found this code in the docs, which solves the problem:
this.socket.on('reconnect_attempt', () => {
  this.socket.io.opts.query = {
    token: 'fgh'
  }
});
However, it doesn't solve the problem of the Manager that just doesn't work. I feel like it should be removed from the doc. I illustrated the problem in this repo:
https://github.com/Yvanovitch/socket.io/blob/master/examples/chat/public/main.js
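For reference, here is a minimal sketch of how a Manager is meant to be used in socket.io-client v3, based on its docs. A Manager only manages the underlying connection and never emits application-level events itself; you have to ask it for a Socket on a namespace before anything flows, which may be why it appeared not to connect:
import { Manager } from "socket.io-client";

const manager = new Manager("http://localhost:8080", {
  query: {
    token: 'abc'
  }
});
// Request a Socket for the default namespace to actually communicate.
const socket = manager.socket("/");
socket.on("connect", () => console.log("connected"));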

Azure Form Recognizer training not finding data

I'm trying to train a Form Recognizer using the browser API console (https://eastus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api/operations/TrainCustomModel/console). I've uploaded training images to a container and created an SAS. The browser API console generates the following HTTP request:
POST https://eastus.api.cognitive.microsoft.com/formrecognizer/v1.0-preview/custom/train?source=https://pythonimages.blob.core.windows.net/?sv=2019-02-02&ss=bfqt&srt=sco&sp=rl&se=2020-01-22T00:23:33Z&st=2020-01-21T16:23:33Z&spr=https&sig=••••••••••••••••••••••••••••••••&prefix=images HTTP/1.1
Host: eastus.api.cognitive.microsoft.com
Content-Type: application/json
Ocp-Apim-Subscription-Key: ••••••••••••••••••••••••••••••••
{
  "source": "string",
  "sourceFilter": {
    "prefix": "string",
    "includeSubFolders": true
  }
}
However, the answer I get back is
Transfer-Encoding: chunked
x-envoy-upstream-service-time: 4
apim-request-id: 5ad37aa2-e251-4b61-98ae-023930b47d27
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
x-content-type-options: nosniff
Date: Tue, 21 Jan 2020 16:25:03 GMT
Content-Type: application/json; charset=utf-8
{
  "error": {
    "code": "1004",
    "message": "Dataset path must be relative to local input mount path '/input' if local data is referenced."
  }
}
I don't understand why it seems to be looking for data locally. I've experimented with the SAS, e.g. including the container name (images) in the blob http address rather than as a query parameter, but no success so far.
I've also tried the Python/REST path (described here: https://learn.microsoft.com/en-gb/azure/cognitive-services/form-recognizer/quickstarts/python-train-extract-v1), which results in a different error:
Response status code: 408
Response body: {'error': {'code': '1011', 'innerError': {'requestId': 'e7f9ef9f-97bc-4b6a-86f3-0b29c9591c87'}, 'message': 'The operation exceeded allowed time limit and was canceled. The common reasons are that the data source is too large or contains unsupported content. Please check that your request conforms to service limits and retry with redacted data source.'}}
For completeness, the code I use is as follows (key/signature starred out):
########### Python Form Recognizer Train #############
from requests import post as http_post

# Endpoint URL
base_url = r"https://markusformsrecognizer.cognitiveservices.azure.com/" + "/formrecognizer/v1.0-preview/custom"
source = r"https://pythonimages.blob.core.windows.net/images?sv=2019-02-02&ss=bfqt&srt=sco&sp=rl&se=2020-01-22T15:37:26Z&st=2020-01-22T07:37:26Z&spr=https&sig=*********************************"
headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '*********************************'
}
url = base_url + "/train"
body = {"source": source}
try:
    resp = http_post(url=url, json=body, headers=headers)
    print("Response status code: %d" % resp.status_code)
    print("Response body: %s" % resp.json())
except Exception as e:
    print(str(e))
For error code 1004, follow the below to get the source path containing the training documents and pass it as the value of the source key.
{
  "source": "string",
  "sourceFilter": {
    "prefix": "string",
    "includeSubFolders": true
  }
}
Replace the source value with the Azure Blob storage container's shared access signature (SAS) URL. To retrieve the SAS URL, open the Microsoft Azure Storage Explorer, right-click your container, and select Get shared access signature.
Make sure the Read and List permissions are checked, and click Create.
Then copy the value in the URL section. It should have the form:
https://<storage account>.blob.core.windows.net/<container name>?<SAS value>
Please use the new Form Recognizer v2.0 release; it is an async API and enables training on large data sets and analyzing large documents: https://aka.ms/form-recognizer/api
Quick start: https://learn.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-train-extract
To get started with Form Recognizer, please log in to the Azure Portal using this link to create a Form Recognizer resource (for v2.0 (preview), please use the West US 2 or West Europe regions).
Try removing the string value from the prefix property:
{
  "source": "string",
  "sourceFilter": {
    "prefix": "",
    "includeSubFolders": true
  }
}
The Python Quick Start code for version 2.0 seems to be working; at least I don't get any errors anymore. I'm now feeling slightly silly that I didn't try this earlier. The API (web-browser) console, linked from the Quick Start page of the Form Recognizer, seems to automatically assume I want to use version 1.0, and there's no way to change that (or perhaps I've just overlooked something). Hence I assumed I'd been allocated a v1.0 trial, and therefore that's what I used when I tried the Python Quick Start the first time around.
Instead of using just the SAS token in the "source" request parameter of the API POST call, use the complete URL of the container followed by the SAS token.
For example:
https://<storage account>.blob.core.windows.net/<container name>?<SAS token>
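A minimal sketch of that fix against the question's Python code, assuming the v2.0 train path /formrecognizer/v2.0/custom/models (the subscription key and SAS token below are placeholders):
from requests import post as http_post

endpoint = "https://markusformsrecognizer.cognitiveservices.azure.com"
url = endpoint + "/formrecognizer/v2.0/custom/models"  # assumed v2.0 async train endpoint

# Full container URL followed by the SAS token, not the token alone
source = "https://pythonimages.blob.core.windows.net/images?<SAS token>"

headers = {
    "Content-Type": "application/json",
    "Ocp-Apim-Subscription-Key": "<subscription key>",
}
resp = http_post(url=url, json={"source": source}, headers=headers)
# v2.0 is asynchronous: a 201 response carries the new model's URL in the
# Location header rather than returning results in the body.
print(resp.status_code, resp.headers.get("Location"))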

Why isn't fineUploader sending an x-amz-credential property among the request conditions?

My server-side policy signing code is failing on this line:
credentialCondition = conditions[i]["x-amz-credential"];
(Note that this code is taken from the Node example authored by the FineUploader maintainer. I have only changed it by forcing it to use version 4 signing without checking for a version parameter.)
So it's looking for an x-amz-credential parameter in the request body, among the other conditions, but it isn't there. I checked the request in the dev tools, and the conditions look like this:
0: {acl: "private"}
1: {bucket: "menu-translator"}
2: {Content-Type: "image/jpeg"}
3: {success_action_status: "200"}
4: {key: "4cb34913-f9dc-40db-aecc-a9fdf518a334.jpg"}
5: {x-amz-meta-qqfilename: "f86d03fb-1b62-4073-9458-17e1dfd8b3ae.jpg"}
As you can see, no credentials. Here is my client-side options code:
var uploader = new qq.s3.FineUploader({
  debug: true,
  element: document.getElementById('uploader'),
  request: {
    endpoint: 'menu-translator.s3.amazonaws.com',
    accessKey: 'mykey'
  },
  signature: {
    endpoint: '/s3signaturehandler'
  },
  iframeSupport: {
    localBlankPagePath: '/views/blankForIE9Support.html'
  },
  cors: {
    expected: true,
    sendCredentials: true
  },
  uploadSuccess: {
    endpoint: 'success.html'
  }
});
What am I missing here?
I fixed this by altering my options code in one small way:
signature: {
  endpoint: '/s3signaturehandler',
  version: 4
},
I specified version: 4 in the signature section. Not that this is documented anywhere, but apparently the client-side code uses this as a flag for whether or not to send along the key information needed by the server.
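With that flag set, the policy conditions posted to the signature endpoint should include an x-amz-credential entry, so the server-side lookup that was failing has something to find. A hypothetical sketch of that lookup (the policy variable and its shape follow the conditions array shown in the question, not the actual Node example):
let credentialCondition = null;
for (const condition of policy.conditions) {
  // Each condition is an object with a single key, e.g.
  // { "x-amz-credential": "AKIA.../20200121/us-east-1/s3/aws4_request" }
  if (condition["x-amz-credential"]) {
    credentialCondition = condition["x-amz-credential"];
    break;
  }
}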
