I am facing a problem:
When I create an asset from my Node.js app and then request the asset list, the new asset does not appear in the list yet. So I thought it would be better to subscribe to an event to get updates, but I can't find how to emit an event on asset creation.
You can use the runtime emit method to emit events from TP functions. This is described here:
https://hyperledger.github.io/composer/business-network/publishing-events.html
https://hyperledger.github.io/composer/applications/subscribing-to-events.html
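As a sketch, the event can be emitted from the same transaction processor function that creates the asset. The namespace, event, and asset names below are assumptions for illustration; getAssetRegistry, getFactory, and emit are globals provided by the Composer runtime:

```javascript
/**
 * Hypothetical transaction processor function (model names are illustrative).
 * @param {org.example.CreateAsset} tx - the incoming transaction
 * @transaction
 */
async function createAsset(tx) {
  // Add the new asset to its registry.
  const registry = await getAssetRegistry('org.example.MyAsset');
  await registry.add(tx.asset);

  // Emit an event so client applications subscribed via
  // businessNetworkConnection.on('event', ...) are notified of the creation.
  const factory = getFactory();
  const event = factory.newEvent('org.example', 'AssetCreated');
  event.assetId = tx.asset.assetId;
  emit(event);
}
```

On the Node.js side you would then listen with businessNetworkConnection.on('event', ...) instead of polling the asset list.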
I have added an Event Subscription on my Storage Account that triggers on the Blob Created event. However, I want the Event Grid subscription to fire only when a blob is uploaded to a particular container/directory, or only when the blob name starts with a given prefix. I have tried the subject filters available in Event Grid:
Filter value
/blobServices/default/containers/MyContainer/blobs/Test
/blobServices/default/containers/MyContainer/
but neither filter seems to work: my Event Grid subscription does not deliver any events.
P.S. I have read this documentation:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-event-overview?WT.mc_id=AZ-MVP-5003556#filtering-events and I am still unable to achieve what I require.
Make sure your Storage Account is a general-purpose v2 account (this is really important, so check it).
If it is v1, you can upgrade it in the Storage Account settings.
Then use this as the filter for your specific container:
/blobServices/default/containers/MyContainer/
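If the account type is correct, the same prefix filter can also be applied when creating the subscription from the Azure CLI. A sketch, with all resource names and IDs below as placeholders:

```shell
# Placeholder names throughout; --subject-begins-with restricts delivery
# to blobs created under the given container (and optional blob prefix).
az eventgrid event-subscription create \
  --name my-blob-subscription \
  --source-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
  --endpoint "<webhook-endpoint>" \
  --included-event-types Microsoft.Storage.BlobCreated \
  --subject-begins-with "/blobServices/default/containers/MyContainer/"
```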
I am trying to figure out when and how the .queued event in DreamFactory is fired.
From the DreamFactory article
https://blog.dreamfactory.com/queueing-with-dreamfactory-scripting/
there are three events that can be fired after running a GET on a resource, e.g.:
api/v2/db/_table/<table_name>.get
I understand when the pre-process and post-process events are fired, but I just can't figure out when the .queued event is fired.
Since DreamFactory uses Laravel as its framework, maybe someone can share some insight into how this works.
Starting with release 2.3.0, queued scripts, unlike pre- and post-process scripts, do not and cannot affect the processing of the original API call. Both the request and response of the event are saved along with the script and queued for later execution. Queued scripts are primarily useful for triggering other workflows that need to be done when an event happens, but not necessarily during the processing of the event.
The queued event, when fired, will save the following into a job that is queued for later processing…
the script identifier
the full request and response of the event
a snapshot of the environment at the time of the API call
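As a sketch of what such a queued script might look like (DreamFactory supports Node.js server-side scripting; the service name and fields below are assumptions, and in the real engine event and platform are injected globals rather than parameters):

```javascript
// Hypothetical queued script body, written as a function here for clarity.
// In DreamFactory the scripting engine injects `event` and `platform` as globals.
function handleQueuedEvent(event, platform) {
  // `event` carries the request/response snapshot saved when the job was queued.
  const table = event.resource;                            // e.g. the table name
  const records = (event.response.content.resource) || []; // records returned by the GET

  // Kick off a follow-up workflow; this cannot affect the original API call,
  // which completed long before this job runs.
  platform.api.post('auditsvc/_table/audit_log', {
    resource: [{ table: table, record_count: records.length }],
  });

  return { queued_records: records.length };
}
```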
Kindly refer to the following references for a better picture
https://blog.dreamfactory.com/queueing-with-dreamfactory-scripting/
http://wiki.dreamfactory.com/DreamFactory/Features/Scripting/Event_Scripting#Queued
Thanks
I'm using Strapi together with a static site generator (Gatsby), and I'm trying to automate the "rebuild" process whenever you make any modifications in the CMS content.
I'm trying to use the lifecycle callbacks mentioned in the Strapi documentation to do this: https://strapi.io/documentation/3.x.x/guides/webhooks.html
The problem is that these callbacks are being called multiple times in different models. For example, the "afterUpdate" callback is getting called 5 times for the 5 models I have.
I want to execute the build trigger function only once per change. Is there a way to do that?
This seems to be the correct behavior of Strapi lifecycle callbacks: https://github.com/strapi/strapi/issues/1153
Actually there is no issue here. In fact, when you create an entry, Strapi first creates the entry and then runs an update to handle relations. That's why many events are triggered on entry creation.
The documentation is misleading and I don't think lifecycle methods should be used to trigger the SSG builds.
A better choice I found is to use the ContentManager.js controller, located at plugins/content-manager/controllers/ContentManager.js.
The create, update, and delete functions are called only once per request, so this is a better place to trigger an SSG build:
delete: async ctx => {
  ctx.body = await strapi.plugins['content-manager'].services['contentmanager'].delete(ctx.params, ctx.request.query);

  // This is just a request to another service
  // that triggers the SSG build.
  await build.triggerSSGBuild();
},
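For completeness, build.triggerSSGBuild() is the asker's own helper, not a Strapi API. A minimal sketch of such a helper, assuming the SSG exposes a build webhook (e.g. a Netlify or Gatsby Cloud build hook; the env var name is an assumption) and Node 18+ for the global fetch:

```javascript
// Hypothetical ./build.js helper: POSTs to the SSG's build webhook URL.
// BUILD_HOOK_URL is an assumed environment variable, not a Strapi convention.
async function triggerSSGBuild(hookUrl = process.env.BUILD_HOOK_URL) {
  const res = await fetch(hookUrl, { method: 'POST' });
  if (!res.ok) {
    throw new Error(`Build hook responded with ${res.status}`);
  }
  return res.status;
}

module.exports = { triggerSSGBuild };
```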
I'm in the process of writing an app to interface with a service running on another machine. When I ask this service for some information, this service adds the requested information to a separate queue, and sends a windows message to the calling application (my application) indicating there is a message waiting in this separate queue which needs to be decoded.
The windows message this service sends is a custom message, defined in the service code with some constant int value. I've found examples of creating custom events in wxPython, and of using TryBefore() and TryAfter() to react to these events in specific ways, but I haven't found any way to associate a NewEvent() with an int value so I can identify it when it comes in, much less any way to determine the int value of an incoming message.
Has anyone done this before or know of any functions I'm not aware of? I'm using python 3.6 and wxpython 4.0.
Thanks for your help, everyone.
I think this is what you are looking for: https://wiki.wxpython.org/HookingTheWndProc
When you get the custom message from the hooked WndProc you can either react to it there, or you can turn it into a wx event and send it so it can be caught by binding an event handler like normal. The wx.lib.newevent module has some helpers for creating a custom event class and an event binder. Its use is demonstrated in some of the demo samples and library modules.
When I try to add the trigger, I get the following error:
"There was an error creating the trigger: Configuration is ambiguously defined. Cannot have overlapping suffixes in two rules if the prefixes are overlapping for the same event type."
I'm not sure what's gone wrong here.
One reason could be that some other lambda function previously using the same trigger was deleted. This does not automatically clear the event notification from the S3 side. You have to navigate to the S3 console and manually delete the stale event notifications. Once that is done, you should be able to create the same trigger again for another lambda function.
Navigate to S3 and click on the name of your bucket.
Click on the "Properties" tab.
Scroll down to the advanced properties; in the Events section you should see one or more active notifications.
Click on the square to edit the notifications and delete the stale notification that still references the deleted Lambda function.
Congrats, no more dangling reference!
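The same cleanup can also be done without the console, using the AWS CLI (the bucket name below is a placeholder):

```shell
# Inspect the bucket's current event notifications; stale Lambda entries show up here.
aws s3api get-bucket-notification-configuration --bucket my-bucket

# Overwrite with an empty configuration to clear the dangling notifications.
# Warning: this removes every notification on the bucket, not just the stale one.
aws s3api put-bucket-notification-configuration \
  --bucket my-bucket \
  --notification-configuration '{}'
```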